1. 9

    Most of what I want out of a website is unformatted text and jump links. Given that, the appropriate (already existing) technology to deliver it with the minimum of fuss is not HTML+CSS but gophermaps! I’d love to see people use gopher+gophermaps instead of HTML+HTTP when formatting isn’t necessary, though that’s an even bigger leap than getting people to avoid CMSes for that use case.

    1. 2

      Okay. gopher://i-logout.cz/1/en/bongusta/ and gopher://gopher.black/1/moku-pona are two feeds of gopher based blogs (aka phlogs). Read and enjoy.

      1. 1

        Thanks! I’ve read Alex Schroeder’s phlog, but I wasn’t aware of phlog aggregators.

    1. 2

      This is interesting, but another 20 years of software development continues to prove him wrong.

      The current dominant paradigm is flat, single-ordered lists, and search (perhaps augmented with tags like our dear lobste.rs here).

      This is even more of all the bad stuff he’s railing against at the start of the article, but this is the stuff that works and there are innumerable other approaches dead or dying.

      I suspect that for UIs, less freedom is simpler (one button, one list, one query, one purpose, etc.), and not the other way around.


      For developers, I think he was right, and it’s also what we’ve got today. It’s clearly preferable for developers to have a simple model to work against (like URIs + JSON).

      apt-get install firefox (which unpacks to a resource identifier and a standardized, machine-readable package file) is quite probably as good as it gets. It’s a directed graph instead of an undirected graph like his zipper system, but undirected graphs require an unrealistic (and in my opinion probably harmful) amount of federation between producers of APIs and their consumers.

      1. 7

        When the pitch is “good computing is possible”, “bad computing has dominated” isn’t actually a great counterargument – particularly when the history of so much of it comes down to dumb luck, path dependence, tradeoffs between technical ability & marketing skills, and increasingly fast turnover and the dominance of increasingly inexperienced devs in the industry.

        If you’re trying to suggest that the way things shook out is actually ideal for users – I don’t know how to even start arguing against that. If you’re suggesting that it’s inevitable, then I can’t share that kind of cynicism because it would kill me.

        A better world is possible but nobody ever said it would be easy.

        1. 4

          Your comment is such a good expression of how I feel about the status quo! I was just having a similar discussion in another thread about source code, where I said “text is hugely limiting for working with source code”, and somebody objected with “but look at this grep-like tool, it’s totally enough for me”. I can understand when people raise practical objections to better tools (hard to get traction, hard to interface with existing systems etc.). What’s dispiriting is the refusal to even admit that better tools are possible.

          1. 2

            The mistake is believing that we’re anywhere close to status quo in software development. The tools and techniques used today are completely different from the tools we used 5 and 10 years ago, and are almost unrecognizable next to the tools and techniques used 40 and 50 years ago.

            Some stuff sticks around (keyboards are fast!), but other things change, and there is loads of innovative stuff going on all the time. With reference to visual programming: I recently spent a weekend playing with the Unreal 4 SDK’s block programming language (they call it Blueprints). It has fairly seamless C++ integration, and I was surprised by how nice it was for certain operations… You might also be interested in Scratch.

            Often, these systems are out there, already existing. Sometimes they’re not in the mainstream because of institutional momentum, but more often they’re not in the mainstream because they’re not good (the implementations or the ideas themselves).

            The proof of the pudding is in the eating.

            1. 4

              I don’t think I can agree with this. I’m pretty sure the “write code-compile-run” approach to writing code that is still in incredibly widespread use is over 40 years old. Smalltalk was developed in the 70s. Emacs was developed in the 70s. Turbo Pascal, which had an integrated compiler and editor, was released in mid-80s (more than 30 years ago). CVS was developed in mid-80s (more than 30 years ago). Borland Delphi and Microsoft Visual Studio, which were pretty much full-fledged IDEs, were released in the 90s (20 years ago). I could go on.

              What do we have now that’s qualitatively different from 20 years ago?

              1. 3

                Yup. Some very shallow things have changed but the big ideas in computing really all date to the 70s (and even the ‘radical’ ideas from the 70s still seem radical). I blame the churn: half of the industry has less than 10 years of experience, and degree programs don’t emphasize an in-depth understanding of the variety of ideas (focusing instead on the ‘royal road’ between Turing’s UTM paper and Java, while avoiding important but complicated side-quests into domains like computability).

                Somebody graduating with a CS degree today can be forgiven for thinking that the web is hypertext, because they didn’t really receive an education about it. Likewise, they can be forgiven for thinking (for example) that inheritance is a great way to do code reuse in large java codebases – because they were taught this, despite the fact that everybody knows it isn’t true. And, because more than half their coworkers got fundamentally the same curriculum, they can stay blissfully unaware of all the possible (and actually existing) alternatives – and think that what they work with is anywhere from “all there is” to “the best possible system”.

                1. 1

                  I got your book of essays - interested in your thinking on these topics.

                  1. 1

                    Thanks!

                    There are more details in that, but I’m not sure whether or not they’ll be any more accessible than my explanation here.

                2. 2
                  • Most languages aren’t AOT compiled; there’s usually a JIT in place (if even that; Ruby and Python are run-time languages through and through). These languages did not exist 20 years ago, though their ancestors did (and died, and had some of the good bits resurrected; I use Clojure regularly, which is both modern and a throwback).

                  • Automated testing is very much the norm today, it was a fringe idea 10 years ago and something that you were only crazy enough to do if you were building rockets or missiles or something.

                  • Packages and entire machines are regularly downloaded from the internet and executed in production. I had someone tell me that a docker image was the best way to distribute and run a desktop Linux application.

                  • Smartphones, and the old-as-new challenges of working around vendors locking them down.

                  • The year of the Linux desktop surely came sometime in the last or next 20 years.

                  • Near dominance of Linux in the cloud.

                  • Cloud computing and the tooling around it.

                  • The browser wars ended, though they started to heat up before the 20 year cutoff.

                  • The last days of Moore’s law and the 10 years it took most of the industry to realize the party was over.

                  • CUDA, related, the almost unbelievable advances in computer graphics. (Which we aren’t seeing in web/UI design, again, probably not for lack of trying, but maybe the right design hasn’t been struck)

                  • Success with Neural Networks on some problem sets and their fledgling integration into other parts of the stack. Wondering when or if I’ll see a NN based linter I can drop into Emacs.


                  I could go on too. QWERTY keyboards have been around 150 years because they’re good enough and the alternatives aren’t better than having one standard. I don’t think that the fact that my computer has a QWERTY keyboard on it is an aberration or a failure, and not for lack of experimentation on my own part and on the parts of others. Now if only we could do something about that caps lock key… Oh wait, I remapped it.


                  It’s easy to pick out the greatest hits in computer science from 20, 30, and 40 years ago. There’s a ton of survivorship bias: you don’t point to all of those COBOL-alikes and stack-based languages which have all but vanished from the industry. If it seems like there’s no progress today, it’s only because it’s more difficult to pick the winners without the benefit of hindsight. There might be some innovation still buried that makes two-way linking better than one-way linking, but I don’t know what it is, and my opinion is that it doesn’t exist.

                  1. 2

                    Fair enough. Let me clarify my comment, which was narrowly focused on developer tools for no good reason.

                    There is no question that there have been massive advances in hardware, but I think the software is a lot more hit and miss.

                    In terms of advances on the software front, I would point to distributed storage in addition to cloud computing and machine learning. For end users, navigation and maps are finally really good too. There are probably hundreds of other specific examples like incredible technology for animated films.

                    I think my complaints are to do with the fact that most of the effort in the last 20 years seems to have been directed to reimplementing mainframes on top of the web. In many ways, there is churn without innovation. I do not see much change in software development either, as I mentioned in the previous comment (I don’t think automated testing counts), and it’s what I spend most of my time on so there’s an availability bias to my complaints. There is also very little progress in tools for information management and, for lack of a better word, “end user computing” (again, spreadsheets are very old news).

                    I think my perception is additionally coloured by the fact that we ended up with both smartphones and the web as channels for addictive consumption and advertising industry surveillance. It often feels like one step forward and ten back.

                    I hope this comment provides a more balanced perspective.

            2. 2

              In the last 20 years, the ideas in that paper have been attempted a lot, by a lot of people.

              Open source and the internet have given a ton of ideas a fair shake, including these ideas. Stuff is getting better (not worse). The two-way links thing is crummy, and you don’t have to take my word for it; you can go engage with any of the dozens of systems implementing it (including several by the author of that paper) and form your own opinions.

              1. 4

                In the last 20 years, the ideas in that paper have been attempted a lot, by a lot of people.

                Dozens of people, and I’ve met or worked with approximately half of them. Post-web, the hypertext community is tiny. I can describe at length the problems preventing these implementations from becoming commercially successful, but none of them are that the underlying ideas are difficult or impractical.

                The two-way links thing is crummy, and you don’t have to take my word for it; you can go engage with any of the dozens of systems implementing it (including several by the author of that paper) and form your own opinions.

                I wrote some of those systems, while working under the author of that paper. That’s how I formed my opinions.

                1. 1

                  That’s awesome. Maybe you can change my mind!

                  Directed graphs are more general than undirected graphs (you can implement two-way undirected graphs out of one-way arrows; you can’t go the other way around). Almost every level of the stack, from the tippy top of the application layer to the deepest depths of CPU caching and branch prediction, is implemented in terms of one-way arrows and abstractions. I find it difficult to believe that this is a mistake.


                  EDIT: I realized that ‘general’ in this case has a different meaning for a software developer than it does in mathematics, and here I was using the software developer’s perspective of “can be readily implemented using”. Mathematically, something is more general when it can be described with fewer terms or axioms. Undirected graphs are more general in the mathematical sense because you have to add arrowheads to an undirected graph to make a directed graph, but for the software developer it feels more obvious that you could get a “bidirected” graph by adding a backwards arrow to each forwards arrow. Implementing a directed graph from an undirected graph is difficult for a software developer because you have to figure out which way each arrow is supposed to go.
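
                  The software developer’s construction is easy to sketch. This is my own illustrative Python, not code from any system discussed here: each one-way arrow gets a backwards twin, so every edge becomes visible from both of its endpoints.

```python
from collections import defaultdict

def bidirect(directed_edges):
    """Adjacency map in which every edge is visible from both of its
    endpoints: a "bidirected" graph built out of one-way arrows."""
    adj = defaultdict(set)
    for a, b in directed_edges:
        adj[a].add(b)
        adj[b].add(a)  # the backwards arrow added for each forwards arrow
    return adj

edges = [("doc1", "doc2"), ("doc2", "doc3")]
adj = bidirect(edges)
print(sorted(adj["doc2"]))  # ['doc1', 'doc3']: visible from both sides
```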

                  1. 1

                    Bidirectional links are not undirected edges. The difference is not that direction is unknown – it’s that the edge is visible whichever side of the node you’re on.

                    (This is only hard on the web because HTML decided against linkbases in favor of embedded representations that must be mined by a third party in order to reverse them – which makes jump links a little bit easier to initially implement but screws over other forms of linking. The issue, essentially, is that with a naive host-centric way of performing jump links, no portion of the graph is actually known without mining.

                    Linkbases are literally the connection graph, and links are constructed from linkbases. In the XanaSpace/XanaduSpace model, you’ve got a bunch of arbitrary linkbases representing arbitrary subgraphs that are ‘resident’ – created by whoever and distributed however – and when a node intersects with one of the resident links, the connection is displayed and made navigable.

                    Also in this model a link might actually be a node in itself where it has multiple points on either side, or it might have zero end points on one side, but that’s a generalization & not necessarily interesting since it’s equivalent to all combinations of either end’s endsets.)

                    TL;DR: bidirectional links are not undirected links – merely links understood above the level of the contents of a single node.

                    1. 1

                      Ok then: how is it that you construct a graph out of a set of subgraphs? Is that construction also two-way links, thereby assuring that every participant constructs the same graph?

                      1. 1

                        Participants are not guaranteed to construct the same graph, and the graphs aren’t guaranteed to even be fully connected. (The only difference between bidirectional links & jump links is that you can see both points.)

                        Instead, you get whatever collection of connected subgraphs is navigable from the linkbases you have resident (which are just lists of directed edges).
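
                        As a rough sketch (my own illustrative Python, not code from any Xanadu system): treat each resident linkbase as a plain list of directed edges, and the navigable collection is whatever is reachable from a starting document, in either direction.

```python
from collections import deque

def navigable(linkbases, start):
    """Documents reachable from `start`, following links in either
    direction (a bidirectional link is visible from both of its sides)."""
    edges = [edge for linkbase in linkbases for edge in linkbase]
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for a, b in edges:
            if a == node and b not in seen:
                seen.add(b)
                queue.append(b)
            elif b == node and a not in seen:
                seen.add(a)
                queue.append(a)
    return seen

lb1 = [("A", "B"), ("B", "C")]  # one resident linkbase
lb2 = [("D", "E")]              # a disconnected subgraph
print(sorted(navigable([lb1, lb2], "A")))  # ['A', 'B', 'C']
```

                        Note that nothing guarantees the graph is connected: starting from “D” you would only ever see “D” and “E”.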

                        This particular kind of graph-theory analysis isn’t terribly meaningful for either the web or translit, since it’s the technical detail of how much work you have to do to get a link graph that differs, not the kind of graph itself. (Graph theory is useful for talking about ZigZag, but ZigZag is basically unrelated to translit / hypertext and is more like an everted tabular database.)

                        1. 1

                          I guess I’m trying to understand how this is better or different from what already exists. If it’s a curated list of one way links that you can search and discuss freely with others, then guess what, lobste.rs is your dream, the future is now, time to throw one back and celebrate.

                          1. 1

                            I’m trying to understand how this is better or different from what already exists

                            Well, when the project started, none of what we have existed. This was the first attempt.

                            If it’s a curated list of one way links that you can search and discuss freely with others, then guess what, lobste.rs is your dream, the future is now,

                            ‘Link’ doesn’t actually mean ‘URL’ in this sense. A link is an edge between two nodes – each of these nodes being a collection of positions within a document. So, a linkbase isn’t anything like a collection of URLs, but it’s a lot like a collection of pairs of URLs with an array of byte offsets & lengths affixed to each URL. (In fact, this is exactly what it is in the XanaSpace ODL model.) A URL by itself is only capable of creating a jump link, not a bidirectional link.

                            It’s not a matter of commenting on a URL, but of creating sharable lists of connections between sections of already-existing content. That’s the point of linking: that you can indicate a connection between two existing things without coordinating with any authors or owners.

                            URL-sharing sites like lobste.rs provide one quarter of that function: by coordinating with one site, you can share a URL to another site, but you don’t have control over either side beyond the level of an entire document (or, if you’re very lucky and the author put useful anchors, you can point to the beginning of a section on only the target side of the link).

                            1. 1

                              To take an example of a system which does step into the middle and take greater control over both ends: Google’s AMP. I feel like it is one of the worst things anyone has ever tried to do to the internet in its entire existence.

                              Control-oriented systems like AMP and, to a lesser degree, sharing sites like Imgur, Pinterest, Facebook, and soon (probably) Medium represent existential threats to forums like lobste.rs.

                              So, in short, you’re really not selling me on why this two-way links thing is better.

                              1. 2

                                We actually don’t have centralization like that in the system. (We sort of did in XU88 and XU92 but that stopped in the mid-80s.)

                                It’s not about controlling the ends. The edges are not part of the ends, and therefore the edges can be distributed and handled without permission from the ends.

                                Links are not part of a document. Links are an association between sections of documents. Therefore, it doesn’t make any sense to embed them in a document (and then require a big organization like Google to extract them and sell them back to you). Instead, people create connections between existing things & share them.

                                I’m having a hard time understanding what your understanding of bidirectional linking is, so let me get down to brass tacks & implementation details:

                                A link is a pair of spanpointers. A spanpointer is a document address, a byte offset from the beginning of the document, and a span length. Anyone can make one of these between any two things, so long as they have the addresses. This doesn’t require control of either endpoint. It doesn’t require any third party to control anything either. I can write a link on a piece of paper and give it to you, and you can make the same link on your own computer, without any bits being transferred between our machines.
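
                                A minimal sketch of those two structures in Python. The field names are mine, for illustration only, and not from any actual Xanadu source:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SpanPointer:
    document: str  # a permanent document address
    offset: int    # byte offset from the beginning of the document
    length: int    # span length in bytes

@dataclass(frozen=True)
class Link:
    left: SpanPointer
    right: SpanPointer

# A link between a span in one document and a span in another,
# made without controlling (or even contacting) either endpoint:
link = Link(SpanPointer("doc-abc", 120, 34), SpanPointer("doc-xyz", 0, 17))
print(link.left.document, link.left.offset)  # doc-abc 120
```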

                                We do not host the links. We do not host the endpoints. We don’t host anything. We let you see connections between documents.

                                Seeing connections between documents manifests in two ways:

                                1. transpointing windows – we draw a line between the sections that are linked together, and maybe color them the same color as the line
                                2. bidirectional navigation – since you can see the link from either side, you can go left instead of going right

                                It’s not about control, or centralization. Documents aren’t aware of their links.

                                The only requirement for bidirectional linking is that an address points to the same document forever. (This is a solved problem: ignore hosts & use content addressing, like IPFS.)
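
                                Content addressing is easy to sketch (illustrative Python, with SHA-256 standing in for whatever hash a real system would use): the address is derived from the bytes themselves, so identical content always has the same address, and any edit produces a new one.

```python
import hashlib

def address(content: bytes) -> str:
    """A content address: a hash of the document's bytes."""
    return hashlib.sha256(content).hexdigest()

a1 = address(b"Hello, hypertext.")
a2 = address(b"Hello, hypertext.")
a3 = address(b"Hello, hypertext!")
print(a1 == a2)  # True: identical content, identical address, forever
print(a1 == a3)  # False: an edited document gets a brand-new address
```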

                                1. 1

                                  Wow, thank you for taking the time to walk me through these ideas. I think I’m starting to understand a little better.

                                  I still think we’ve got this, or could implement it on the existing web stack. I think any user could have implemented zig-zag links in a hierarchical, Windows-style file structure since ’98, if not ’95. I think it’s informative that most users do not construct those links; who knows how many of us have tried it in the name of getting organized.

                                  I really believe that any interface more complex than a single item is too complex, and if you absolutely must, you can usually present a list without distracting from a UI too badly. I think a minimalist and relatively focused UI is what allows this website to thrive and us to have this discussion.

                                  I’m going to be thinking over this a lot more. A system like git stores the differences between documents instead of the documents themselves, so clearly there are places for other ways of relating documents to each other than what we’ve got, and they work!

                                  1. 3

                                    I should clarify: I’ve been describing bidirectional links in translit (aka hypertext or transliterature). ZigZag is actually a totally different (incompatible) system. The only similarity is that they’re both interactive methods of looking at associations between data invented by Ted Nelson.

                                    If we want to compare to existing stacks, transliterature is a kind of whole-document authoring and annotation thing like Word, while ZigZag is a personal database like Access – though in both cases the assumptions have been turned inside-out.

                                    You’re right that these things, once they’re understood, aren’t very difficult to implement. (I implemented open source versions of core data structures after leaving the project, specifically as demonstrations of this.)

                                    I really believe that any interface more complex than a single item is too complex, and if you absolutely must, you can usually present a list without distracting from a UI too badly. I think a minimalist and relatively focused UI is what allows this website to thrive and us to have this discussion.

                                    Depending on how you chunk, a site like this has a whole host of items. I see a lot of characters, for instance. I see multiple buttons, and multiple jump links. We’ve sort of gotten used to a particular way of working with the web, so its inherent complexity is forgotten.

                                    thank you for taking the time to walk me through these ideas. I think I’m starting to understand a little better.

                                    No problem! I feel like it’s my duty to explain Xanadu ideas because they’re explained so poorly elsewhere. I spent years trying to fully understand them from public documentation before I joined the project and got direct feedback, and I want to make it easier for other people to learn it than it was for me.

              2. 1

                I wouldn’t say so. What you have is more and more people using the same tools, so you will never get a “perfect” solution. Generally, nature doesn’t provide a perfect system, just “good enough to survive”. My partner and I are expecting a child at the moment, and more than once the doctor has told us: “This is not perfect, but nature doesn’t care about that. It just cares about good enough to get the job done.”

                After hearing this statement, I see it everywhere, including with computers. Code, and the way we work, runs a huge chunk of important systems, and somehow it all works. Maybe it works because it is not perfect.

                I agree that things will change (“for the better”), but it will come in phases. We will have a bigger catastrophic thing happen, and afterwards systems and tools will change and adapt. As long as everything sort of works, well, there is no big reason to change it (for the majority of people), since they can get the job done and then enjoy the sun, the beaches, and human interactions.

                1. 1

                  Nobody’s complaining that we don’t have perfection here. We’re complaining about the remarkable absence of not-awful in projects by people who should know better.

              3. 3

                I think the best way to describe what we have is “design by pop culture”. Our socio-economic system is a low-pass filter, distilling ideas until you can package them and sell them. Is it the best we’ve got, given those economic constraints? Maybe…

                But that’s like saying “Look, this is the best way to produce cotton; it’s the best we’ve got” during the slave era… (slavery being a different socio-economic system)

              1. 2

                This guy is absolutely terrible at communicating his ideas.

                I agree that it would be nice to have rich content formats. As he puts it “hypermedia” might be ok: it’s a medium that spans several dimensions of expressions, and can thus be read in several ways.

                But his example is terrible, the execution is poor, his drawings are a joke, and even the argumentation is full of circumlocutions. It is obvious, to me, that this guy has no idea what he is talking about. He has no idea about the complexity of implementing what he is talking about (properly, I mean, not as a PoC).

                When we look at history and what finally happened, I contend that while economic or political factors could sway computer science one way or another at a base level, that only holds for trivial stuff, like choosing one encoding over another because, for example, designing an ASIC capable of doing it in hardware cost less, even if the other encoding is better in pretty much all other ways. Looking at the big picture, what won was simplicity: the least (overall) effort.

                This is, I think, what he means when he says that current content is in the format chosen by computer scientists. And this is true, because all technology will evolve this way.

                His ideas are infeasible, IMO. I would love to receive an article one day that I could open in Mathematica to look at the graphs and play with the data, read the content in another interface, navigate the cited papers, and get more context about the authors, for example. But all of this is already possible. There is no need for a data structure specifically designed for it; that is an implementation detail, an abstraction leakage. ZigZag is just a bad idea. It should not exist (and I can’t help but be extremely sceptical of the number of trademarks this guy uses, as if his ideas were worth anything).

                What I find weird, actually, is speaking about hypertext as some kind of invention. The concept is just so trivial and so self-evident that no one invented it! What was invented was a proper grammar to describe the object, and protocols to communicate about it. But the concept is trivially simple. The same goes for hypermedia, but there the implementation is infeasible (in a standardized, content-agnostic way).

                1. 5

                  This guy is absolutely terrible at communicating his ideas.

                  Fair.

                  The concept [of hypertext] is just so trivial and so self-evident

                  This isn’t really accurate. Most people today don’t understand what hypertext is. (Just look at the comments on this thread!)

                  The idea of mechanically navigable connections between ideas is trivial (assuming you’re familiar with the western cyclopedic tradition), and many people independently invented similar systems, but hypertext has a very specific set of rules that interact in a fairly nuanced way. (The web implements approximately one half of one of these rules, which is the source of a lot of confusion.)

                  He has no idea about the complexity of implementing what he is talking about

                  He has a pretty clear idea of the complexity of implementing what he’s talking about, because he’s been in close communication with different teams of serious professional developers actually implementing versions of it for many years.

                  It’s easier to implement a proper hypertext system than a modern web browser – but, where browsers have hundreds of developers, all of the implementations of Xanadu ideas since the mid-80s have (as far as I am aware) had teams of at most three people.

                  His idea are infeasible IMO.

                  They’ve been implemented. Implementations are being used internally.

                  The core ideas are pretty straightforward to implement. (I’ve written open source implementations of them in my free time, after leaving the project.)

                  The primary difficulty in implementing these things is poor public-facing documentation (because Ted wrote all the public-facing documentation, and he doesn’t separate technical ideas from rants & marketing material). This is why I wrote my own documentation.

                  Once the concepts are understood, most of them can be implemented in an hour or two. (I know, because I did exactly that many times.)

                  what won was simplicity, the least (overall) effort

                  Take a look at any W3C standard and tell me, with a straight face, that simplicity won.

                  What won was organic growth. In other words: instead of thinking carefully and seriously about how things should be designed, they went with their gut and used the design that came to mind most quickly. This gives them an edge in terms of communication: a stupid idea is much easier to communicate than a simple idea, because it will be as obvious to the person who hears it as it is to the person who says it. However, it’s a nightmare when it comes to maintainability, because poorly-thought-out designs are inflexible.

                  In terms of the actual number of elements necessary & the actual amount of text required to explain it, hypertext is simpler than webtech. The effort in a hypertext or translit system is the fifteen minutes you spend thinking hard about how all the pieces fit together, while the effort in webtech is trying to figure out how to make a pile of mismatched pieces do something that shouldn’t be done in the first place a decade after you learned to use all of them.

                1. 4

                  MUMPS is still in active usage, I think? A few years ago I interviewed someone out of the midwest (Minneapolis or Michigan, I think) who worked for a large health software vendor. They’ve been around for a long time, and they’re in a lot of hospitals across the US. I’m pretty sure their whole system is built in MUMPS.

                  1. 1

                    Epic is the vendor, IIRC. It is also all up in the VA’s VistA system.

                    It is part of the fractal of sadness of healthcare.

                    1. 1

                      Is the supply/demand ratio enough to add MUMPS to the list, along with COBOL, of languages that are liable to make consultants rich?

                      1. 2

                        I don’t really know enough about the MUMPS ecosystem to answer, but maybe? It seems to fit the COBOL criteria:

                        1. “Outdated”[0] language people don’t really want to work in,
                        2. Relied upon in core parts of large, “boring”, slow-moving companies who have a lot of code and,
                        3. Who are very unlikely to be disrupted out of their space any time soon

                        I got the impression that this particular company was one of the few/only software games in town, wherever they were, and that they kinda hoovered-up a lot of the engineering talent from the surrounding region. That said, the person I spoke to was very aware that MUMPS is incredibly niche and was looking to build transferable skills in a more mainstream software environment, and was looking in the Bay Area. So, I could guess they might have a retention problem, at least amongst people who are also willing to relocate for work. That’s speculation, though.

                        Caveat: This is all from memory of an interview a few years ago

                        [0] I used air-quotes because not everyone has these perceptions

                        1. 1

                          “Outdated” is a loaded term, sure, but I think we can explore it a bit.

                          Things can be outdated because more work has been done in the space to create technology which is better in every technical respect, but that doesn’t mean the existing technology is going to be changed, because social and political aspects factor in as well. Of course, in both software and buildings, something becomes outdated once flaws are no longer repaired when found, but fixing bugs becomes a political and social football as well.

                          (Then, of course, there are the sad cases who need to be contrarian to the point they’ll argue that technology which has been superseded and abandoned is in no way outdated. They’re the ones who throw out the most heat, and obscure the most light, when someone’s trying to understand the field.)

                    1. 5

                      The more I learn about MUMPS, the less ‘awful’ it seems.

It’s a strange language, sure – but so are Python (semantically-meaningful indentation), Lua (a tilde in its inequality operator, a combined array-dictionary-object type), FORTH & LISP (for obvious reasons), JavaScript (ditto), and basically every other popular language. There’s so much variety among languages that we can’t treat something like indenting with periods as an indicator of quality.

                      MUMPS has language features I wish were more common! For instance:

                      • first-order arbitrarily-nested non-relational databases with autovivification
• an almost Smalltalk-like image/persistence model, because all variables prefixed with ^ persist across runs

                      Between these two features, MUMPS as a computing environment has an out-of-the-box usability that shames more ‘modern’ languages & REPLs for certain types of problems (in particular, problems involving big, complex, free-form relationships between different kinds of objects).
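Those two features can be approximated in Python as a rough sketch (this is an analogy, not MUMPS itself): `defaultdict` stands in for autovivifying subscripts, and the standard library’s `shelve` (a file-backed dict) stands in for the persistence MUMPS gives any variable prefixed with `^`.

```python
import os
import shelve
import tempfile
from collections import defaultdict

def tree():
    # Autovivifying nested mapping: intermediate subscripts spring into
    # existence on first access, loosely like MUMPS globals.
    return defaultdict(tree)

db = tree()
db["patients"]["123"]["visits"]["2020-01-01"] = "checkup"

def plain(d):
    # Convert nested defaultdicts to plain dicts so they pickle predictably.
    return {k: plain(v) if isinstance(v, dict) else v for k, v in d.items()}

# shelve is a stand-in for the ^ prefix: a dict whose entries survive
# across runs because they live in a file. (Path is arbitrary for the demo.)
path = os.path.join(tempfile.gettempdir(), "globals_demo")
with shelve.open(path) as persistent:
    persistent["PATIENTS"] = plain(db["patients"])

with shelve.open(path) as persistent:
    assert persistent["PATIENTS"]["123"]["visits"]["2020-01-01"] == "checkup"
```

In MUMPS the same thing is a single statement (`SET ^PATIENTS("123","visits","2020-01-01")="checkup"`): the intermediate subscripts and the on-disk persistence both come for free.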

                      1. 3

Don’t sacrifice fonts for the sake of being “minimal”, though. That font and text size are not great for reading, if reading is your primary goal.

                        1. 1

That’s what browser text zoom is for!

                          1. 1

                            Not really. Browser zoom is so that people who need to zoom can do so on the page. However, the page still should be designed with your average user in mind. There’s no reason to force the average user to zoom when unnecessary, that is a usability concern. My other usability concern with this suggestion is that browser text zoom makes the usability of your documents pretty poor on mobile.

                          2. 1

                            Whatever happened to the end user being in control of fonts and colors, anyway? If minimal sites became common again, I’d like to see client-side styling become much more prominent (say, a font & size dropdown right next to the URL bar on every major browser, along with a background & foreground color selector).

Leaving the web designer in charge of the theming is a boon for branding, but the end user doesn’t care about supporting some company’s branding (which in some cases – like the prevalence of blue-heavy designs – does real harm), and it’s ultimately they who use the thing, so they should have full control. Yet, overriding default colors and fonts breaks most websites (not just webapps – which we would expect to be fragile against that – but web SITES).

                            1. 1

                              I’m absolutely not suggesting that the user shouldn’t be in control of the fonts and colors - which they completely are even in many modern web documents - but only to suggest that the defaults provided for your document should be reasonable for your average user.

                              The way to think about font size is based on a rough average number of characters width per-column, because it helps prevent eye strain. My assumption here is that most users are human and using their eyes to read the content. Other cases exist, so the defaults must not be assumed to be the only case - but they should be reasonable for the average user.

                              1. 2

                                they completely are even in many modern web documents

                                As someone who, for years, set his default font style to monospace & color to orange-on-black for usability reasons & enabled font override on as many sites as possible, this does not track with my experience at all. Even the main google search page was not usable when font colors were overridden – most buttons became invisible.

                                It was probably a mistake to allow web designers to control the fonts, colors, and positions of elements in the first place. Giving that control to them has only provided shallow benefits & an invitation to implement really bad ideas, while nearly every time they’re taken advantage of, usability & accessibility suffers.

                                1. 1

                                  Sounds like an issue with your browser tbh.

                                  1. 1

                                    We can’t possibly be talking about the same facility.

                                    Every major browser has, buried in the settings, control over default typeface, size, and color, along with a checkbox indicating that the recommendations by the website itself should be overridden. This configuration (for the past ~10 years, on both chrome and firefox) will not fix hard-coded CSS alignment (which is tragically common) and will also not fix the use of transparent images on top of faux-buttons.

                                    The result: if you increase font size and use a dark background color with a light foreground color, text overruns boxes and sits on top of other text while faux-buttons become totally invisible. This is a behavior that happens in all major browsers (because it’s not a browser behavior but a result of idiomatic use of CSS being fragile), and it’s a huge accessibility issue for people who have poor vision but do not use a screen reader.

                                    It’s trivial to reproduce: go into your browser settings & invert the colors, then visit gmail. This problem is, essentially, the reason extensions like deluminate & features like default zoom exist: normal font & color controls are borderline useless because most existing CSS breaks in response to these controls.

                                    1. 1

                                      The alignment thing is tragically common because CSS didn’t have any other way to perform alignment until recently.

                                      There is the facility that you mentioned, users are allowed to install extensions, and you can disable sites from using CSS. If you use the first one then disabling CSS is probably reasonable.

                                      I’m sure there are other ways to solve this. Either way, you are describing problems with browsers and not problems with the way the website is designed or the web itself?

                                      Still sounds like an issue with your browser. links renders Google fine, for instance.

                                      1. 2

                                        Links renders google fine because links ignores all css color information (meaning that background & foreground colors cannot be specified through secondary methods in piecemeal ways).

                                        And yes, I consider browsers, web standards, and web developers equally at fault for the state of the world in this respect. These idioms (justified by browser features, made possible by web standards, and used by web developers) are user-hostile.

                                        How the web designer would like something to look is completely irrelevant. A site that doesn’t work if you turn off CSS is broken. But, people who actually modify how sites look in any way are rare enough and quiet enough that it’s possible for web developers to go through life not considering whether or not their sites still work when they’ve been re-styled. That is not a ‘browser problem’ – it’s a culture problem.

                                        1. 1

                                          Seems like semantics but maybe I should have emphasized links2.

                          1. 2

                            I’m about halfway through your article, & a lot of the ideas you float here are familiar from Xanadu work.

                            With regard to link creation, we had a planned editing system for XanaSpace that uses selection in a similar way but might be a little more intuitive. Specifically, we had the idea that each window might have its own persistent selections, and because links here are bidirectional, dragging a selection into another selection produces a link, while dragging a selection to a point produces a transclusion (and is the equivalent of a copy-and-paste). Since editing documents is intended to be transclusion-heavy, the idea is that text you’ve typed goes into an unnamed (but permanent) scratch document (perhaps the private permascroll) – appearing as a noodle of floating text – and gets dragged into another document or snapped into place alongside another orphan noodle.

                            Unfortunately, two things made this awkward: cross-platform inter-window communication supporting drag & drop isn’t really well-supported by UI libraries, and neither is dragging between distinct text objects on a canvas. Already, for XanaSpace, we were rolling our own text layout for 3d, but this meant that the 2d version was also going to be difficult to manipulate this way, so we put actually implementing it on hold until the (never-completed) large text rendering optimizations.

                            I also like the ‘management view’; we had a similar thing in XanaSpace done just with 3d zoom, and planned a ‘lolly-pop’ view wherein links/beams remained visible between documents but their actual content was collapsed sideways, with a ball on the top indicating the total document length.

                            1. 1

                              I don’t separate my technical posts from other topics. I publish to https://medium.com/@enkiv2 with periodic backups to http://www.lord-enki.net/medium-backup/ & highlights periodically featured & categorized on http://www.lord-enki.net/

                              1. 4

                                After spending a few months with Forth earlier this year, I absolutely agree that Forth can be extraordinarily simple and compact, that mainstream software is an endless brittle tower of abstractions, and that the aha moment when you find the right abstractions of a problem can be transcendent. But writings like this also indicate limitations that the Forth community unquestioningly accepts.

Forth is quite “individualistic”, tailored to single persons or small groups of programmers… nobody says that code can’t be shared, that one should not learn to understand other people’s code or that designs should be hoarded by lone rangers. But we should understand that in many cases it is a single programmer or very small team that does the main design and implementation work.

                                This is fine. However, the next step is not:

                                once it becomes infeasible for a single person to rewrite the core functionality from scratch, it is dead. The ideal is: you write it, you understand it, you maintain and change it, you rewrite it as often as necessary, you throw it away and do something else.

                                I’ll suggest an alternative meaning of “dead”: when it stops being used. By this definition, most Forth programs are dead. (duck) More seriously, it is abuse of privilege to claim some software is dead just because it’s hard to modify. If people are using it, it is providing value.

                                It is the fundamental property and fate of all software to outlive its creator. The mainstream approach, for all of its many problems, allows software to continue to serve its users long after the original authors leave the scene. They decay, yes, but in some halting, limping fashion they continue to work for a long time. It’s worth acknowledging the value of this longevity. Any serious attempt to replace mainstream software must design for longevity. That requires improving on our ability to comprehend each other’s creations. And Forth (just like Scheme and Prolog) doesn’t really improve things much here. Even though ‘understanding’ is mentioned above, it is in passing and clearly not a priority. Even insightful Forth programs can take long periods of work to appreciate. If a tree has value in the forest but nobody can appreciate it, does it really have value? I believe comprehensibility is the last missing piece that will help Forth take over the world. Though it may have to change beyond recognition in the process.

                                (This comment further develops themes I wrote about earlier this year. Lately I’ve been working on more ergonomic machine code, adding only the minimum syntax necessary to improve checking of programs, adding guardrails to help other programmers comprehend somebody else’s creation. Extremely rudimentary, highly speculative, very much experimental.)

                                1. 5

                                  I’ll suggest an alternative meaning of “dead”: when it stops being used. By this definition, most Forth programs are dead. (duck) More seriously, it is abuse of privilege to claim some software is dead just because it’s hard to modify. If people are using it, it is providing value.

We ought to distinguish dead-like-a-tree from dead-like-a-duck. A dead tree still stands there and you can probably even put a swing on it & use it for another 15-20 years, but it’s no longer changing in response to the weather. A dead duck isn’t useful for much of anything, and if you don’t eat it real quick or otherwise get rid of it, it’s liable to stink up the whole place.

                                  A piece of code that is actively used but no longer actively developed is dead-like-a-tree: it’s more or less safe but it has no capacity for regeneration or new growth, and if you make a hole in it, that hole is permanent. Once the termites come (once it ceases to fit current requirements or a major vulnerability is discovered) it becomes dead-like-a-duck: useless at best and probably also a liability.

                                1. 1

                                  This is getting a lot of hate. I thought it was pretty even-handed. Do people dislike it because I said culture fit should only matter when it impacts effectiveness, or because I think culture matters at all?

                                  1. 9

                                    Someone who writes ostensibly production-ready code in PHP or Perl should be treated like someone who refuses to vaccinate their children: their behavior should be considered acceptable only if they are extremely careful and they have a very good excuse.

                                    I think the author is a bit melodramatic, and that is the toxic part – there’s criticism, and then there’s criticism without concern for the humans receiving the criticism. I agree with the author that there’s a vast majority of programmers who could benefit from better tooling (I put myself in this camp), but how that knowledge is conveyed makes all the difference (as they mention in the article itself).

                                    It sounds like the author has knowledge and wants to spread it – either by mentoring or teaching directly. To be successful at that, they need to understand where a mentee/student is and jump into their worldview before moving them forward. There is a huge difference between “here is where I think you are, which means you’ve probably seen these kind of frustrations pop up, why not try X instead” and “hey you’re doing this all wrong, here’s the best way.”

                                    Sidney Dekker talks about how people don’t show up at work to do a bad job – everyone is usually trying their best, but is working within constraints (“my boss won’t let me use X”, “I haven’t heard about this,” “I have outdated knowledge about this,” “I don’t want to support this organization for moral reasons,” “I’m under a tight deadline and can’t afford the rewrite,” etc). People will only be receptive if you acknowledge that they are doing their best within constraints, and you are removing a constraint. “You use PHP, therefore you are bad” statements make you feel good and self-righteous, but are less effective than “do you hit these kinds of errors?” “yes!” “then you might consider this other language, it’s easier than you think.”

                                    As I write this I feel a bit hypocritical as I’ve spent a long time criticising {large, well known software product with a defensive, tribal culture} for not doing {well established modern software engineering practice} and releasing a buggy and infuriating product. My ranting and raving did nothing. Absolutely nothing. What was effective? Another team got merged into that one and started flooding the mailing lists with “you thought {software engineering practice} was impossible at this scale but it’s actually doable.” It worked. That’s how you effect change in organizations that might even be hostile to you: meet them where they are, treat them as doing their best within constraints, and show how to remove the constraints.

                                    1. 4

                                      I think the author is a bit melodramatic, and that is the toxic part – there’s criticism, and then there’s criticism without concern for the humans receiving the criticism.

Absolutely. I fell into that trap recently. The organisation I was in was using PHP, and I engaged in what I thought at the time was collegial bitching about that poor language. It turns out they were mostly humoring me and taking a minor bit of offense at each nag, and the offense piled up over time.

                                      It was a lesson for me obviously, but there’s something to be learned on the other side of this fence: if you’re getting offended by something your colleagues do, bring it up very early and don’t just assume the offender is aware of how they’re affecting you.

                                      1. 3

                                        I actually make this point later in the essay:

                                        I consider this really to be an issue of beginners graduating to higher levels of understanding (and systematic pressure making it harder for certain groups to graduate out of the beginner classification), and one way to help this is to be extremely clear in your criticisms about the nature of the problems you criticize — in other words, rather than saying “PHP users are dumb”, say “PHP is a deeply flawed language, and PHP users should be extremely careful when using these particular patterns”.

                                        When we are talking to individuals, I think we have the responsibility, as @stip says, to “understand where a mentee/student is and jump into their worldview before moving them forward”. However, one-on-one mentoring is not where programmers of any level learn most of their habits these days (and I’m not sure it ever was!). Instead, the most powerful shapers of industry and community norms are publications (ranging from comments and blog posts to books and standardized lesson plans) and myth (ranging from hype and stereotypes to stuff like programming epigrams and The Story of Mel). Publications shape myth through hyperbole, so if you’re writing for a very general audience, it pays to be melodramatic enough to be memorable.

                                        (While overwrought, I don’t think the comparison to antivaxxers is unfair. Using insecure tools in serious projects because you think it doesn’t matter bites you in the ass when the project suddenly acquires scale and problems start affecting every install, in the same way that an individual decision to avoid vaccination becomes a failure of herd immunity at scale. The fundamental mistake is the same: evaluating decisions about interactions with a group from only an individual lens.)

                                        Something I originally meant to address more directly in this piece is a flaw I see in articles of the type it criticises – namely, I think social pressure is extremely valuable because it performs soft enforcement of rules, and articles critical of gatekeeping in tech often ignore the social dynamics of this. The particular article I’m responding to only bumps up against the problem.

                                        Ultimately: when your software doesn’t matter (when it has zero or one users, or all its users are also its core developers, and when nobody ever pays for it, uses it for important tasks, or gives it sensitive data), then tool choice also doesn’t matter. This is great! But, when your software matters (when people depend on it working), making sure it’s reliable and secure matters more than your ego and personal preferences. At this point, so long as they properly modify behavior in the appropriate direction, social pressure (to the point of hostility) is justified.

When I see people complain that they get criticized for their tool choice in a serious project & their defense is that they’re a beginner and don’t know how to use something better-suited for the job, my basic response is that when somebody is depending on a project, it shouldn’t be implemented by someone who doesn’t know how to implement it reliably. (Making sure the thing works is more important than even profit: if you can only do a half-assed job without going broke, you shouldn’t do it at all.) In the extreme case, you work on it until it is reliable (by learning new techniques and tools) and discourage people from depending on it (by marking it alpha) until it’s actually release quality.

                                        (And, of course, on the other side: when nobody depends on it, we should encourage beginners to expand their horizons by using all sorts of tools – particularly tools that are a poor fit – since dealing with a poorly-fitting tool when both the tool and the problem are new to you is a very productive learning experience.)

                                    1. 28

That is a very reductionist view of what people use the web for. And I am saying this as someone whose personal site pretty much matches everything prescribed except comments (which I still have).

                                      Btw, Medium, given as a positive example, is not in any way minimal and certainly not by metrics given in this article.

                                      1. 19

                                        Btw, Medium, given as a positive example, is not in any way minimal and certainly not by metrics given in this article.

                                        Chickenshit minimalism: https://medium.com/@mceglowski/chickenshit-minimalism-846fc1412524

                                        1. 13

I wouldn’t say Medium even gives the illusion of simplicity (for example, on the page you linked, try counting the visual elements that aren’t the blog post). Medium seems to take a rather contrary approach to blogs, including all the random cruft you never even imagined existed, while leaving out the simple essentials like RSS feeds. I honestly have no idea how the author of the article came to suggest Medium as an example of minimalism.

                                          1. 8

                                            Medium started with an illusion of simplicity and gradually got more and more complex.

                                            1. 3

                                              I agree with your overall point, but Medium does provide RSS feeds. They are linked in the <head> and always have the same URL structure. Any medium.com/@user has an RSS feed at medium.com/feed/@user. For Medium blogs hosted at custom URLs, the feed is available at /feed.

                                              I’m not affiliated with Medium. I have a lot of experience bugging webmasters of minimal websites to add feeds: https://github.com/issues?q=is:issue+author:tfausak+feed.

                                          2. 3

                                            That is a very reductionist view of what people use the web for.

I wonder what YouTube, Google Docs, Slack, and the like would look like in a minimal web.

                                            1. 19

                                              Useful.

                                              algernon hides

                                              1. 5

                                                YouTube, while not as good as it could be, is pretty minimalist if you disable all the advertising.

                                                I find google apps to be amazingly minimal, especially compared to Microsoft Office and LibreOffice.

Minimalist Slack has been around for decades: it’s called IRC.

                                                1. 2

It is still super slow, then! At some point I was able to disable JS, install the Firefox “html5-video-everywhere” extension, and watch videos that way. That was wonderfully fast and minimal. I tried it again a few days ago, but it didn’t seem to work anymore.

                                                  Edit: now I just “youtube-dl -f43 ” directly without going to YouTube and start watching immediately with VLC.

                                                  1. 2

                                                    The youtube interface might look minimalist, but under the hood, it is everything but. Besides, I shouldn’t have to go to great lengths to disable all the useless stuff on it. It shouldn’t be the consumer’s job to strip away all the crap.

                                                  2. 2

That seems like extreme bad faith, though.

                                                    1. 11

                                                      In a minimal web, locally-running applications in browser sandboxes would be locally-running applications in non-browser sandboxes. There’s no particular reason any of these applications is in a browser at all, other than myopia.

                                                      1. 2

Distribution is dead-easy for websites. In theory, you could have non-browser-sandboxed apps with equally easy distribution, but then what’s the point?

                                                        1. 3

                                                          Non-web-based locally-running client applications are also usually made downloadable via HTTP these days.

                                                          The point is that when an application is made with the appropriate tools for the job it’s doing, there’s less of a cognitive load on developers and less of a resource load on users. When you use a UI toolkit instead of creating a self-modifying rich text document, you have a lighter-weight, more reliable, more maintainable application.

                                                          1. 3

The power of “here’s a URL, you now have an app running without going through installation or whatnot” cannot be overstated. I can give someone a copy of pseudo-Excel to edit a document we’re working together on, all through the magic of Google Sheets’ share links. Instantly.

                                                            Granted, this is less of an advantage if you’re using something all the time, but without the web it would be harder to allow for multiple tools to co-exist in the same space. And am I supposed to have people download the Doodle application just to figure out when our group of 15 can go bowling?

                                                            1. 4

                                                              They are, in fact, downloading an application and running it locally.

                                                              That application can still be javascript; I just don’t see the point in making it perform DOM manipulation.

                                                              1. 3

                                                                As one who knows JavaScript pretty well, I don’t see the point of writing it in JavaScript, however.

                                                                1. 1

                                                                  A lot of newer devs have a (probably unfounded) fear of picking up a new language, and a lot of those devs have only been trained in a handful (including JS). Even if moving away from JS isn’t actually a big deal, JS (as distinct from the browser ecosystem, to which it isn’t really totally tied) is not fundamentally that much worse than any other scripting language – you can do whatever you do in JS in python or lua or perl or ruby and it’ll come out looking almost the same unless you go out of your way to use particular facilities.

                                                                  The thing that makes JS code look weird is all the markup manipulation, which looks strange in any language.

                                                                  1. 3

                                                                    JS (as distinct from the browser ecosystem, to which it isn’t really totally tied) is not fundamentally that much worse than any other scripting language

                                                                    (a == b) !== (a === b)

                                                                    but only some times…
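
                                                                    For instance (`==` coerces its operands before comparing; `===` never does):

                                                                    ```javascript
                                                                    // == applies type coercion before comparing; === does not.
                                                                    console.log(0 == '');            // true  ('' coerces to 0)
                                                                    console.log(0 === '');           // false (different types)
                                                                    console.log(null == undefined);  // true  (special case in the spec)
                                                                    console.log(null === undefined); // false
                                                                    // ...but when the types already match, the two agree:
                                                                    console.log(1 == 1, 1 === 1);    // true true
                                                                    ```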

                                                                    1. 3

                                                                      Javascript has gotchas, just like any other organically grown scripting language. It’s less consistent than python and lua but probably has fewer of these than perl or php.

                                                                      (And, just take a look at c++ if you want a faceful of gotchas & inconsistencies!)

                                                                      Not to say that, from a language design perspective, we shouldn’t prize consistency. Just to say that javascript is well within the normal range of goofiness for popular languages, and probably above average if you weigh by popularity and include C, C++, FORTRAN, and COBOL (all of which see a lot of underreported development).

                                                              2. 1

                                                                Web applications are expected to load progressively. And, because they are sandboxed, they are allowed to start instantly without asking you for permissions.

                                                                The same could be true of sandboxed desktop applications that you could stream from a website straight into some sort of sandboxed local VM that isn’t the web. Click a link, and the application immediately starts running on your desktop.

                                                              3. 1

                                                                I can’t argue with using the right tool for the job. People use Electron because there isn’t a flexible, good-looking, easy-to-use cross-platform UI kit. Damn the 500 MB of RAM usage for a chat app.

                                                                1. 4

                                                                  There are several good-looking, flexible, easy-to-use cross-platform UI kits; GTK, wxWidgets, and Qt come to mind.

                                                                  If you remove the ‘good-looking’ constraint, then you also get Tk, which is substantially easier to use for certain problem sets, substantially smaller, and substantially more cross-platform (in that it will run on fringe or legacy platforms that are no longer, or were never, supported by GTK or Qt).

                                                                  All of these have well-maintained bindings to all popular scripting languages.

                                                                  1. 1

                                                                    Qt apps can look reasonably good. I think webapps can look better, but I haven’t done extensive Qt customization.

                                                                    The bigger issues are (1) hiring: it’s easier to get JS devs than Qt devs, and (2) there’s little financial incentive to reduce memory usage. Using other people’s RAM is “free” for a company, so they do it. If their customers are in the US/EU/Japan, they can expect reasonably new machines, so they don’t see it as an issue. They aren’t chasing the market in Nigeria, however large its population.

                                                                    1. 5

                                                                      Webapps are sort of the equivalent of doing something in Qt but using nothing but the canvas widget (except a little more awkward, because you also don’t have pixel positioning). Whatever can be done in a webapp can be done in a UI toolkit, but the most extreme experimental stuff involves not using actual widgets (just like doing it as a webapp would).

                                                                      Using Qt doesn’t prevent you from writing in JavaScript; just use the Qt bindings on NPM. It means not using the DOM, but that’s a net win: it is faster to learn how to do something with a UI toolkit than to figure out how to do it through DOM manipulation, unless the thing you’re doing is (at a fundamental level) literally displaying HTML.

                                                                      I don’t think memory use is really going to be the main factor in convincing corporations to leave Electron. It’s not something that’s limited to the third world: most people in the first world (even folks who are in the top half of income) don’t have computers that can run Electron apps very well – but for a lot of folks, there’s the sense that computers just run slow & there’s nothing that can be done about it.

                                                                      Instead, I think the main thing that’ll drive corporations toward more sustainable solutions is maintenance costs. It’s one thing to hire cheap web developers & have them build something, but over time keeping a hairball running is simply more difficult than keeping something that’s more modular running – particularly as the behavior of browsers with respect to the corner cases that web apps depend upon to continue acting like apps is prone to sudden (and difficult to model) change. Building on the back of HTML rendering means a red queen’s race against 3 major browsers, all of whom are changing their behaviors ahead of standards bodies; on the other hand, building on a UI library means you can specify a particular version as a dependency & also expect reasonable backwards-compatibility and gradual deprecation.

                                                                      (But, I don’t actually have a lot of confidence that corporations will be convinced to do the thing that, in the long run, will save them money. They need to be seen to have saved money in the much shorter term, & saying that you need to rearchitect something so that it costs less in maintenance over the course of the next six years isn’t very convincing to non-technical folks – or to technical folks who haven’t had the experience of trying to change the behavior of a hairball written and designed by somebody who left the company years ago.)

                                                                    2. 1

                                                                      I understand that these tools are maintained in a certain sense. But from an outsider’s perspective, they are absolutely not appealing compared to what you see in their competitors.

                                                                      I want to be extremely nice, because I think that the work done on these teams and projects is very laudable. But compare the wxPython docs with the Bootstrap documentation. I also spent a lot of time trying to figure out how to use Tk, and almost all resources… felt outdated and incompatible with whatever toolset I had available.

                                                                      I think Qt is really good at this stuff, though you do have to marry its toolset for a lot of it (perhaps this has gotten better).

                                                                      The elephant in the room is that no native UI toolset (save maybe Apple’s stack?) is anywhere near as good as the diversity of options and breadth of tooling available in DOM-based solutions. Chrome dev tools is amazing, and even simple stuff like CSS animations gives a lot of options that would be a pain in most UI toolkits. Out of the box it has so much functionality, even if you’re working purely vanilla/“no library”. Though on this point things might have changed, jQuery is basically the optimal low-level UI library, and I haven’t encountered native stuff that gives me the same sort of productivity.

                                                                      1. 3

                                                                        I dunno. How much of that is just familiarity? I find the bootstrap documentation so incomprehensible that I roll my own DOM manipulations rather than using it.

                                                                        Tk is easy to use, but the documentation is tcl-centric and pretty unclear. Qt is a bad example because it’s quite heavy-weight and slow (and you generally have to use Qt’s versions of built-in types and do all sorts of similar stuff). I’m not trying to claim that existing cross-platform UI toolkits are great: I actually have a lot of complaints about all of them; it’s just that, in terms of ease of use, performance, and consistency of behavior, they’re all far ahead of web tech.

                                                                        When it comes down to it, web tech means simulating a UI toolkit inside a complicated document rendering system inside a UI toolkit, with no pass-throughs, and even web tech toolkits intended for making UIs are really about manipulating markup and not actually oriented around placing widgets or orienting shapes in 2d space. Because determining how a piece of markup will look when rendered is complex and subject to a lot of variables not under the programmer’s control, any markup-manipulation-oriented system will make creating UIs intractably awkward and fragile – and while Google & others have thrown a great deal of code and effort at this problem (by exhaustively checking for corner cases, performing polyfills, and so on) and hidden most of that code from developers (who would have had to do all of that themselves ten years ago), it’s a battle that can’t be won.

                                                                        1. 5

                                                                          It annoys me greatly because it feels like nobody really cares about the conceptual damage incurred by simulating a UI toolkit inside a document renderer inside a UI toolkit, instead preferring to chant “open web!” And then this broken conceptual basis propagates to other mediums (VR) simply because it’s familiar. I’d also argue the web as a medium is primarily intended for commerce and consumption, rather than creation.

                                                                          It feels like people care less about the intrinsic quality of what they’re doing and more about following whatever fad is around, especially if it involves tools pushed by megacorporations.

                                                                          1. 2

                                                                            Everything (down to the transistor level) is layers of crap hiding other layers of different crap, but web tech is up there with autotools in terms of having abstraction layers that are full of important holes that developers must be mindful of – to the point that, in my mind, rolling your own thing is almost always less work than learning and using the ‘correct’ tool.

                                                                            If consumer-grade CPUs were still doubling their clock speeds and cache sizes every 18 months at a stable price point, and these toolkits properly hid the markup, then it’d be a matter of whether you consider waste to be wrong in principle or whether you’re balancing it against other priorities; but neither of those things is true, & so choosing web tech means you lose across the board in the short term and lose big across the board in the long term.

                                                        2. 1

                                                          Youtube would be a website where you click on a video and it plays. But it wouldn’t have ads and comments and thumbs up and share buttons and view counts and subscription buttons and notification buttons and autoplay and add-to-playlist.

                                                          Google docs would be a desktop program.

                                                          Slack would be IRC.

                                                          1. 1

                                                            What you’re describing is the HTML5 video tag, not a video sharing platform. Minimalism is good, I do agree, but don’t confuse it with having no features at all.

                                                            Google docs would be a desktop program.

                                                            This is a different debate, about whether the web should be used for these kinds of tasks at all, not about whether it’s minimalist.

                                                      1. 2

                                                        Haven’t read the paper entirely, but enjoyed the walkthrough of design decision references and the list of other implementations. You may also want to share your work with https://twitter.com/clausatz who’s trying to make a museum of xanadu/zigzag implementations.

                                                        (fenfire was one of the projects that still makes me think of where zigzag could go, https://twitter.com/i/web/status/1012438773253263361 )

                                                        1. 1

                                                          Thanks for pointing me to this guy! It looks like he might already be in touch with the team but I’ve sent him some info anyhow.

                                                          (Internal Xanadu documentation, particularly of the timeline of unreleased implementations, is messy and piecemeal, so even if he’s in touch with Ted that doesn’t mean he’s actually getting a clear picture of the timeline.)

                                                        1. 3

                                                          I’m getting a 404. Do you have a mirror?

                                                          1. 3

                                                            It’s working for me. Here’s a Wayback copy.

                                                            1. 3

                                                              Thanks!

                                                          1. 3

                                                            I ought to rewrite this to bring it up to my current quality standards. Something like a third of the document is just bitching about bad UI libraries.

                                                            Nevertheless, if somebody is looking at my open source ZigZag backend and wondering how to write a frontend for it, this probably answers their question.

                                                            1. 3

                                                              I recently finished Karel Čapek’s “War With the Newts”, which I highly recommend. Fantastic sci-fi satire.

                                                              Now I’m in reading limbo. I’ve read a few chapters into several books that seem interesting (on fluid simulation, computer vision, knot theory, computational geometry), but haven’t had one really capture my interest yet. At some point I’ll just have to pick one.

                                                              1. 1

                                                                If you’re into early 20th century political sci-fi satire, may I recommend The Clockwork Man by E. V. Odle? I found it surprisingly modern in its prose style & pretty amusing. (If you don’t feel like buying it & don’t mind reading online, hilobrow.com serialized it about a decade ago as a prelude to printing their new edition.)

                                                                1. 2

                                                                  Thanks! I’m going to start reading it tonight.

                                                                  I couldn’t find it from their front page, but the hilobrow.com version is still online: http://hilobrow.com/2013/03/20/the-clockwork-man-1/.

                                                              1. 3

                                                                In progress:

                                                                Recently finished:

                                                                • Building E-commerce Applications, a complete waste of money and basically just a lazy compilation of unedited blog posts. Booooo.

                                                                • Come and Take It: The Gun Printer’s Guide to Thinking Free, by Cody Wilson of Defense Distributed fame. I finished this probably a week before the current kerfuffle started. There’s a whoooole lot of self-congratulatory bullshit and bluster in this, as Wilson is first and foremost (in my opinion) an attention whore, but buried in there are a couple of good reflections on the role of toolmakers in the pursuit of independence.

                                                                • Come as You Are, a delightful book by Emily Nagoski that I heard about through OhJoySexToy (webcomic about sexual health and practices). It covers a lot of interesting academic information about sex, attraction, and romance, and can help in debugging certain failure modes of relationships or in preemptively being a better partner.

                                                                1. 3

                                                                  buried in there are a couple of good reflections on the role of toolmakers in the pursuit of independence.

                                                                  We cannot be free until we control the means of production? That sounds like a good reflection, all right :-)

                                                                  (Note: this may sound like I’m trying to rile you. I’m not, I am genuinely amused to see Marx echoed in this unexpected context.)

                                                                  1. 4

                                                                    As the good Chairman once said, “Political power grows out of the barrel of the gun…”.

                                                                    A lot of Marxists, communists, and libertarians I think would actually have a lot to talk to each other about if they weren’t so busy engaging in culture war these days.

                                                                    1. 3

                                                                      It isn’t too surprising, since all three sprang from the same philosophical tradition.

                                                                      A funny aside: a friend of mine recently noted, with regard to economics, we’re all Marxists now.

                                                                      1. 3

                                                                        Yup! Certain groups don’t really like to think about it, but because Marx did the first serious systematic analysis of how economies worked on a global scale (and coined the word “capitalism”, although contrary to popular opinion he did not coin but merely redefined “communism”), all modern economics owes a debt to Marx at least as big as the one it owes to Von Neumann. Even those opposed to Marx’s conclusions are using methods he pioneered to fight them. (Or, to be more direct: “economics begins with Marx” / “Karl Marx invented capitalism”)

                                                                        1. 2

                                                                          You might like this recent podcast episode from BBC Thinking Allowed: Marx and Marxism: https://www.bbc.co.uk/programmes/b0b2kpm0

                                                                  2. 3

                                                                    Come and Take It: The Gun Printer’s Guide to Thinking Free, by Cody Wilson of Defense Distributed fame. I finished this probably a week before the current kerfuffle started. There’s a whoooole lot of self-congratulatory bullshit and bluster in this, as Wilson is first and foremost (in my opinion) an attention whore, but buried in there are a couple of good reflections on the role of toolmakers in the pursuit of independence.

                                                                    This was on my reading list; but, after I did the ol’ Amazon “Look Inside,” I took it off because it looked like the signal/noise would be unacceptable. Please give a shout if it ends up being worthwhile. I watched a few of his pre-DD/early-DD lectures on philosophy, and the guy gave me stuff to chew on.

                                                                    1. 2

                                                                      So, again, having finished it I think the same points could be handled in a pamphlet instead of the drawn-out narrative Wilson attempts.

                                                                      1. 1

                                                                        Thanks for humouring my obviously lacking reading comprehension skills. 🤦🏾‍♂️

                                                                      2. 1

                                                                        Lectures on philosophy? Had no idea he was into that, mind sharing some links?

                                                                        1. 2

                                                                          Cody Wilson Philosophy, Part I is the first of a two part series.

                                                                          Why I printed a gun is short and sweet; but, doesn’t get too deep.

                                                                    1. 6

                                                                      The challenging open issue I see with this sort of idea is the question of what links people will see by default when they view a page. If they see only the links the author put in, then we will generally have a situation no different than today (as most people will never change that default). If they see some additional set of links by default, then there will be ferocious competition from spammers to get their links included in that set and in general large arguments about what links will be included in it.

                                                                      For better or worse, HTML and browser technology today has a simple, distributed, scalable, and clearly fair answer to the question of ‘what links appear in a page by default’, and it’s one that keeps browsers and other central parties out of disputes (and generally out of the game of influencing the answer).

                                                                      (I admit that these days I look at all new protocols through the lens of ‘how can they be abused by spammers and other bad people’, but partly this is because we know spammers and other bad people are out there and will actively attempt to abuse anything they can.)

                                                                      1. 8

                                                                        The challenging open issue I see with this sort of idea is the question of what links people will see by default when they view a page.

                                                                        This is something we were concerned with at Xanadu during the development of XanaSpace (since we had all links in the form of loadable ODLs). The conclusion we came to was that a document author could recommend a particular set of links to go with their document, and that furthermore, people would produce and share collections of links (which, since they are not part of the content & are bidirectional, combine with transclusion to add additional context even to documents where the author is unaware of them) in sort of the same way as kottke.org or BoingBoing curates collections of other people’s web pages. Any resident links (including formatting links) would be applied when relevant (i.e., when the original source of any transcluded content overlapped with something mentioned in a link), and a person’s personal link collection would be private until shared.
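
                                                                        The “applied when relevant” rule amounts to an interval-overlap check between link endpoints and the source spans of transcluded content. A rough sketch, with hypothetical object shapes rather than Xanadu’s actual data model:

                                                                        ```javascript
                                                                        // A link endpoint and a transclusion each reference a span of some
                                                                        // permanent source document. A resident link applies to a view when
                                                                        // one of its endpoints overlaps the source span of any transclusion.
                                                                        function overlaps(a, b) {
                                                                          return a.doc === b.doc && a.start < b.end && b.start < a.end;
                                                                        }

                                                                        function relevantLinks(links, transclusions) {
                                                                          return links.filter((link) =>
                                                                            link.endpoints.some((ep) =>
                                                                              transclusions.some((t) => overlaps(ep, t.source))));
                                                                        }
                                                                        ```

                                                                        Note that the document author never has to know about the link: relevance falls out of the addresses alone.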

                                                                        This sort of mirrors the fediverse / SSB model of requiring intentional hops between independent communities. Taking advantage of default link sets for spamming purposes only makes sense when the landscape is flat – where everybody sees everything unless they take countermeasures, and thus anything, no matter the quality, scales up indefinitely with no further human input. If, on the other hand, things only spread when they are actively shared by individuals from across different communities (each acting as curator for the sake of their community), the impact of these problems becomes small and it ceases to be worth the effort for bad actors.

                                                                        Ultimately, the links that appear on the page should be controllable by the person viewing the page, just like the formatting of the page should be under their control. In the case of links, that probably means trusting your friends and a handful of professional curators to have good taste & not scam you.

                                                                        1. 3

                                                                          This is really interesting. I did a reasonable chunk of reading around Xanadu in my research for this project, but ended up spending more time looking at ‘open hypermedia’ stuff and didn’t have the time to dive into a lot of the nitty gritty details as much as I would have liked. The details you’ve mentioned, for instance, seem to have totally passed me by.

                                                                          Curiously, although these kinds of ecosystem considerations weren’t really the main focus of my work, I think I came to somewhat similar (though less developed) conclusions in the assumptions I made for my prototyping work. If you wouldn’t mind, do you think you’d be able to point me towards a source for the design discussions you mentioned? I’d love to read more about this.

                                                                          1. 7

                                                                            Most Xanadu documentation is purely internal to the project, & the stuff that gets released is usually pretty non-technical. The discussions around how ODLs would be distributed were never made public at all, as far as I can tell (and they are part of a now-abandoned subproject). However, I did a technical overview of all of the Xanadu stuff I was privy to that wasn’t under trade secret (mirrored here), and this is probably the most complete & accessible public documentation on the project.

                                                                            ODL distribution isn’t covered in detail here, and XanaSpace never got to the point where it was seriously discussed in a systematic way, though there were some ideas thrown around, which I should document.

                                                                            Specifically: there was the concept of a ‘xanaful’ – a tarball containing all of the files (EDLs, ODLs, links, and sourcedocs) necessary for constructing a constellation of related documents. Paths in the tar format are just strings prefixing the content blob, so we were planning to use the full permanent address of each piece of content as its path, and check all resident xanaful tarballs for those addresses before fetching them from elsewhere. The idea is that a xanaful would be a convenient way for people to share not-yet-public documents, distribute stuff on physical media to be used where network access is limited, send bookmarks to friends in big chunks, distribute private ODLs (which contain formatting – and therefore themeing – links in addition to inter-content links), and get people who are a little skeptical to try a xanadu viewer out. Sending a xanaful out-of-band (for instance, by email) would be one method of ODL distribution.
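
                                                                            Since tar headers are just fixed-offset string fields, the lookup side of this idea is tiny. A rough sketch (the address format and function names here are mine, for illustration, not project code):

                                                                            ```javascript
                                                                            // Each tar entry's path is the permanent address of its content,
                                                                            // so a viewer can scan resident xanaful tarballs for an address
                                                                            // before fetching it from the network.
                                                                            function listTarPaths(buf) {
                                                                              const paths = [];
                                                                              let off = 0;
                                                                              while (off + 512 <= buf.length) {
                                                                                const header = buf.slice(off, off + 512);
                                                                                // An all-zero block marks the end of the archive.
                                                                                if (header.every((b) => b === 0)) break;
                                                                                // Name: 100 bytes at offset 0; size: 12 octal bytes at offset 124.
                                                                                const name = header.slice(0, 100).toString('ascii').replace(/\0.*$/, '');
                                                                                const size = parseInt(header.slice(124, 136).toString('ascii'), 8) || 0;
                                                                                paths.push(name);
                                                                                // Entry data is padded out to a 512-byte boundary.
                                                                                off += 512 + Math.ceil(size / 512) * 512;
                                                                              }
                                                                              return paths;
                                                                            }

                                                                            // Check resident xanafuls before going to the network.
                                                                            function isResident(xanafuls, address) {
                                                                              return xanafuls.some((buf) => listTarPaths(buf).includes(address));
                                                                            }
                                                                            ```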

                                                                            I wanted & was pushing for more of a peer-to-peer system. Specifically, I wanted individual XanaSpace applications to serve up the public parts of their caches to peers (to limit strain on hosts), and I wanted inter-peer communication to support a kind of friends network for sharing EDLs and ODLs directly with particular people. I was thinking of using gopher for this, since gopher is awfully simple to implement. (This is the system I wanted to use for transcopyright’s encryption oracle: the author’s machine or some trusted proxy would take requests for the OTP segment, check against a whitelist of authorized users, and distribute the OTP segment or a zeroed-out dummy.) There are some legal issues with this (which IPFS and SSB are discussing as well) and Ted wasn’t really comfortable with jumping into full distributed computing; also, this cut out any potential profit, and Ted still thinks of Xanadu as potentially profitable. As a result, none of these particular ideas got taken very seriously or had serious development work attached to them.
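
                                                                            The oracle’s whitelist behavior is similarly small. A sketch with made-up names:

                                                                            ```javascript
                                                                            // A trusted host answers requests for the one-time-pad segment that
                                                                            // decrypts a span of content: authorized requesters get the real
                                                                            // segment, everyone else a zeroed-out dummy of the same length.
                                                                            function serveOtpSegment(requester, whitelist, otpSegment) {
                                                                              return whitelist.has(requester)
                                                                                ? otpSegment
                                                                                : Buffer.alloc(otpSegment.length); // zeroed dummy
                                                                            }
                                                                            ```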

                                                                            Post-XanaSpace, our translit viewers have ditched the ODL entirely in favor of sticking link addresses in the EDL. (Our code always supported intermixing the two, but there was a conceptual division that I thought was useful.) It makes things easier for newcomers to understand but I think it does so at the cost of some clarity: now, people can still pretend that the author owns all the links in their document, and this only becomes clearly untrue when two authors link transclusions of overlapping segments from two resident documents. The web-based translit viewers only support one resident document at a time, so this never happens. (Having many resident documents at once is a vital feature & so we shouldn’t expect later implementations to keep this trend except accidentally.)

                                                                      1. 6

                                                        I usually read a weird mixed bag of computer history books and cyberculture books from the ’90s where everyone thinks the future is awesome or the future is scary.

                                                                        I’m currently reading Net Slaves 2.0, from the website of the same name, focusing on little stories from the dot-com crash, https://www.amazon.com/Net-Slaves-2-0-Tales-Surviving/dp/1581152841

                                                                        1. 4

                                                          Have you read Steven Johnson’s Interface Culture? It’s my favorite in that genre.

                                                                          1. 1

                                                                            That one’s new to me. Thanks for the recommendation!

                                                                        1. 3

                                                          I recently started Occulture, an essay collection by Carl Abrahamsson. I hadn’t heard of the guy, but Erik Davis, Gary Lachman, and Mitch Horowitz have all given it the thumbs-up (and they are my go-to authors for clear-eyed, unobscurantist histories of the impact of occult ideas on wider society).

                                                          I’m slowly working my way through Baudrillard’s Simulacra and Simulation, which though short is very dense. I’m pretty sure I’ve read the whole thing before, but I didn’t have the historical context to really understand some of the examples given back then. It’s easier to understand once you’ve read Society of the Spectacle or become familiar with the ideas of semiotics. The tendency among midcentury french thinkers to look down on signalling as shallow rubs me the wrong way, but I’m sure there’s stuff in here that hasn’t yet been reinvented in cogsci, so it’s worth diving in.

                                                          I’ve been slowly plodding my way through Playing at the World for nearly a year. It’s an exhaustive history of D&D (emphasis on exhaustive), and it covers a lot of interesting material just by digging very deep. The part I’m on right now is an extensive history of the Prussian tradition in wargaming and its influence on the marketing of British toy soldier accessories.

                                                                          I recently finished Jeff VanderMeer’s Annihilation, and before that, the VanderMeers’ anthology The New Weird.