1. 1

    In addition to my ThinkPad, I operate a remote server (for around 10 Euros/month) for backups (via rsync) and other fun activities.

    Where did you get this remote server? I’ve been meaning to set up an FTP box on Hetzner to make offsite backups, but I’m curious what other services people use in Germany.

    1. 1

      I am using Kimsufi: https://www.kimsufi.com/de/

      Why? Because I want my server location to be in France! Maybe other providers offer the same, but I started using it a while ago and have just stuck with it!

    1. 7

      But, as I dug deeper and deeper into the language, and watched tutorials, I figured: I can’t do Systems Programming on a MacBook Pro running macOS. Don’t get me wrong: Once you mastered a skill, it doesn’t matter where you operate it on. Also a different machine doesn’t make a huge difference. Although, I might disagree a bit with that. Take losing weight for example.

      I disagree with this logic, at least as stated in the article.

      I think what you were going for was “I wanted to switch to a working environment where I could strive to understand (and possibly modify!) every aspect of it all the way down to the bare metal, and in so doing increase my comfort level with and skill in systems programming”.

      I don’t mean to rewrite your article for you, and I’m hardly a writer, but I think you’re actually driving at something worth saying here.

      1. 3

        Thanks, that’s exactly what I meant. English is my second language, so sometimes it shows :)

      1. 23

        This is basically a blog post on how the author set up an Arch Linux machine to code Rust on. There’s almost no Rust-specific content.

        1. 1

          That’s right. I just wanted to give some inspiration and show why changing your hardware/software setup can also lead to a better understanding of the hows and whys of a programming language, in this case Rust.

          1. 6

            changing your hardware/software setup can also lead to a better understanding of the hows and whys of a programming language, in this case Rust.

            This point wooshed past me. I wrote quite a bit of low-level stuff in Rust on macOS, including hand-rolled linear algebra code with SIMD intrinsics. I could have done the same on Windows with WSL if I had wanted to (or plain Windows, but that’s an alien environment to me). Just inspect the generated machine code (e.g. with cargo-asm). The argumentation here seems to be more about the reduction of distractions and aesthetics. You can work with a full-screen iTerm with tmux on the Mac as well and put it in permanent do not disturb mode to avoid notifications, so I am not sure how valid this point is.

            There is an interesting story to tell about system programming on Linux vs. Mac, but it would be about perf vs. DTrace/Instruments, valgrind for finding memory leaks vs. leaks, etc.

            I have mostly switched back to Linux, but my primary motivation is that if I find a bug or possible improvement in the software or distro (NixOS) that I use, I can fix it myself and submit a patch upstream. Among other things, I am really fed up with macOS bugs, some of which have stuck around for years (e.g. Preview crashing frequently on re-LaTeXing PDF files) that I cannot attempt to fix because the source code is proprietary.

            1. 4

              It seemed more about eliminating distractions than languages or systems programming. It didn’t seem language-specific at all — that same setup would let you focus on Clojure, C++, Haskell, or writing a novel.

              I didn’t get the point about not being able to do systems programming on macOS. I mean, all those distractions on a Mac are enabled by mind-bogglingly clever systems programming that was done on macOS!

              Is the idea that by setting up the right “feng shui” on your computer, you create a mental space conducive to the kind of work you want to do? In other words, it’s not all about lack of distraction, but about the right aesthetics? So if you want to do a certain kind of programming, namely headless network servers and the like, you want to work on a bare Linux system? I can understand that — I just recently installed the Go font to do Go in, which in some weird way is now associated with thinking like Russ Cox. :)

          1. 8

            I enjoy your enthusiasm for Rust, Bastian, but I think you need to be careful with statements that can be interpreted as subjective or incorrect, as this is what will stick in folks’ minds instead of the message you are trying to send. For example:

            Rust can be compiled to a single binary, statically linked with C libraries to not even need a Rust runtime anymore.

            Rust doesn’t have a runtime in the conventional sense of the word. It links against libc by default, as do C programs, and has some pre-main code and panic-handling code, but is generally runtime-free whether statically or dynamically linked.

            The compiler helps bringing safe applications out there. Just implementing a web server in Rust will stop SQL Injections and other security risks.

            Implementing a web server in Rust is not enough to eliminate SQL injection. It’s possible to write Rust code susceptible to SQL injection. The same best practices that avoid SQL injection in other languages also apply to Rust, such as not interpolating unsanitised user-controlled content into SQL queries. Using a library like diesel can help make this safer, just as ActiveRecord can make it safer in Ruby.
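
            To make that concrete, here is a minimal sketch of the parameterised-query approach, using the rusqlite crate as just one example library (diesel and sqlx work similarly; the table layout here is invented for illustration):

            use rusqlite::{params, Connection, Result};

            // Safe: the SQL text is fixed, and the user-controlled value is passed
            // as a bound parameter, so it can never be parsed as SQL.
            fn find_user_id(conn: &Connection, name: &str) -> Result<Option<i64>> {
                let mut stmt = conn.prepare("SELECT id FROM users WHERE name = ?1")?;
                let mut rows = stmt.query(params![name])?;
                match rows.next()? {
                    Some(row) => Ok(Some(row.get(0)?)),
                    None => Ok(None),
                }
            }

            // Still injectable, and nothing in Rust stops you from writing it:
            // let sql = format!("SELECT id FROM users WHERE name = '{}'", name);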

            If you’re interested, the Rust Community Team can put you in touch with folks who can proofread posts prior to publishing.

            Anyway I don’t want to discourage you, just provide some feedback.

            1. 1

              Thanks for the feedback! Will do; maybe this one was published too fast. Usually I let articles sit for a while and get proofreaders in early!

              Will update the post accordingly!

            1. 2

              Does anyone have uBlock Origin filters or UserStyle to decrapify Medium?

              If not I will make one.

              1. 4

                As much as I hate the name, https://makemediumreadable.com/ really helps with this.

                  1. 1

                    I use Stylus to apply custom CSS.

                    .js-stickyFooter,
                    .overlay,
                    .js-metabar,
                    .js-metabarSpacer,
                    .js-postShareWidget,
                    .collectionHeader,
                    .progressiveMedia-image,
                    .progressiveMedia-canvas,
                    .progressiveMedia,
                    .aspectRatioPlaceholder,
                    .butterBar,
                    .postMeterBar {
                        display: none;
                    }
                    .avatar-image {
                        border-radius: 4px !important;
                    }
                    
                        1. 1

                          needs to cover the fullscreen popup as well

                        1. 2

                          I wish there were another platform where I could publish articles as easily as I can on Medium. I hate their layout for non-logged-in users (and a lot more). But it’s easy to see how well an article is doing, and to write on the go.

                          1. 8

                            Maybe https://write.as/ or https://dev.to could work. As a reader, I certainly prefer both over Medium.

                          1. 20

                            I like Rust, there are good use cases for it, but “Rust is the new JavaScript”, etc.? Rust’s restrictions, which make it so different and useful, also slow down development velocity and application design. Hyping it up to be something it isn’t doesn’t help anyone (except maybe the author’s click count). Different requirements, different tools, different solutions. Want a good explanation of the topic (and a pro-Rust talk)? See Bryan Cantrill’s “Platform Values”: https://youtu.be/2wZ1pCpJUIM?t=126

                            1. 1

                              Fair point! Although the headlines for those paragraphs are a bit catchy, they make a true point: Bitcoin is a decentralised, sort of self-governed structure; Rust is doing the same. JavaScript runs everywhere; Rust does too.

                            1. 2

                              A significant reason I find it hard to adopt other tech stacks over Node.js for the web is the large existing ecosystem, especially the tooling, compilers and so on. Would Rust be a good contender today, without one having to implement those lower-level building blocks?

                              1. 4

                                It’s an ongoing process, and of course a decade of Node is not caught up with in a few years. http://www.arewewebyet.org/

                                1. 1

                                  That’s an amazing link, and exactly the question I was pondering, thank you!

                              1. 2

                                4 things I want on a computer to be happy: i3, tmux, Firefox, and a music player (e.g. Spotify) https://timetoplatypus.com/screenshots.html

                                1. 1

                                  On which machine are you running your setup?

                                  1. 1

                                    I normally run Arch Linux

                                1. 13

                                  For two of my three projects, my setup is almost exactly the same.

                                  Three tmux panes, holding Vim, GHCi, and an Elm compiler. I work from anywhere, and I still use a 13” MacBook Air as I have done for the better part of a decade. I have tried larger setups, but I always revert back to this. I need portability. Some developers say they need external displays for lots of screen real estate, but I actually only have one pair of eyeballs so I can’t focus on much more than what I already have.

                                  Almost everything is done in nix shells. Sometimes I’ll have other tmux windows containing a psql, mutt, ssh (over nixops), redis-cli, or weechat session. My tmux status bar has a little weather widget that I made.

                                  1. 9

                                    Would you mind posting a non-Instagram link? I blocked Facebook services in my hosts and would love to see your desktop ;)

                                    1. 8

                                      I blocked Facebook services

                                      Very wise :)

                                      Would you mind posting a non-Instagram link?

                                      Sure. Hope this works: https://imgur.com/a/Vy3gv9E

                                    2. 6

                                      Vim, GHCi, and an Elm compiler

                                      Living the dream, I see. :D

                                      1. 2

                                        I’ve come a long way from having to build everything in WordPress :)

                                      2. 5

                                        Battlestation that inspires Lobsters to achieve the career they love. Ten upvotes. :)

                                        1. 3

                                          The 13” non-retina MacBook Air is still my favourite ever laptop, even though I’ve long since moved on. My mid-2011 model is on the shelf, awaiting a fresh installation of Debian.

                                          1. 2

                                            But you can see the weather out the window ;)

                                          1. 2

                                            My question here is really: why not drop the browser completely and use Chrome instead? In the end, it’s just branding. Which browser I use doesn’t make much of a difference anymore.

                                            Right now I am using Brave Beta (based on Chromium) and Firefox. I don’t mind either of them.

                                            Or, why wouldn’t this great new open-source company Microsoft join forces with Firefox and help them deliver a better browser than Chrome? Then they could have their Microsoft apps as extensions or something like that.

                                            Then they would also have market reach on macOS and Linux, where people will still have a hard time installing Edge, purely for branding reasons.

                                            1. 1

                                              Chrome wants to sync with your Google Account. Edge wants to sync with your Live.com Account.

                                            1. 3

                                              This is interesting, but another 20 years of software development continues to prove him wrong.

                                              The current dominant paradigm is flat, single-ordered lists, and search (perhaps augmented with tags like our dear lobste.rs here).

                                              This is even more of all the bad stuff he’s railing against at the start of the article, but this is the stuff that works and there are innumerable other approaches dead or dying.

                                              I suspect that for UIs, less freedom is simpler (one button, one list, one query, one purpose, etc.) and not the other way around.


                                              For developers, I think he was right, and it’s also what we’ve got today. It’s clearly preferable for developers to have a simple model to work against (like URIs + JSON).

                                              apt-get install firefox (which unpacks to a resource identifier and a standardized, machine-readable package file) is quite probably as good as it gets. It’s a directed graph instead of an undirected graph like his zipper system, but undirected graphs require an unrealistic (and in my opinion probably harmful) amount of federation between producers of APIs and their consumers.

                                              1. 7

                                                When the pitch is “good computing is possible”, “bad computing has dominated” isn’t actually a great counterargument – particularly when the history of so much of it comes down to dumb luck, path dependence, tradeoffs between technical ability & marketing skills, and increasingly fast turnover and the dominance of increasingly inexperienced devs in the industry.

                                                If you’re trying to suggest that the way things shook out is actually ideal for users – I don’t know how to even start arguing against that. If you’re suggesting that it’s inevitable, then I can’t share that kind of cynicism because it would kill me.

                                                A better world is possible but nobody ever said it would be easy.

                                                1. 4

                                                  Your comment is such a good expression of how I feel about the status quo! I was just having a similar discussion in another thread about source code, where I said “text is hugely limiting for working with source code”, and somebody objected with “but look at this grep-like tool, it’s totally enough for me”. I can understand when people raise practical objections to better tools (hard to get traction, hard to interface with existing systems etc.). What’s dispiriting is the refusal to even admit that better tools are possible.

                                                  1. 2

                                                    The mistake is believing that we’re anywhere close to status quo in software development. The tools and techniques used today are completely different from the tools we used 5 and 10 years ago, and are almost unrecognizable next to the tools and techniques used 40 and 50 years ago.

                                                    Some stuff sticks around (keyboards are fast!), but other things change, and there’s loads of innovative stuff going on all the time. With reference to visual programming: I recently spent a weekend playing with the Unreal 4 SDK’s block programming language (they call it Blueprints); it has fairly seamless C++ integration and I was surprised by how nice it was for certain operations… You might also be interested in Scratch.

                                                    Often, these systems are out there, already existing. Sometimes they’re not in the mainstream because of institutional momentum, but more often they’re not in the mainstream because they’re not good (the implementations or the ideas themselves).

                                                    The proof of the pudding is in the eating.

                                                    1. 4

                                                      I don’t think I can agree with this. I’m pretty sure the “write code, compile, run” approach to writing code that is still in incredibly widespread use is over 40 years old. Smalltalk was developed in the 70s. Emacs was developed in the 70s. Turbo Pascal, which had an integrated compiler and editor, was released in the mid-80s (more than 30 years ago). CVS was developed in the mid-80s (more than 30 years ago). Borland Delphi and Microsoft Visual Studio, which were pretty much full-fledged IDEs, were released in the 90s (20 years ago). I could go on.

                                                      What do we have now that’s qualitatively different from 20 years ago?

                                                      1. 3

                                                        Yup. Some very shallow things have changed but the big ideas in computing really all date to the 70s (and even the ‘radical’ ideas from the 70s still seem radical). I blame the churn: half of the industry has less than 10 years of experience, and degree programs don’t emphasize an in-depth understanding of the variety of ideas (focusing instead on the ‘royal road’ between Turing’s UTM paper and Java, while avoiding important but complicated side-quests into domains like computability).

                                                        Somebody graduating with a CS degree today can be forgiven for thinking that the web is hypertext, because they didn’t really receive an education about it. Likewise, they can be forgiven for thinking (for example) that inheritance is a great way to do code reuse in large java codebases – because they were taught this, despite the fact that everybody knows it isn’t true. And, because more than half their coworkers got fundamentally the same curriculum, they can stay blissfully unaware of all the possible (and actually existing) alternatives – and think that what they work with is anywhere from “all there is” to “the best possible system”.

                                                        1. 1

                                                          I got your book of essays - interested in your thinking on these topics.

                                                          1. 1

                                                            Thanks!

                                                            There are more details in that, but I’m not sure whether or not they’ll be any more accessible than my explanation here.

                                                        2. 3
                                                          • Most languages aren’t AOT-compiled; there’s usually a JIT in place (if even that; Ruby and Python are run-time languages through and through). These languages did not exist 20 years ago, though their ancestors did (and died, and had some of the good bits resurrected; I use Clojure regularly, which is both modern and a throwback).

                                                          • Automated testing is very much the norm today; it was a fringe idea 10 years ago and something that you were only crazy enough to do if you were building rockets or missiles or something.

                                                          • Packages and entire machines are regularly downloaded from the internet and executed in production. I had someone tell me that a docker image was the best way to distribute and run a desktop Linux application.

                                                          • Smartphones, and the old-as-new challenges of working around vendors locking them down.

                                                          • The year of the Linux desktop surely came sometime in the last or next 20 years.

                                                          • Near dominance of Linux in the cloud.

                                                          • Cloud computing and the tooling around it.

                                                          • The browser wars ended, though they started to heat up before the 20 year cutoff.

                                                          • The last days of Moore’s law and the 10 years it took most of the industry to realize the party was over.

                                                          • CUDA, related, the almost unbelievable advances in computer graphics. (Which we aren’t seeing in web/UI design, again, probably not for lack of trying, but maybe the right design hasn’t been struck)

                                                          • Success with Neural Networks on some problem sets and their fledgling integration into other parts of the stack. Wondering when or if I’ll see a NN based linter I can drop into Emacs.


                                                          I could go on too. QWERTY keyboards have been around 150 years because they’re good enough and the alternatives aren’t better than having one standard. I don’t think that the fact that my computer has a QWERTY keyboard on it is an aberration or a failure, and not for lack of experimentation on my own part and on the parts of others. Now if only we could do something about that caps lock key… Oh wait, I remapped it.


                                                          It’s easy to pick out the greatest hits in computer science from 20, 30, and 40 years ago. There’s a ton of survivorship bias, and you don’t point to all of those COBOL-alikes and stack-based languages which have all but vanished from the industry. If it seems like there’s no progress today, it’s only because it’s more difficult to pick the winners without the benefit of hindsight. There might be some innovation still buried that makes two-way linking better than one-way linking, but I don’t know what it is, and my opinion is that it doesn’t exist.

                                                          1. 3

                                                            Fair enough. Let me clarify my comment, which was narrowly focused on developer tools for no good reason.

                                                            There is no question that there have been massive advances in hardware, but I think the software is a lot more hit and miss.

                                                            In terms of advances on the software front, I would point to distributed storage in addition to cloud computing and machine learning. For end users, navigation and maps are finally really good too. There are probably hundreds of other specific examples like incredible technology for animated films.

                                                            I think my complaints are to do with the fact that most of the effort in the last 20 years seems to have been directed to reimplementing mainframes on top of the web. In many ways, there is churn without innovation. I do not see much change in software development either, as I mentioned in the previous comment (I don’t think automated testing counts), and it’s what I spend most of my time on so there’s an availability bias to my complaints. There is also very little progress in tools for information management and, for lack of a better word, “end user computing” (again, spreadsheets are very old news).

                                                            I think my perception is additionally coloured by the fact that we ended up with both smartphones and the web as channels for addictive consumption and advertising industry surveillance. It often feels like one step forward and ten back.

                                                            I hope this comment provides a more balanced perspective.

                                                    2. 2

                                                      In the last 20 years, the ideas in that paper have been attempted a lot, by a lot of people.

                                                      Open source and the internet have given a ton of ideas a fair shake, including these ideas. Stuff is getting better (not worse). The two-way links thing is crummy, and you don’t have to take my word for it: you can go engage with any of the dozens of systems implementing it (including several by the author of that paper) and form your own opinions.

                                                      1. 4

                                                        In the last 20 years, the ideas in that paper have been attempted a lot, by a lot of people.

                                                        Dozens of people, and I’ve met or worked with approximately half of them. Post-web, the hypertext community is tiny. I can describe at length the problems preventing these implementations from becoming commercially successful, but none of them are that the underlying ideas are difficult or impractical.

                                                        The two-way links thing is crummy, and you don’t have to take my word for it: you can go engage with any of the dozens of systems implementing it (including several by the author of that paper) and form your own opinions.

                                                        I wrote some of those systems, while working under the author of that paper. That’s how I formed my opinions.

                                                        1. 1

                                                          That’s awesome. Maybe you can change my mind!

                                                          Directed graphs are more general than undirected graphs (you can implement two-way undirected graphs out of one-way arrows; you can’t go the other way around). Almost every level of the stack, from the tippy top of the application layer to the deepest depths of CPU caching and branch prediction, is implemented in terms of one-way arrows and abstractions. I find it difficult to believe that this is a mistake.


                                                          EDIT: I realized that ‘general’ in this case has a different meaning for a software developer than it does in mathematics, and here I was using the software developer’s perspective of “can be readily implemented using”. Mathematically, something is more general when it can be described with fewer terms or axioms. Undirected graphs are more mathematically general because you have to add arrowheads to an undirected graph to make a directed graph, but for the software developer it feels more obvious that you could get a “bidirected” graph by adding a backwards arrow to each forwards arrow. Implementing a directed graph from an undirected graph is difficult for a software developer because you have to figure out which way each arrow is supposed to go.
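
                                                          To make the software developer’s direction concrete, a small sketch (types invented for illustration): “bidirecting” a directed graph is mechanical, while orienting an undirected one requires a decision per edge.

                                                          // A directed graph as a plain list of one-way arrows.
                                                          type NodeId = u32;

                                                          struct Digraph {
                                                              edges: Vec<(NodeId, NodeId)>,
                                                          }

                                                          impl Digraph {
                                                              // Two-way links out of one-way arrows: add a backwards
                                                              // arrow for every forwards arrow. Purely mechanical.
                                                              fn bidirected(&self) -> Digraph {
                                                                  let mut edges = self.edges.clone();
                                                                  edges.extend(self.edges.iter().map(|&(a, b)| (b, a)));
                                                                  Digraph { edges }
                                                              }
                                                          }

                                                          // The reverse (orienting an undirected graph) has no canonical
                                                          // answer: for each undirected edge {a, b} you must somehow
                                                          // decide whether the arrow goes a -> b or b -> a.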

                                                          1. 1

                                                            Bidirectional links are not undirected edges. The difference is not that direction is unknown – it’s that the edge is visible whichever side of the node you’re on.

                                                            (This is only hard on the web because HTML decided against linkbases in favor of embedded representations that must be mined by a third party in order to reverse them – which makes jump links a little bit easier to initially implement but screws over other forms of linking. The issue, essentially, is that with a naive host-centric way of performing jump links, no portion of the graph is actually known without mining.

                                                            Linkbases are literally the connection graph, and links are constructed from linkbases. In the XanaSpace/XanaduSpace model, you’ve got a bunch of arbitrary linkbases representing arbitrary subgraphs that are ‘resident’ – created by whoever and distributed however – and when a node intersects with one of the resident links, the connection is displayed and made navigable.

                                                            Also in this model a link might actually be a node in itself where it has multiple points on either side, or it might have zero end points on one side, but that’s a generalization & not necessarily interesting since it’s equivalent to all combinations of either end’s endsets.)

                                                            TL;DR: bidirectional links are not undirected links – merely links understood above the level of the contents of a single node.

                                                            1. 1

                                                              Ok then, how do you construct a graph out of a set of subgraphs? Is that construction also two-way links, thereby ensuring that every participant constructs the same graph?

                                                              1. 1

                                                                Participants are not guaranteed to construct the same graph, and the graphs aren’t guaranteed to even be fully connected. (The only difference between bidirectional links & jump links is that you can see both points.)

                                                                Instead, you get whatever collection of connected subgraphs are navigable from the linkbases you have resident (which are just lists of directed edges).

                                                                This particular kind of graph-theory analysis isn’t terribly meaningful for either the web or translit, since it’s the technical detail of how much work you have to do to get a link graph that differs, not the kind of graph itself. (Graph theory is useful for talking about ZigZag, but ZigZag is basically unrelated to translit / hypertext and is more like an everted tabular database.)

                                                                1. 1

                                                                  I guess I’m trying to understand how this is better or different from what already exists. If it’s a curated list of one way links that you can search and discuss freely with others, then guess what, lobste.rs is your dream, the future is now, time to throw one back and celebrate.

                                                                  1. 1

                                                                    I’m trying to understand how this is better or different from what already exists

                                                                    Well, when the project started, none of what we have existed. This was the first attempt.

                                                                    If it’s a curated list of one way links that you can search and discuss freely with others, then guess what, lobste.rs is your dream, the future is now,

                                                                    ‘Link’ doesn’t actually mean ‘URL’ in this sense. A link is an edge between two nodes – each of these nodes being a collection of positions within a document. So, a linkbase isn’t anything like a collection of URLs, but it’s a lot like a collection of pairs of URLs with an array of byte offsets & lengths affixed to each URL. (In fact, this is exactly what it is in the XanaSpace ODL model.) A URL by itself is only capable of creating a jump link, not a bidirectional link.

                                                                    It’s not a matter of commenting on a URL, but of creating sharable lists of connections between sections of already-existing content. That’s the point of linking: that you can indicate a connection between two existing things without coordinating with any authors or owners.

                                                                    URL-sharing sites like lobste.rs provide one quarter of that function: by coordinating with one site, you can share a URL to another site, but you don’t have control over either side beyond the level of an entire document (or, if you’re very lucky and the author put useful anchors, you can point to the beginning of a section on only the target side of the link).

                                                                    1. 1

                                                                      To take an example of a system which does step in the middle and take greater control over both ends: Google’s AMP. I feel like it is one of the worst things anyone has ever tried to do to the internet in its entire existence.

                                                                      Control-oriented systems like AMP, and to a lesser degree sharing sites like Imgur, Pinterest, Facebook, and soon (probably) Medium, represent existential threats to forums like lobste.rs.

                                                                      So, in short, you’re really not selling me on why this two way links thing is better.

                                                                      1. 2

                                                                        We actually don’t have centralization like that in the system. (We sort of did in XU88 and XU92 but that stopped in the mid-80s.)

                                                                        It’s not about controlling the ends. The edges are not part of the ends, and therefore the edges can be distributed and handled without permission from the ends.

                                                                        Links are not part of a document. Links are an association between sections of documents. Therefore, it doesn’t make any sense to embed them in a document (and then require a big organization like Google to extract them and sell them back to you). Instead, people create connections between existing things & share them.

                                                                        I’m having a hard time understanding what your understanding of bidirectional linking is, so let me get down to brass tacks & implementation details:

                                                                        A link is a pair of spanpointers. A spanpointer is a document address, a byte offset from the beginning of the document, and a span length. Anyone can make one of these between any two things so long as you have the addresses. This doesn’t require control of either endpoint. It doesn’t require any third party to control anything either. I can write a link on a piece of paper and give it to you, and you can make the same link on your own computer, without any bits being transferred between our machines.
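
                                                                        A rough transcription of that description into types, if it helps (the field names are mine, not the project’s):

                                                                        // A spanpointer: document address + byte offset + span length.
                                                                        struct SpanPointer {
                                                                            document: String, // a stable document address
                                                                            offset: u64,      // byte offset from the start of the document
                                                                            length: u64,      // span length in bytes
                                                                        }

                                                                        // A link is a pair of spanpointers; it lives outside both documents.
                                                                        struct Link {
                                                                            from: SpanPointer,
                                                                            to: SpanPointer,
                                                                        }

                                                                        // A linkbase is nothing more than a shareable list of such edges.
                                                                        type Linkbase = Vec<Link>;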

                                                                        We do not host the links. We do not host the endpoints. We don’t host anything. We let you see connections between documents.

                                                                        Seeing connections between documents manifests in two ways:

                                                                        1. transpointing windows – we draw a line between the sections that are linked together, and maybe color them the same color as the line
                                                                        2. bidirectional navigation – since you can see the link from either side, you can go left instead of going right

                                                                        It’s not about control, or centralization. Documents aren’t aware of their links.

                                                                        The only requirement for bidirectional linking is that an address points to the same document forever. (This is a solved problem: ignore hosts & use content addressing, like IPFS.)
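
                                                                        (For the unfamiliar: content addressing just means deriving the address from the bytes themselves, so an address can never silently start pointing at different content. A one-function sketch, assuming the sha2 and hex crates:)

                                                                        use sha2::{Digest, Sha256};

                                                                        // The document's address is a hash of its bytes: change the
                                                                        // bytes and you get a different address.
                                                                        fn address_of(document: &[u8]) -> String {
                                                                            hex::encode(Sha256::digest(document))
                                                                        }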

                                                                        1. 1

                                                                          Wow, thank you for taking the time to walk me through these ideas. I think I’m starting to understand a little better.

                                                                          I still think we’ve got this, or could implement it on the existing web stack. I think any user could have implemented zig-zag links in a hierarchical Windows-style file structure since ’98, if not ’95. I think it’s informative that most users do not construct those links; who knows how many of us have tried it in the name of getting organized.

                                                                          I really believe that any interface more complex than a single item is too complex, and if you absolutely must, you can usually present a list without distracting from a UI too badly. I think a minimalist and relatively focused UI is what allows this website to thrive and us to have this discussion.

                                                                          I’m going to be thinking over this a lot more. A system like git stores the differences between documents instead of the documents themselves, so clearly there are places for other ways of relating documents to each other than what we’ve got, which work!

                                                                          1. 3

                                                                            I should clarify: I’ve been describing bidirectional links in translit (aka hypertext or transliterature). ZigZag is actually a totally different (incompatible) system. The only similarity is that they’re both interactive methods of looking at associations between data invented by Ted Nelson.

                                                                            If we want to compare to existing stacks, transliterature is a kind of whole-document authoring and annotation thing like Word, while ZigZag is a personal database like Access – though in both cases the assumptions have been turned inside-out.

                                                                            You’re right that these things, once they’re understood, aren’t very difficult to implement. (I implemented open source versions of core data structures after leaving the project, specifically as demonstrations of this.)

                                                                            I really believe that any interface more complex than a single item is too complex, and if you absolutely must, you can usually present a list without distracting from a UI too badly. I think a minimalist and relatively focused UI is what allows this website to thrive and us to have this discussion.

                                                                            Depending on how you chunk, a site like this has a whole host of items. I see a lot of characters, for instance. I see multiple buttons, and multiple jump links. We’ve sort of gotten used to a particular way of working with the web, so its inherent complexity is forgotten.

                                                                            thank you for taking the time to walk me through these ideas. I think I’m starting to understand a little better.

                                                                            No problem! I feel like it’s my duty to explain Xanadu ideas because they’re explained so poorly elsewhere. I spent years trying to fully understand them from public documentation before I joined the project and got direct feedback, and I want to make it easier for other people to learn it than it was for me.

                                                      2. 1

                                                        I wouldn’t say so. What you have is more and more people using the same tools, therefore you will never get a “perfect” solution. Generally, nature doesn’t provide a perfect system, just one that is “good enough to survive”. My partner and I are expecting a child at the moment, and several times the doctor has told us: “This is not perfect, but nature doesn’t care about that. It just cares about good enough to get the job done.”

                                                        Since I heard this statement, I see it everywhere. Also with computers: our code and ways of working run a huge chunk of important systems, and somehow they work. Maybe they work because they are not perfect.

                                                        I agree that things will change (“for the better”), but it will come in phases. We will have a bigger catastrophic event, and afterwards systems and tools will change and adapt. As long as everything sort of works, well, there is no big reason to change it (for the majority of people), since they can get the job done and then enjoy the sun, beaches and human interactions.

                                                        1. 1

                                                          Nobody’s complaining that we don’t have perfection here. We’re complaining about the remarkable absence of not-awful in projects by people who should know better.

                                                      3. 3

                                                        I think the best way to describe what we have is “design by pop culture”. Our socio-economic system is a low-pass filter, distilling ideas until you can package them and sell them. Is it the best we’ve got given those economic constraints? Maybe…

                                                        But that’s like saying “Look, this is the best way to produce cotton, it’s the best we’ve got” during the slave era… (slavery being a different socio-economic system)

                                                      1. 11

                                                        When do those “You don’t need part x of the religion called Scrum” articles stop?

                                                        The author works with probably senior people on an already well-defined product with a clear vision. Great. The author also realises that blindly adding methodologies to your workflow doesn’t help at all.

                                                        I am waiting for people to write articles like “Form follows function”. If you want to create a great product but have no clue about your market and work with different sets of people, maybe find a way to align them. Do you need stand-ups for this? Maybe, maybe not. Who actually cares?

                                                        The one big problem with software products is: nobody cares if the product exists or not, and most of the engineers work there to pay their rent. Scrum does a great job of keeping things tight so nobody wanders off dreaming about what they actually wanted to do with their lives.

                                                        I did tons of projects in the past for different clients, and you quickly see what motivates a team and at which level of expertise they are operating. If you create your own product you deeply care about, with people who are intrinsically motivated to help you, then you will create your own way of doing things simply because you just care about this one thing.

                                                        Then the function is clear and form follows. Will the form have some parts of Scrum or other things? Maybe, maybe not. But all these articles just say: if you follow x (form) then you will become y (function). No happy person has ever done that. If you want something (y: function), you will find the right form to get there.

                                                        1. 4

                                                          I am not a web developer, but can someone with a clue help me understand how the author can mix server-side and client-side JavaScript frameworks while making his “FRAMEWORKS BAD!” point?

                                                          If he were to restrict himself to JavaScript I could see his point, but IMO tarring all web frameworks with the same brush feels less than useful.

                                                          For instance, frameworks like Django and Rails are superlative tools for building simple CRUD apps which require very little in the way of custom interface. Does anyone really dispute that?

                                                          1. 4

                                                            There’s a new trend now (which I don’t like): doing everything on the client side – except, in most cases (but not always, unfortunately), authorization.

                                                            So basically people write “micro services” which just serve raw data. You can see that as SQL/KV-store/<whatever database> over HTTP. Because they’re using micro-services “like Google explained in their last blog post”, you can scale the software better in terms of development effort and request load (“like Facebook said in their blog posts as well”). Even though the software will never be developed by more than 3 devs, and will never have more than 100 concurrent users.

                                                            Also, because the “micro services” still use MySQL in the background, you will still have a bottleneck on your SQL server (but that’s fine, because the “micro-services are stateless”, like Amazon recommended in their blog post).

                                                            The climax of this trend is GraphQL.

                                                            So basically, now, you have this data backend, but you need to create hyper-linkable routes, with rendered HTML for each route, forms to insert/update data, and so on… (what Django and Rails would do: CRUD). So people use a full client-side JavaScript framework to do so.

                                                            The real advantages are:

                                                            • this is cool and hipster. (Investors love it. You just have to put some machine learning on top of it)
                                                            • You have “separation of concerns” (whatever that means, because you end up with spaghetti because of the third point)
                                                            • You can hire cheap “frontend engineers” (5 years ago they were called designers) or juniors right out of school (usually they learned basic JavaScript at school)
                                                            • You can write brittle tests in Selenium (or not write any tests at all, and have a “QA person” click through the app every time you deploy)

                                                            PROFIT!

                                                            1. 3

                                                              While there are definitely some former designers working in JS on the client side, it’s far from the majority. And if you want a client-side application done well, you can’t just hire cheap “frontend engineers”, or you get a mess like you used to have. You should still unit and integration test aside from e2e tests on the client, regardless of what the server side is.

                                                              1. 1

                                                                We can’t really argue about this, because there’s no data to back up any of our points. I don’t claim that all “frontend engineers” were designers. I know that my original comment was implying it, but I was making the generalization as part of the joke/satire tone of my original comment.

                                                                The bottom line is: you’re right, there are many frontend engineers who have a real engineering background (many people working on react.js, or on Facebook’s frontend, the designers of Elm, …). However, I disagree when you claim they’re the vast majority.

                                                                I don’t have any numbers to back up my claim, and it’s true that if you just read comments and pull requests on GitHub’s react.js project, you might think “most frontend engineers are engineers.” However, I haven’t seen that in my experience working with many companies: JavaScript is considered a toy language, very junior engineers (with no experience) are hired, or former designers who could figure out jQuery now write full single-page applications (in React or Angular).

                                                                The general result is buggy, with no tests and all the possible bad practices, working just okay. And when “the guy” (or “the gal”, doesn’t matter) leaves, the next “frontend engineer” just rewrites it (and the next, and the next, …).

                                                                1. 2

                                                                  I’m not sure about your experience, but I’ve worked at a lot of companies and I’ve only heard a few elitist types of individuals call JS a toy language or assume just anyone can do it. I’ve heard the same on the server side when it comes to PHP, from the same types.

                                                                  I think you’re being unnecessarily harsh on client-side work in general, when server-side work is often just as buggy, untested, and just enough to “work” now and again to keep the team going with the worst possible practices. It’s not a client vs. server thing; it’s just a fact of working under the “engineering” umbrella.

                                                                  That said, I’ve worked with a ton of people who never went to school for CS or anything even related (myself included) who don’t even consider themselves engineers. My title says it, but that only changed within the last 5 years or so. It used to say “developer.” It’s a marketing thing for companies to say devs are “engineers” because people attach some significance to that term, but not to “developers.”

                                                              2. 1

                                                                You couldn’t be more right on this one. I am working as a freelance developer, and the last 3 out of 5 projects were exactly that: throwing buzzwords around, gluing stuff together. Using micro services but still having so many dependencies between each module that it’s basically a harder-to-deploy monolith.

                                                                1. 1

                                                                  I’m currently building one of those websites that does everything in the frontend. There are a bunch of real advantages. My primary reason was that I want to write a very efficient server app for when I get more usage, but for now it’s best to continue using Rails, because that’s what I know best. So I want the backend to be fairly easy to replace later, and not having any app logic in it makes that a whole lot easier. The backend is mainly just an API over a database which validates the things the client sends.

                                                                  Another advantage is you get a rock-solid, complete API for other people to use with no extra effort. Most online services have websites that do things that simply cannot be done via the API. When making an app that just exchanges JSON with the server, you end up with an API that can do everything the website can. Someone else could make an alternative website using my backend and there would be no issues.

                                                                  Also, after the first page load it actually requires less data sent over the network, because the whole website logic is cached and only a tiny bit of JSON is sent on page loads.

                                                                  1. 2

                                                                    Of course there are many advantages. I don’t deny them: no reload is cool, having a universal API is nice, heavy caching of the static assets is awesome, …

                                                                    But there are many disadvantages:

                                                                    • Was SEO finally fixed? When I was still on the topic, people used to serve an alternative page to Google to get indexed. I heard that Google now runs a full JavaScript engine in their crawlers. Is that fixed now? I haven’t checked further. You might say “We don’t care about SEO”, but as a user of your website, I want to be able to use Google/Bing/DuckDuckGo instead of your badly performing internal search engine when I’m looking for content.
                                                                    • There are still disparities between the JavaScript dialects of each browser, last time I heard. If you do a server-side rendered website, you just have to care about HTML/CSS; now it’s one more language to support.
                                                                    • You will most definitely break on legacy browsers (IE6, which is still used in China) or text-based browsers (Lynx, w3m, …).
                                                                    • Your single-page application (SPA) might not have all the correct ARIA attributes for disabled people and screen readers: if you just do links and buttons with labels, most screen readers will get it right.
                                                                    • You have to handle things that the browser does for you. For example, if you click on a link and the network goes down, the browser displays a “connectivity issues” page. With an SPA, you have to handle failures and display the message yourself when your XHR request fails.

                                                                    That’s just the technical side. What I’ve seen is the mess it brings on the human level. Of course, in your case, if you’re alone or with compatible developers, it will be smooth. But I’ve seen teams split by management or by the employees themselves.

                                                                    Either management says “we need to share the work load, let’s have a ‘frontend team’ and a ‘backend team’.” Or, in some other cases, the employees have the mindset “I’m a frontend dev, I don’t do Python”, “I do backend, not JavaScript”, … In both cases, you get huge friction: “what do you send when I request that?”, “you told me you added this feature but it doesn’t work on staging!”, …

                                                                    1. 1

                                                                      Thanks for the comments. Those are some pretty good points. I’m currently working on this myself and just working things out.

                                                                      Was SEO finally fixed? When I was still on the topic, people used to serve an alternative page to Google to get indexed. I heard that Google now runs a full JavaScript engine in their crawlers. Is that fixed now? I haven’t checked further. You might say “We don’t care about SEO”, but as a user of your website, I want to be able to use Google/Bing/DuckDuckGo instead of your badly performing internal search engine when I’m looking for content.

                                                                      I’m not really sure, but my website doesn’t really have much searchable stuff anyway. It’s mainly based around maps, where 90% of the data you look at is uploaded by you.

                                                                      You will most definitely break on legacy browsers (IE6, which is still used in China) and text-based browsers (Lynx, w3m, …).

                                                                      I’m not really concerned about this, as my mapping library doesn’t work on those either; also, my website is probably automatically banned in China for not being hosted there.

                                                                      Your single-page application (SPA) might not have all the correct ARIA attributes for disabled people and screen readers; if you just use plain links and labelled buttons, most screen readers will get it right.

                                                                      The framework I am using makes it very easy to control the HTML that gets generated, so I have had no issues making accessible pages.

                                                                      Overall it’s worse in some ways and better in others. I’m not sure if it’s the best way to do things, but it has been a great learning experience.

                                                                      1. 2

                                                                        If you’re doing something with maps, heavily interactive (like Google Maps), with very little text, then I think you don’t have any other choice: doing a single-page application is fine (and maybe best).

                                                                        In most cases, though, I’ve seen people write CRUD single-page applications (like an ERP). I consider that nonsense.

                                                              1. 2

                                                                I would love to use my smartphone less. I laze around on it in the morning and at night. I wish there were something that automatically disabled my phone when I’m in bed.

                                                                All that said, the smartness of a smartphone is invaluable for me. I use it to look up directions, change plans on the fly, learn about artists when I see artwork, and read in the subway. I need a humane smartphone, instead of one filled with apps that aim to colonize my mind.

                                                                I sometimes save time by eating out. Quick meals at home are cheaper and less time-consuming than going somewhere, ordering, waiting for the food, eating, and coming back. But anything more involved than oatmeal is hard for someone living alone. It takes a lot of time to organize recipes, get the ingredients, and cook. You have to choose between eating lots of leftovers or spending even more time per meal. From a time point of view, it can be sensible to eat out if you can.

                                                                1. 3

                                                                  Regarding smartphone usage, I agree: I can’t see myself switching away from a device that has substantial daily utility. For starters, you can play with blocking websites. I felt like my productivity tanked when switching from Android to iOS, yet only recently realized you can get the same effect as editing the hosts file via Settings → Restrictions → Websites → Limit Adult Content, then adding some “never allow” sites (see the sketch just below).
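
                                                                  For comparison, the hosts-file approach mentioned above looks roughly like this on a desktop or rooted Android device; the domains here are just examples:

                                                                      # /etc/hosts - point distracting domains at localhost so they never
                                                                      # load; the iOS "never allow" list achieves a similar effect.
                                                                      127.0.0.1  twitter.com
                                                                      127.0.0.1  www.twitter.com
                                                                      127.0.0.1  news.ycombinator.com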

                                                                  Regarding cooking, I feel like that’s a mindset shift. I went from wanting food to be automated to having a process of cooking some routine dishes (tacos, smoothies, pizzas from scratch, rice/quinoa stir-fry) in a way that I pretty much know what ingredients to keep around. Time-wise, things can be mixed too - stretching while the eggs are cooking, doing a bodyweight workout while the oven is heating up or the sauce is getting reduced. I live alone and travel pretty regularly; I just try to plan ahead and, in the rare case, give veggies/fruit I can’t use to a neighbor.

                                                                  1. 1

                                                                    Thanks! I didn’t know you could do hosts blocking on iOS. I hadn’t thought about mixing cooking with exercise like that.

                                                                    I started making avocado toast as a quick foray into cooking. Then I learned a mouse has been eating my bread. So now I’m blocked on a mousetrap getting shipped over. What a life, man.

                                                                    1. 1

                                                                      Cool to hear! FWIW, I find myself keeping sliced bread in the freezer then toasting it since I eat a loaf fairly slowly (over the course of a few weeks).

                                                                  2. 2

                                                                    My big reasons for not going back to a feature phone are maps and particularly in-car maps (I use Android Auto, and don’t need a separate sat nav as a result). But other than that it just makes me waste time on places like twitter and lobsters. :)

                                                                    1. 3

                                                                      I’m the same. By way of a halfway approach, I use a OnePlus 5T with LineageOS for microG. There are cheaper, compatible options out there, but I plan to keep this phone for a good few years.

                                                                      Using LineageOS for microG means I’m not signed into Google services, but I still have access to maps and Signal if I want them. The F-Droid app store is a lot lighter than Google Play. I haven’t tried Android Auto, though. It might be worth a look, although it might not fully meet every use case.

                                                                      1. 2

                                                                        My hopes are on the Light Phone 2 now: https://www.indiegogo.com/projects/light-phone-2-design I pre-ordered one, and basic messaging + navigation would be perfect.

                                                                    1. 5

                                                                      Adding story text that summarizes/previews the story is frowned upon. It’s added for some things, like titles that don’t clearly indicate their content, which tends to happen with academic papers and some trade publications in PDF format.

                                                                      In this case, the story text could have been something like “I reduced my technology use and ended up feeling better.” The current story text (a list of things with a sentence afterwards that explains the list) is not really helpful. I recommend avoiding such story text in future submissions.

                                                                      1. 8

                                                                        Sorry, that was my first story submission. Thanks for the editing and help!

                                                                        1. 5

                                                                          No worries. And welcome to Lobste.rs!

                                                                        2. 4

                                                                          I removed it.

                                                                        1. 4

                                                                          I think the title is misleading. He just talks about Bitcoin and proof-of-work, but calls it “Risks of Cryptocurrencies”. If the title were “The Risks of Bitcoin”, fair enough. The only purpose of Bitcoin now is to bring money from investors into the gamble, and then distribute that money onto other projects until there is one that figures out how to make blockchain-based currencies work.

                                                                          I disagree that we should just give up on the idea. Imo, cryptocurrencies would let anybody take part in the system of “capitalism” and therefore could improve it much faster and more easily. Right now, to invest in a company, I need to go through a third party. Other investments are almost not doable for a working-class person.

                                                                          In my perfect world, every dollar would be on the blockchain; we would then have much better tools to diagnose and monitor it, and to make visible what’s going on. And then, slowly, we could distribute wealth to products and people who are doing more good than harm.

                                                                          For this to work, of course, we need to improve cryptocurrencies (proof-of-*, etc.) A LOT. But all I read is people complaining about the amount of money people put into Bitcoin, while nobody sees how much money travels through the stock exchanges every day.

                                                                          So yes, get the rich investors hyped, put their money onto the blockchain, use it to create better systems, and then monitor money flows and direct money to better products and ideas.

                                                                          1. 3

                                                                            The only purpose of Bitcoin now is to bring money from investors into the gamble

                                                                            … and also to, you know, pay for products and services.

                                                                            1. 3

                                                                              cryptocurrencies would let anybody take part in the system of “capitalism” and therefore could improve it much faster and more easily

                                                                              So far, that “anybody” has been scammers, ransomware authors and other criminals. I’m not sure how “investing” in literal Ponzi schemes improves anything.

                                                                              to invest in a company, I need to go through a third party

                                                                              What’s wrong with that? The third party is legally responsible for the stuff. You can’t sue a trustless p2p ledger for accidentally burning your money.