1. 1

    I was content using Alacritty for a while, but then I found it doesn’t work on remote X sessions and in my VMs, due to what I assume is a GPU requirement. That’s too bad, since I otherwise found it decent to use.

    1. 7

      The article decries libraries as software bloat, but in 1993 the declaration was the opposite:

      Poor management of software development is another important contributor of flab. Good management would prevent programmers from spending countless hours reinventing wheels, all of which end up on your hard disk.

      Also, OO and reusable objects will save us from oversized software.

      Perhaps the most promising development is the coming of object-oriented operating systems. The upcoming Taligent operating system from Apple and IBM, along with Cairo from Microsoft, promises to save unnecessary code by providing the user with a series of objects that can be purchased, enhanced, and linked together as necessary.

      Byte magazine, April 1993.

      1. 13

        About 25 years ago, an interactive text editor could be designed with as little as 8,000 bytes of storage. (Modern program editors request 100 times that much!) An operating system had to manage with 8,000 bytes, and a compiler had to fit into 32 Kbytes, whereas their modern descendants require megabytes. Has all this inflated software become any faster? On the contrary. Were it not for a thousand times faster hardware, modern software would be utterly unusable.

        Alright; I’d like to see people using an editor fitting in 8k of storage compiled with a language fitting in 32k[1] as a daily driver.

        Okay, perhaps it’s a bit lame to focus on those numbers specifically; but my point is that programs have also become immensely more useful. Editors – even editors like Vim – today do a lot more than 40, 30, 20, or even 10 years ago. And how long did it take that guy to build that TODO app with his 13k dependencies? How long would it have taken 30 years ago to build a similar program?

        And look, I’m not a huge fan of large dependency trees either; but I think this article is a gross oversimplification.


        [1]: tcc is 275k on my system.

        1. 7

          Over time you should expect things to become more powerful and easier to use! Look at microcontrollers for instance. They have become smaller, faster and easier to use over time. The argument is that software seems to be getting slower at a more rapid rate than it is gaining functionality.

          Hardware has increased in speed and gained fancy new features at the same time. Why is it that modern websites are sluggish? Why can’t we have our cake and eat it too? More complicated software that is also faster (given that the hardware is getting faster – this should be free).

          I don’t think software being more powerful or complicated is at all a fair argument for why it is slower.

          1. 6

            I don’t think software being more powerful or complicated is at all a fair argument for why it is slower.

            I think it’s entirely fair. That doesn’t excuse sluggishness in modern software for some tasks (like, say, typing lag), but it does explain some of it. If you’re doing a lot more work you should expect it to use more resources. There is also user expectation: those with more computing resources tend to want their software to do more.

            I don’t want my software to be sluggish either (it drives me up the wall when it’s slow for basic tasks, which is why I cannot abide web apps for the most part), but if you’re going to compare old software that did very little with new software that does a lot as the post does, then discounting the feature sets is not at all a fair comparison.

            1. 8

              I would 100% agree with you if the medium on which software is run hadn’t increased in speed by orders of magnitude over the years. Has software increased by orders of magnitude in terms of power and complexity? Maybe, but then it should be the same speed as software was a few decades ago.

              The fact is that software has gotten more complex and more powerful, but not nearly to the extent that hardware has gotten faster. There is certainly some reason for this (although I disagree with the article on what that is).

              1. 4

                Certainly it seems like it should be a lot faster, and maybe it’s slow, but there’s no question that I’m more productive on systems now than I was in the mid 90s.

                1. 3

                  Recently, I was on a Windows machine and was obligated to write a text file. It had Notepad++, which I remembered as a “good editor,” so I used it. On Linux/Mac I’ve gotten used to a well-tuned neovim running in a GPU accelerated console… suffice to say that Notepad++ is no longer in its previous category.

            2. 5

              Most webpages aren’t that slow; most of the “big” websites generally work quite well in my experience. There are exceptions, of course, but we tend to remember negative experiences (such as websites being slow) much more strongly than positive ones (websites working well).

              A lot of the slowness is from the network. When you’re sitting in front of a webpage waiting for it to load stuff then most of the time the problem is that the HTML loads foo.js which triggers a XHR for overview.json and the response callback for that finally triggers data_used_to_render_the_website.json. Add to that the 19 tracker and advertisement scripts, and well… here we are.

              There’s a reason it works like that too: people want mobile apps (for different platforms!) and whatnot these days too, and it turns out it’s actually quite tricky to serve both a decent website and a decent mobile app. It’s probably underestimated how much mobile has complicated web dev.

              Note that some things are ridiculously slow. I have no idea how Slack manages to introduce 200ms to over a second of input lag on their chat; it’s like using a slow/inconsistent ssh connection. It’s been consistent crap ever since I first used it 5 years ago and I don’t quite understand how people can use Slack daily without wanting to chuck their computers out the window. But slow and crappy software is not a new thing, and Slack seems the exception (for every time I visit Slack, I’ve also visited dozens of sites that work well).

              At any rate, it’s much more complex than “programmers stopped thinking about the quality of their programs”.

              1. 5

                Most webpages aren’t that slow; most of the “big” websites generally work quite well in my experience. There are exceptions, of course, but we tend to remember negative experiences (such as websites being slow) much more strongly than positive ones (websites working well).

                We certainly use a different subset of the modern web! I find even GMail is sluggish these days, and often switch to the HTML-only mode. Jira (and basically all Atlassian projects) are what I would call “big” websites and wow are they slow.

                A lot of the slowness is from the network. When you’re sitting in front of a webpage waiting for it to load stuff then most of the time the problem is that the HTML loads foo.js which triggers a XHR for overview.json and the response callback for that finally triggers data_used_to_render_the_website.json. Add to that the 19 tracker and advertisement scripts, and well… here we are.

                Eh, I don’t fully buy this. Is it the network’s fault that every website comes bundled with 30 JS modules that need to load and then call out for more crap? I mean sure, with no-js this doesn’t become as much of an issue – and I don’t actually understand how someone can use the modern web without it – but I wouldn’t blame the network for these problems.

                There’s a reason it works like that too: people want mobile apps (for different platforms!) and whatnot these days too, and it turns out it’s actually quite tricky to serve both a decent website and a decent mobile app. It’s probably underestimated how much mobile has complicated web dev.

                Modern webdev is unbelievably complicated. I’ve been working on a project recently that dives into the depths of linker details, and it is nothing compared to how complicated setting up something like webpack is. But I would also argue that this complexity is superficial. Things like Svelte and Solid come to mind for what I think the modern web should look more like.

                Note that some things are ridiculously slow. I have no idea how Slack manages to introduce 200ms to over a second of input lag on their chat; it’s like using a slow/inconsistent ssh connection. It’s been consistent crap ever since I first used it 5 years ago and I don’t quite understand how people can use Slack daily without wanting to chuck their computers out the window. But slow and crappy software is not a new thing, and Slack seems the exception (for every time I visit Slack, I’ve also visited dozens of sites that work well).

                I’m right there with you! It’s really unfortunate that no matter what company I go to and how good their engineering fundamentals are, the tools used are Jira, Slack and every other slow website.

                At any rate, it’s much more complex than “programmers stopped thinking about the quality of their programs”.

                I completely agree with you! Unfortunately, I’ve seen quality take a back seat to “just get something to work!” far too many times, so I do think it is a part of the problem.

                1. 5

                  I’m not a huge fan of modern web dev either; in my own app I just use <script src=..> and for the most part ignore much of the ecosystem and other recent(-ish) developments. /r/webdev called me “like the anti-vaxx of web dev” for this, as I’m “not listening to the experts, just like the anti-vaxx people” 🤷‍♂️😂

                  But at the same time the end-result is … kind of okay, performance-wise anyway. Most of my gripes tend to be UX issues.

                  Is it the network’s fault that every website comes bundled with 30 JS modules that need to load and then call out for more crap? I mean sure, with no-js this doesn’t become as much of an issue – and I don’t actually understand how someone can use the modern web without it – but I wouldn’t blame the network for these problems.

                  That’s kind of an unrelated issue; a lot of these SPA websites are built against a JSON API, so it needs to call that to get the data and it just takes time, especially if it’s a generic API rather than an API specifically designed for the app (meaning it will take 2 or more requests to get the data). Good examples of this are the Stripe or SendGrid interfaces, which feel incredibly slow not so much because they’ve got funky JS, but because you’re waiting on those API requests.

                  1. 4

                    I’m not a huge fan of modern web dev either; in my own app I just use <script src=..> and for the most part ignore much of the ecosystem and other recent(-ish) developments. /r/webdev called me “like the anti-vaxx of web dev” for this, as I’m “not listening to the experts, just like the anti-vaxx people” 🤷‍♂️😂

                    This is hilarious! I’m with you though.

                    That’s kind of an unrelated issue; a lot of these SPA websites are built against a JSON API, so it needs to call that to get the data and it just takes time, especially if it’s a generic API rather than an API specifically designed for the app (meaning it will take 2 or more requests to get the data). Good examples of this are the Stripe or SendGrid interfaces, which feel incredibly slow not so much because they’ve got funky JS, but because you’re waiting on those API requests.

                    That’s a good point. It might not necessarily be slow frontend, but it is still slow engineering. At one of the previous places I worked, I fixed up an endpoint which was just spewing data, and upon talking to the frontend engineers they were using maybe 10% of it. Made a pretty significant speed improvement by just not sending a ludicrous amount of unused data!

                    1. 2

                      Yeah, it’s inefficient engineering, but consider the requirements: you need to make some sort of interface which works in a web browser, an Android app, an iOS app, and you frequently want a customer-consumable API as well.

                      This is not easy to do; if you build a static “classic” template-driven web app it’s hard to add mobile support, so you have to build an API alongside the web app for the mobile apps to consume, which is duplicate effort. You can trim the API to just what you need for this specific page, but then other API users who do need that data no longer have it.

                      For a comparatively simple site like Lobsters it’s fine to not do that since the desktop UI works reasonably well on mobile too, but as soon as things start getting more involved you really need a different UI for mobile, as it’s just a different kind of platform.

                      It’s a difficult problem to solve, and it was much easier 20 years ago because all computers were of the same type (a monitor with a keyboard and a mouse). People are still kinda figuring out how best to do this, and in the meanwhile we have to suffer Stripe’s UI making 15 API requests to load the dashboard.

                      GraphQL is intended to solve this problem, by the way, but that has its own set of problems.

                      1. 1

                        Oh, it’s certainly not easy, but I also don’t think it is particularly difficult. The reality is that there need to be separate APIs for each of the use-cases (app, webpage and customer-consumable), since they all have different upgrade cycles and usage-types. One of the problems I see often is everyone wanting there to be one API for everyone, and that will never work efficiently.

                        Netflix has a nice blog post [1] about how they handle the number of APIs they have (Netflix has gaming consoles, smart TVs and a whole host of other platforms that their API supports and it isn’t one “mega” API for all of them). They essentially have a Proxy API on the server-side which bundles all of the microservice APIs into whatever API calls the various frontends need. That way backend engineers can keep publishing APIs as they see fit for their microservices, and frontend engineers can group together what they need into an efficient API for their platform. And note, I’m using “frontend” loosely since there are so many different platforms they support.

                        Of course whether this effort is necessary for a small shop is unclear but for a bigger place (like Stripe or SendGrid) it is frankly poor engineering to not be fast.

                        I was very excited about GraphQL for a little while, but you’re right, it does come with its own set of problems. It’s yet to be seen whether it is actually worthwhile.

                        [1] https://netflixtechblog.com/embracing-the-differences-inside-the-netflix-api-redesign-15fd8b3dc49d

                        1. 3

                          Of course whether this effort is necessary for a small shop is unclear but for a bigger place (like Stripe or SendGrid) it is frankly poor engineering to not be fast.

                          Yeah, I don’t know. Netflix is perhaps a bit of an outlier; they also managed to make their extensive microservice platform work very well for them, whereas many other organisations’ experiences have been markedly less positive.

                          Perhaps a big part of the problem is that there’s no “obvious” way to do any of this. That Netflix approach looks great, but it’s also a very non-trivial amount of bespoke engineering effort with quite a lot of operational overhead. It doesn’t look like something you can pick off the shelf and have it “just work” for you, so companies tend to focus their engineers’ efforts elsewhere, as they feel that gives them better ROI. That sounds kinda lazy, but these kinds of projects can also fail, or worse, get implemented and then it turns out they don’t work all that well and then you’re stuck with them and they become a liability.

                          The issue with GraphQL is that it’s really hard to implement well on your backend. Basically, it allows your API consumers to construct arbitrary queries against your database (okay, you can lock this down, but that’s how it’s supposed to be used). It’s certainly possible to facilitate this, but it’s not an easy problem to solve. Perhaps radical improvements in the network stack (e.g. with QUIC and others) will alleviate much of this (because all of that is also an inefficient organically grown mess).

                          By the way, a few weeks ago on HN one of the Stripe devs said they’re working on fixing the performance of the dashboard after I complained about it, so at least they’ve acknowledged it’s an issue and are actively working on a fix. Perhaps things will get better :-) Actually, I already have the impression it’s better compared to half a year ago.

                          Also, I’d like websites from “small shops” to be fast as well. All of this shouldn’t require advanced engineering you can only do if you can afford an ops team and 16 devs.

                          My own approach is to just use server-rendered templates with about 700 lines of jQuery sprinkled on top :-) It sounds very old-fashioned, but it actually gives very good results, IMHO. Then again, I’m not Stripe or SendGrid and am operating on a rather different scale in almost every way.


                          Also, related: last night (after my previous comment) I happened to be talking to a friend about the new reddit design, and I figured I’d try it and see if it’s any better (I normally use old.reddit.com). I couldn’t believe how slow it is. Like, I literally can’t scroll: it just jumps all the time and Chrome uses both CPU cores to the max. Granted, I have a pretty slow laptop, but what the hell? What a train wreck. My laptop can play movies, play games, compile code, and do all sorts of things, but it struggles with … a text website.

                          This is the kind of stuff you’re talking about. Yeah, some sites really are inexcusably bad.

                        2.  

                          if you build a static “classic” template-driven web app it’s hard to add mobile support

                          It was like, a handful of lines of CSS and HTML for my static site to be supported on mobile with no frameworks. And often I go on a website and the sheer number of frameworks they use gets in the way of me using their site on mobile. This UK government coronavirus website was apparently optimized for mobile, but none of the buttons worked, even when I zoomed in on them. This BBC News website was impossible to navigate because it kept freezing up on my recent phone.

                          Whereas web forms from 20 years ago are perfectly fine to use: they might require some zooming, but nowhere near as much effort to navigate, and at least they work.

                          Things will work well enough if you let go of the frameworks. Let the browser do its job!

                2. 1

                  Hardware has increased in speed and gained fancy new features at the same time.

                  Yet hardware also has loads of legacy cruft and compatibility layers making things slower than they would need to be. But that’s inevitable with the incremental nature of development, and I think incremental development is the only way to gain experience. Then every now and then a new technology comes around and you get a chance to start from a somewhat empty slate and apply what you’ve learned. I have high hopes for RISC-V, for example.

                  And I think there are similar developments in software. Better tools and languages allow for writing better and faster software. A good example is Rust, where you regularly see blog posts about Rust reimplementations of the supposedly pristine C implementations of ancient Unix tools outperforming the C versions, because the compiler can optimize more aggressively thanks to stronger guarantees from the language, and you can use more high-level constructs. Similarly, I think that WebAssembly will improve the web performance story quite a bit.

                3. 5

                  Having used some old editors that come with some C compilers prior to 1985 and running them on emulators to try and do real work on those systems, I can attest to them being woefully underpowered compared to modern editors.

                  1. 2

                    I still have fond memories of the Lattice C (SAS/C) editor circa 1987. Maybe I’m a masochist.

                    That being said, I hate editors with a lot of features. I end up never using 90% of the features so all those features do for me is add complexity, slowness, bugs, and unexpected modalities. I suppose that explains why my favorite editors are ED (from AmigaDOS/TRIPOS), the original vi (yes I know modes), and sam…

                    1. 1

                      I used to be similar, but after neovim added async plug-ins (so the features don’t affect the performance of the main interface), I started to build a much more extensive collection of them. Language servers are a fantastic innovation, allowing the good parts of working in an IDE without loading up some monstrosity.

                  2.  

                    Tangentially,

                    I spent a few years using ex-vi (I’ve also used ed(1) and vim’s vi(1) in the past), C, POSIX, and surf(1), along with a minimal desktop environment (roughly, Openbox with custom shortcuts, and XMonad), and it was pretty fun. It’s a nice feeling to know that my programs will work in a decade with minimal changes. Now I’ve moved to Python and Doom Emacs; on the one hand, nothing has changed much. Some things are maybe easier, maybe not. On the other hand, it’s given me a respect for the things that are easier.

                    One thing I will note is that the lag in using these new ‘high powered tools’ is much, much greater, despite the fact that Doom Emacs, for example, goes out of its way to keep latency from bothering the user while actually editing. Loading up Chromium takes an age when you’re used to surf popping up in under a second. Waiting for lightdm to start, log in, and then ‘start’ GDM is excruciating when you’re used to being dropped to a terminal after boot and loading Xorg in under a second.

                    There isn’t that much of an advantage to all of these bells and whistles; most of the complex stuff averages out, because grep mostly gives results in the same time an IDE takes to show its dialog. Everything you can do now you could achieve with shell scripts, with the same convenience and roughly the same day-to-day timing.

                    1. 1

                      You can do a modern programming language with a text editor from 1994: Russ Cox co-authored Go with Acme.

                      1. 2

                        Russ made a video intro on it: https://research.swtch.com/acme

                        1. 1

                          You leave me no choice, I must try it again now! https://9fans.github.io/plan9port/ or http://drawterm.9front.org/ + http://9front.org/

                      2. 1

                        Vim is 37MB. Can someone please explain to me why a text editor like vim needs such a huge size?

                        1. 5

                          Where do you get this number? The vim executable on my work Ubuntu desktop is 2.6 MB, another <megabyte of shared objects, and 4.5 MB for libpython3.6m which is optional. A download of Vim for Windows is 3.6 megabytes, so about the same. Did you miss a decimal point?

                          1. 2

                            Not to put words into @Bowero’s mouth, but the latest Vim sources are almost 60MB, so maybe they were referring to the source, or source plus build-time resources or translations or something?

                          2. 5

                            37M seems wrong, it’s nowhere near that on my system:

                            -rwxr-xr-x 1 root root 2.6M Jun 12 18:05 /usr/local/bin/vim*
                            

                            That 37M probably includes runtime files? Which aren’t really needed to run Vim but are just useful:

                            88K     ./plugin
                            92K     ./colors
                            128K    ./tools
                            136K    ./print
                            140K    ./macros
                            224K    ./pack
                            276K    ./compiler
                            892K    ./indent
                            1.1M    ./ftplugin
                            2.1M    ./autoload
                            2.5M    ./tutor
                            3.6M    ./spell
                            6.7M    ./syntax
                            8.2M    ./doc
                            27M     .
                            

                            Does Vim need support for 620 different filetypes, detailed reference documentation, a “tutor” in 72 different languages, or some helpful plugins shipped by default? I guess not; but it doesn’t get loaded if you don’t want to use it. It’s just useful.

                          3. 1

                            I used an editor under MS-DOS that was 3K in size, and could handle text files up to 64k (if I recall—it’s been 30 years since I last used it, and I only know the size because I typed in the source code from a magazine). It was a full screen editor.

                            Turbo Pascal pre-4 was I think 50k, and that included the editor, so that would fit your criteria. Do these editors give the functionality found in modern editors or IDEs? No. But they are usable in a sense [1].

                            [1] In the “Unix is my IDE” sense, using the command line tools to search and process text.

                            1. 6

                              But would you still choose to use those tools today? I mean, Unix was built with teletypes and printers, so you can undoubtedly build very useful things with limited tools, but why use a limited tool when you’ve got enough computing power to use a more powerful one?

                              1. 1

                                I might, especially if I found myself back on MS-DOS. I’ve tried various IDEs over the years (starting with Turbo Pascal 3) and I never found one that I liked. Back in the 80s, the notion that I would be forced to learn a new editor (when I had one I knew already) is what turned me off. Since the 90s, I’ve yet to find one that wouldn’t crash on me. The last time (just a few years ago) I tried loading (never mind editing or compiling) a single C file (57,892 bytes), and it crashed. Hell, the MS-DOS editor I preferred (40K executable, not the 3K one I mentioned above), written in 1982, could handle that file. You can’t use what doesn’t run.

                            2. 1

                              A few minutes?

                              #!/bin/sh -e
                              
                              mkdir -p "$HOME/.config"
                              
                              case "$1" in
                              ('')
                                      exec sed '=; s/^/  / p; s/.*//' "$HOME/.config/todo"
                                      ;;
                              (add)
                                      shift
                                      exec echo "$*" >>$HOME/.config/todo
                                      ;;
                              (del)
                                      tmp=$(mktemp)
                                      sed "$2 d" "$HOME/.config/todo" >$tmp
                                      exec mv "$tmp" "$HOME/.config/todo"
                                      ;;
                              (edit)
                                      exec $EDITOR "$HOME/.config/todo"
                                      ;;
                              (*)
                                      echo >&2 "usage: todo add text of task to add"
                                      echo >&2 "       todo del num"
                                      echo >&2 "       todo edit"
                                      ;;
                              esac
                              
                              1. 4

                                Right-o, now ask your mother, spouse, brother, or other non-technical person to use that. It’s equivalent (probably even better) for people like us, but it’s not really equivalent to a user-friendly GUI program.

                                1. 1

                                  I honestly think it could have been done likewise in 1985 (35 years ago), back when people unable to use a shell were entirely unable to access a computer’s files and features anyway.

                                  If it really got extremely popular, it might have been adopted by 30 users, author included.

                                  In 1995, whoever found it on the internet and was willing to give it a try would download it, spawn COMMAND.COM (Windows 95), try to run it, see it fail, open it, wonder “what the fuck?”, and close it.

                                  I guess a todo.apk would have more luck today, for roughly the same amount of time spent in Android Studio.

                                  1. 2

                                    I honestly think it could have been done likewise in 1985 (35 years ago), back when people unable to use a shell were entirely unable to access a computer’s files and features anyway.

                                    Yeah, probably. But, for better or worse, the “average user” is quite different now than it was 35 years ago. Actually, this is probably one of the reasons things are so much more complex now: because while I’m perfectly happy with a script based on $EDITOR ~/.config/todo, this clearly won’t work for a lot of people (and that’s fine, not everyone needs to be deeply knowledgeable about computers).

                                    1.  

                                      Agreed! A lot can happen in 35 years: software shaping society at a high pace, and society shaping software likewise.

                                    2. 1

                                      Which means that the challenge is becoming increasingly harder due to many factors (more people, who are less skilled and more distributed, expecting more done faster by computers of more diverse types).

                                      We need to keep the software stack manageable: if a simple TODO app already takes that many libraries to get the job done in reasonable time, I do not want to know what your accounting software will require to be maintained (SAP, anyone?).

                                      And the TODO app made with 13k dependencies means a huge amount of time spent maintaining those 13k dependencies for everyone. Now we cannot stop maintaining these 13k dependencies, because every TODO app in the world relies on them.

                              1. 2

                                According to the Picat developers, SETL was the first language with list comprehensions. Neat!

                                1. 2

                                  I have fond memories of looking at SETL when I wrote my thesis, but didn’t know about the list comprehension thing. That’s cool! The only language I know of that uses sets in a similar way before SETL is MADCAP, described in Sammet’s book. Kinda sad I didn’t notice this back in my grad school days.

                                1. 1

                                  I am perplexed as to the point of this. What is the value in it? Why would I want to use it? The post explains none of these things.

                                  1. 0

                                    The point is to run tests on shell scripts, so that as code changes over time, the output/result of that code is what’s expected/required.

                                    The whole concept of unit testing is explained on Wikipedia if you wish to know more: https://en.wikipedia.org/wiki/Unit_testing

                                    1. 1

                                      Then I consider this a very poor attempt at a unit testing library.

                                      1. 1

                                        Why do you say that?

                                  1. 31

                                    1995 was the year in which programmers stopped thinking about the quality of their programs.

                                    Use software from before 1995 and you know that this isn’t even close to true.

                                    Edit: To include an example, here’s a review for the High C 386 compiler from 1987 if you want to see just one of many examples of poor quality (and expensive) software rushed to market.

                                    1. 2

                                      Well, there is an even better approach: structured logging.

                                      1. 2

                                        How does structured logging address the issues brought up by the article? I can see it helping with grepable messages, but it’s not clear how using structured logging in and of itself addresses the other concerns.

                                        1. 1

                                          It makes “Add dynamic information” and “grep-able messages” pretty straightforward, as it is “natural” to add dynamic information, and with structured data grepping is much easier. It also solves “No screaming”, as with the previous points: you rarely need to scream at all.
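
                                          As a rough sketch of what that can look like from a shell script (assuming jq is available; the field names here are just examples), you emit one JSON object per line and put the dynamic bits in separate fields:

                                          log() {
                                                  level=$1 msg=$2 user=$3
                                                  # one JSON object per line; dynamic information goes into fields, not prose
                                                  jq -cn --arg ts "$(date -u +%FT%TZ)" --arg level "$level" --arg msg "$msg" --arg user "$user" \
                                                          '{time: $ts, level: $level, msg: $msg, user: $user}'
                                          }

                                          log info  "password changed" "alice"
                                          log error "payment failed"   "bob"

                                          # "grepping" becomes a query instead of a fragile regex:
                                          #   jq -r 'select(.level == "error") | .user + ": " + .msg' app.log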

                                      1. 39

                                        Images (in the modern web) are uninformative, but are simply advertisements of content.

                                        This is straight-up clickbait nonsense. Images are a vital part of conveying useful information and have been since humans started communicating. Try reading a math text without figures or describing anatomy purely with words. They do much more than advertise.

                                        I think the dig at the “modern web” may be at posts that throw in gratuitous images, especially the kind of thing you see on Medium. Such images often are pointless, but then again, so is the text.

                                        There is a good point in the post about subtracting content to get at the core of a good UX. This is well known. (“Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.”) If you can express your work simply with text then that is great. But it does not follow that you should start from a position where you reject things like CSS, images, and JS simply because text is “simpler”. This strikes me as the fatal conceit of modern web minimalists.

                                        1. 3

                                          What is the value in this versus using tools that already do this, without relying on the file magic database?

                                          The core functionality is easily written as a shell function (with short-circuiting).

                                          elfdbg() {
                                              readelf -S "$1" | awk '$2 ~ /^\.debug_/ {rc=1; nextfile;} END {exit(!rc)}'
                                          }
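
                                          For example (hypothetical binary name; the exit status carries the answer):

                                          if elfdbg ./myprog; then
                                                  echo "debug info present"
                                          else
                                                  echo "stripped, or built without -g"
                                          fi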
                                          
                                          1. 7

                                            I actually stopped doing this a little while ago partly because of the user environment stuff mentioned in the article but mostly because of some oddities that crept up when connecting to my machine over SSH and attaching to the Emacs daemon. Sometimes it would refuse to start a TTY client for reasons that remain a mystery.

                                              As for the environment stuff, I consider exec-path-from-shell to be an awful hack that I simply can’t trust. (Going so far as to say “it gives Emacs a bad name” is on the cusp of acceptable, but too damning.) If you’re going to set environment variables for use by Emacs started through systemd, then you should look at environment.d. If you know it’s going to be started through GDM, then you can take advantage of the fact that $HOME/.profile is sourced by /etc/gdm3/Xsession.

                                            And if you do use environment.d, then beware of the contents of /usr/lib/environment.d/99-environment.conf on Ubuntu 20.04 (and possibly others) that sets PATH without any regard for the current value; it will probably be the last environment conf file run if you follow the number-prefix model. You can run

                                            SYSTEMD_LOG_LEVEL=debug /usr/lib/systemd/user-environment-generators/30-systemd-environment-d-generator
                                            

                                            to see how the environment conf files will be processed.
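
                                              For reference, a user-level drop-in is just newline-separated VAR=VALUE assignments (simple $VAR expansion is supported, per environment.d(5)); something like this, with a filename of your choosing:

                                              # ~/.config/environment.d/50-defaults.conf
                                              EDITOR=emacsclient
                                              PATH=$HOME/.local/bin:$PATH

                                              Just keep the caveat above in mind: a later-sorting file such as that 99-environment.conf may still clobber PATH.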

                                            1. 3

                                                Recently I stumbled on https://gitlab.com/blak3mill3r/emacs-ludicrous-speed/ and decided to recreate the results with simple shell scripts combined with sudo (so without a service at all, and directly using criu, not the Python wrapper), and it does make the Emacs process restore blazingly fast, as advertised.

                                                When dumping “emacs --daemon” it runs within my user’s own environment - no magic required, and is pretty much good enough for me.

                                              Only hiccup I had was the need to recompile Emacs without dbus for the dump process to work flawlessly but that’s a small price I think (criu can’t dump a process with connected unix sockets and disabling dbus in emacs was easier than shutting down dbus daemon).

                                              I’m running this setup just for couple of days only, so can’t say anything about stability of the approach but even now I can say I love it. I might dump the approach (pun intended) when native compilation gets merged to master just to have even less apparent complexity. I guess it won’t be faster - hard to beat restoring from a process image probably but might be good enough.

                                              1. 2

                                                Thanks for the feedback! I’ll add a mention of environment.d in the article.

                                              1. 6

                                                This is practically the exact same procedure as the one described by an Atlassian post on the same subject.

                                                1. 5

                                                   Seems like the original source of both posts is an HN comment.

                                                  1. 1

                                                    I think any article that talks about dotfiles is very similar.

                                                  1. 4

                                                    I think the usual flags (spam) are a good first step.

                                                    1. 3

                                                      I concur. They’re pretty easily noticed and attract spam flags quite readily at the moment.

                                                      1. 1

                                                         This is especially true if it is caught early, based on the patterns I have seen.

                                                    1. 3

                                                      A hard limit of X stories linking to the same domain (with a few exceptions for big domains like github or the msdn blogs) per Y units of time for each account seems like a good idea to me.

                                                      The last time this was suggested, people complained about not being able to submit articles from their blogs anymore. I’d argue this is a feature, not a bug. I believe that if the lobste.rs community likes what you write, its members will follow your blog and start posting your articles. If on the other hand it doesn’t and you can’t post anymore, nothing of value was lost for the community.

                                                      1. 4

                                                        I believe that if the lobste.rs community likes what you write, its members will follow your blog and start posting your articles.

                                                         This seems idealistic. I don’t think we should prevent people from posting to their own blog. Like most things, it is not a meritocracy. There is some level of self-promotion that can and should happen for those who have something to contribute.

                                                        1. 2

                                                          Why not make it dependent on the number of votes a particular story gets? It sounds as if the problem is the same person posting things from the same domain, not different people posting things from the same domain (in the absence of large numbers of sock-puppet accounts, if someone’s blog is submitted by lots of people that probably means that it has a load of things different people find interesting ). For each user, track the number of votes a domain has received since the user last posted a story in that domain, initialised at 10. Posting a story in a domain costs 10 from this total. If you post something from your blog and it gets < 10 votes, you can’t post again. Someone else can, and if that pushes the total votes above 10, you can post again. A few possible refinements:

                                                          • Add a -10 penalty for submissions marked as spam.
                                                          • Add a +1 bonus for each {user, domain} pair every month, so you get another try after a bit under a year.
                                                          • Let the user’s karma score adjust the cost of posting.
                                                          • Make new users inherit the initial scores from the person who invites them, so if I post my blog and it gets no vote, no one I invite can post my blog for 10 months either.
                                                        1. 4

                                                          Here’s an example of a high-self-promotion low-engagement pattern where all or nearly all of a user’s links are to their own site/projects. If they have comments or votes they’re all or almost all on their own stories. (I’m highlighting this user because, after several failed attempts, their first non-github link came the day after their new user restriction on unseen domains expired.)

                                                          My hunch is that stories submitted by users fitting this pattern are significantly less well-received, but I haven’t yet written queries around any of this.

                                                          1. 9

                                                            My hunch is that stories submitted by users fitting this pattern are significantly less well-received, but I haven’t yet written queries around any of this.

                                                            I will state that in the last couple of years, whenever I see “authored by” in a submission and I don’t recognize the account I will take a peek at the account’s profile and possibly its submission history. If I see a lot of the same kinds of stories or many submissions to the same domain I will avoid upvoting it, even if the content seems on topic. (If it’s off topic or clearly spam, then I will flag it.)

                                                             There are a few accounts I perceive to be using Lobste.rs as a way to boost traffic to their site and to use it as their comment system. There are others that do submit almost everything they post to their blog, but they engage in other ways so I am less annoyed by that behaviour. And of course, there are those who submit something from their site but are so infrequent in doing so that I don’t perceive it to be anything other than wanting to share something they think is interesting.

                                                            I do think those in the first group end up as less well-received, but I don’t know how to detect it. Certainly in the IRC chat, the same names keep coming up.

                                                            1. 2

                                                              I do something similar, and agree.

                                                          1. 6

                                                            If someone’s using another approach to achieve the same result I’d love to hear about it!

                                                            I use a keyboard that runs the QMK firmware, and my dual-function keys are all defined at the firmware level. QMK lets you fine-tune the parameters of tap-hold behavior, but the defaults are good.

                                                            I chose to remap “;” to Control. It’s been great.

                                                            1. 2

                                                              I had a few of these dual function keys set up with my QMK board and I found that getting the timing is tricky because you can’t hit the key in succession very well. Another thing is that if you hit the key and release it, then hit it again without waiting properly, it might register as a tap instead of a hold.

                                                              I’m sure these things are fixable in firmware. It’s just another thing you might have to adjust.

                                                            1. 4

                                                               Why remap the Enter key? It’s placed so far away. I remapped my ‘a’ key to Control, as I pretty much always hover over the ‘a’ key already.

                                                              1. 1

                                                                I agree with you, but the author shared their motivations in an older post, http://emacsredux.com/blog/2013/11/12/a-crazy-productivity-boost-remap-return-to-control/

                                                                1. 1

                                                                   I guess I can see it, but the advantages seem pretty minor, certainly not what I would call “crazy”. Also, having used the “dual function” key that is described, it can be pretty finicky sometimes. Hitting Enter instead of Control can be rather disruptive.

                                                                  If it works, great. It just feels unnecessary.

                                                                  1. 2

                                                                     Well, back then this type of keyboard remapping seemed pretty novel to me; probably today I wouldn’t use the adjective crazy to describe it. :-) Still, Enter is definitely easier to press than the actual left CTRL with a pinky on most ANSI keyboards, and not having to move my hand off the home row is quite nice. I did play at some point with using SPC as control, but typing several spaces in a row becomes quite problematic with that arrangement. :-)

                                                                    1. 1

                                                                      One more thing - I came across this idea when I was working on a Mac keyboard without a left control to begin with and it was the only way I wouldn’t lose any other key (e.g. one of the Options) in exchange for the left control I desperately needed. With Linux and a normal Win keyboard that’s not as big of an issue, but I still prefer that arrangement over a control on the bottom row of the keyboard.
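
                                                               For anyone wanting the same effect on Linux/X11, here’s a rough sketch of the usual xmodmap + xcape route (not necessarily what the post above uses; keycode 36 assumes a typical layout where that key is Return):

                                                               xmodmap -e 'remove Control = Control_R'
                                                               xmodmap -e 'keycode 36 = Control_R'    # physical Return now produces Control_R
                                                               xmodmap -e 'add Control = Control_R'   # and counts as a control modifier again
                                                               xmodmap -e 'keycode any = Return'      # keep a Return keysym around for xcape to emit
                                                               xcape -e 'Control_R=Return'            # tap sends Return, hold behaves as Control

                                                               (A side effect of this particular sketch is that tapping the physical right Control will probably also emit Return.)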

                                                              1. 16

                                                                I really liked I Took a COBOL Course and It Wasn’t The Worst because it took an honest look at what is considered to be a dead technology and found that, well, it’s just as quirky as modern development in many respects. I think revisiting what we consider to be “old junk” is something the software development industry is really bad at.

                                                                1. 74

                                                                  V.v.V!

                                                                  Many thanks to @jcs, @pushcx, @Irene, @kyle, and @alynpost for their work on establishing lobsters and keeping this running.

                                                                  1. 19

                                                                    Seconded.

                                                                    I really appreciate the work put forth by the admins and those who run the place.

                                                                    A sincere thanks.

                                                                  1. 6

                                                                    What I discovered later was that design documentation, encoding the intent and decisions made during developing a system, helps teams be successful in the short term, and people be successful in the long term.

                                                                    Our team has a rule that no work should be done on a major feature without a design document that has been vetted by the entire team, and possibly with input from other teams. It is a godsend because it catches so many errors and problems ahead of time. Does it catch everything? No, but it certainly prevents some wrong turns before they happen. And it has proven to be helpful when people challenge us as to why we did it one way instead of another because we document failed ideas with reasons as to why they don’t work.

                                                                    I recommend teams consider what a little design up front can do for them, especially thinking through many scenarios. It really helps.

                                                                    1. 3

                                                                      Similarly, my team has a rule that any assumptions must be clearly stated. Now granted I am not a professional programmer and I work on team with zero professional programmers, but for the code we do write, this is a must. I wrote a function earlier today that polls an API and does some work with the response and by my team’s rules I had to specify an example of what I assumed the API response looks like under normal circumstances. The code then went on to do some manipulation of that data assuming a normal API response. If someone is trying to debug the code in the future they’ll be able to see what assumptions I made (the API response looks like this) and determine really quickly if the code is failing because the API response looks different now.

                                                                      It really helps us because the API endpoints can change faster than we write new code. Explicitly documenting the state of the environment when the code was written helps the next person understand what might have gone wrong since the last time the code was changed. It’s usually the API that changed rather than our code mysteriously breaking.

                                                                      1. 2

                                                                        Do you also use that example in a test and assert the outcome of the function is what you desire it to be? Because if you don’t, I predict you would find that hugely helpful.

                                                                        1. 1

                                                                          Stupidly our company requires that any tests have to cover at least 95% of the code. That rule doesn’t apply to code with no tests. So we only use tests in projects where we have the overhead to complete the testing. We’re a team of consultants, not programmers, so we rarely have enough time to write tests with 95% coverage.

                                                                      2. 2

                                                                        What would be a design document for you/your team? How did you ensure that vetting was more than just a rubber stamp?

                                                                         In several companies I’ve been to, the term ‘design document’ meant similar, yet different, things. In one, it was a one-pager motivating the work; in another, it was a free-form document where it was considered good to outline other approaches. But in every situation, there was not a lot of challenging of the content, as if these documents were there just to convince ourselves that we had done the due diligence.

                                                                        1. 1

                                                                          This is something the team mandates on itself, it’s not something forced on us. We take it seriously. Design documents often take weeks to be worked out and we present and discuss them in team meetings. In fact, the near final version must be presented to the team before it can be approved.

                                                                          I guess the answer to your question is that things don’t get rubber stamped because everyone agrees that is a bad idea.

                                                                      1. 8

                                                                        This is almost adorably naive in its implementation. I guess it’s fine for simple Makefiles, but is not of much use for complex ones, where targets are computed, generated, or included.

                                                                        Also, having to add markers manually is a hopeless endeavor.

                                                                        1. 2

                                                                          I’ve done basically what the author suggests many times and pretty much by definition I don’t need it for anything but the “main” targets that a user is expected to type. So while I totally agree with you that it won’t work in some cases, I suspect that those cases are less common than you seem to imply.

                                                                          1. 1

                                                                             Then you have been fortunate to work on projects where the Make definitions probably haven’t gotten too messy and don’t rely on something like GNU autotools.

                                                                            Except for small, personal projects, I have not ever worked on something where this approach to Makefile documentation was viable. Documenting the “main” targets was always done as part of the README or something similar so that you would know about targets that were conditionally included or generated.

                                                                          2. 2

                                                                            I’ve done something very similar, but instead of reading all the Makefiles when running the help target, it generates an included file that is updated as a dependency of the Makefiles. (And the delimiter I used was #: rather than ##) You are right that it requires adding the markers manually, but the intent is to document the “main” target commands rather than every possible target.

                                                                            help: #: Prints this help
                                                                            .SILENT: help.Makefile
                                                                            help.Makefile: $(MAKEFILE_LIST)
                                                                            	printf '# This file was generated by make\n' help >"$@"
                                                                            	printf '.PHONY: %s\n' help >>"$@"
                                                                            	printf '%s:\n' help >>"$@"
                                                                            	grep -E '^.+: *#: *' $(MAKEFILE_LIST) \
                                                                            	| cut -d: -f2,4 \
                                                                            	| sort \
                                                                            	| awk 'BEGIN {FS = ":"};{printf "\t$$(info %-18s %s)\n",$$1,$$2}' \
                                                                            	>>"$@"
                                                                            
                                                                            -include help.Makefile
                                                                            
                                                                            1. 1

                                                                              generate from make -pn?
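
                                                                              Something along those lines can work; here’s a rough sketch (assumes GNU make and awk, and it dumps every target in make’s database rather than just the documented ones, which is exactly why the #: markers read nicer):

                                                                              # list targets parsed out of make's internal database
                                                                              make -pn 2>/dev/null \
                                                                              | awk -F: '/^[a-zA-Z0-9][^$#\/\t=]*:([^=]|$)/ { print $1 }' \
                                                                              | tr ' ' '\n' | sort -u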