1. 30

    In the Hacker News thread about the new Go package manager, people were angry about Go, since the npm package manager was obviously superior. I can see the quality of that now.

    There’s another Lobsters thread right now about how distributions like Debian are obsolete. The idea is that people use stuff like npm now, instead of apt, because apt can’t keep up with modern software development.

    Kubernetes’ official installer is some curl | sudo bash thing instead of providing any kind of package.

    In the meantime I will keep using only FreeBSD/OpenBSD/RHEL packages and avoid all these nightmares. Sometimes the old ways are the right ways.

    1. 7

      “In the Hacker News thread about the new Go package manager, people were angry about Go, since the npm package manager was obviously superior. I can see the quality of that now.”

      I think this misses the point. The relevant claim was that npm has a good general approach to packaging, not that npm is perfectly written. You can be solving the right problem, but writing terribly buggy code, and you can write bulletproof code that solves the wrong problem.


        npm has a good general approach to packaging

        The thing is, their general approach isn’t good.

        They only relatively recently decided locking down versions is the Correct Thing to Do. They then screwed this up more than once.

        They only relatively recently decided that having a flattened module structure was a good idea (because presumably they never tested in production settings on Windows!).

        They decided that letting people do weird things with their package registry is the Correct Thing to Do.

        They took on VC funding without actually having a clear business plan (which is probably going to end in tears later, for the whole node community).

        On and on and on…


          Go and the soon-to-be-official dep dependency management tool manage dependencies just fine.

          The Go language has several compilers available. Traditional Linux distro packages together with gcc-go is also an acceptable solution.


            It seems the soon-to-be-official dep tool is going to be replaced by another approach (currently named vgo).

          2. 1

            I believe there’s a high correlation between the quality of the software and the quality of the solution. Others might disagree, but that’s been pretty accurate in my experience. I can’t say why, but I suspect it has to do with the same level of care being put into both the implementation and into understanding the problem in the first place. I cannot prove any of this; it’s just my heuristic.

            1. 8

              You’re not even responding to their argument.


                There’s the npm registry/ecosystem and then there’s the npm cli tool. The registry/ecosystem can be used with clients other than the npm cli client, and when discussing npm in general, people usually mean the ecosystem rather than the specific implementation of the npm cli client.

                I think npm is good but I’m also skeptical about the npm cli tool. One doesn’t exclude the other. Good thing there’s yarn.


                  I think you’re probably right that there is a correlation. But it would have to be an extremely strong correlation to justify what you’re saying.

                  In addition, NPM isn’t the only package manager built on similar principles. Cargo takes heavy inspiration from NPM, and I haven’t heard about it having a history of show-stopping bugs. Perhaps I’ve missed the news.

              2. 8

                The thing to keep in mind is that all of these were (hopefully) done with the best intentions. Pretty much all of these had a specific use case… there’s outrage, sure… but they all seem to have a reason for their trade-offs.

                • People are angry about a proposed go package manager because it throws out a ton of the work that’s been done by the community over the past year… even though it’s fairly well thought out and aims to solve a lot of problems. It’s no secret that package management in go is lacking at best.
                • Distributions like Debian are outdated, at least for software dev, but their advantage is that they generally provide a rock solid base to build off of. I don’t want to have to use a version of a python library from years ago because it’s the only version provided by the operating system.
                • While I don’t trust curl | sh it is convenient… and it’s hard to argue that point. Providing packages should be better, but then you have to deal with bug reports where people didn’t install the package repositories correctly… and differences in builds between distros… and… and…

                It’s easy to look at the entire ecosystem and say “everything is terrible” but when you sit back, we’re still at a pretty good place… there are plenty of good, solid options for development and we’re moving (however slowly) towards safer, more efficient build/dev environments.

                But maybe I’m just telling myself all this so I don’t go crazy… jury’s still out on that.


                  Distributions like Debian are outdated, at least for software dev,

                  That is the sentiment that seems to drive the language-specific package managers. I think what is driving this is that software often has way too many unnecessary dependencies, which makes setting up the environment to build the software hard and time-consuming.

                  I don’t want to have to use a version of a python library from years ago because it’s the only version provided by the operating system.

                  Often, though, it is possible to install libraries in another location and point your software at them.
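
                  For Python specifically, one way to sketch this is pip’s --target option plus PYTHONPATH. The directory and the module name “somelib” below are made up for illustration, not from the thread:

                  ```shell
                  # Hypothetical sketch: give one application a private copy of a
                  # library instead of the distro-provided one.
                  libdir="/tmp/myapp-lib"
                  mkdir -p "$libdir"

                  # Normally you would populate the directory with something like:
                  #   pip install --target="$libdir" 'somelib==2.0'
                  # To keep this sketch runnable offline, fake an installed module:
                  printf "VERSION = '2.0'\n" > "$libdir/somelib.py"

                  # Redirect only this program to the private location:
                  PYTHONPATH="$libdir" python3 -c 'import somelib; print(somelib.VERSION)'
                  ```

                  The same idea applies to shared C libraries via LD_LIBRARY_PATH; either way, the rest of the system keeps using the distro-provided version.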

                  It’s easy to look at the entire ecosystem and say “everything is terrible” but when you sit back, we’re still at a pretty good place…

                  I’m not so sure. I foresee an environment where actually building software is a lost art, where people directly edit interpreted files in place inside a virtual machine image/flatpak/whatever because they no longer know how to build the software and set up the environment it needs. And then some language-specific package manager for distributing these images.

                  I’m growing more disillusioned the more I read Hacker News and lobste.rs… Help me be happy. :)


                    So like Squeak/Smalltalk images then? What’s old is new again, I suppose.



                      I’m not so sure. I foresee an environment where actually building software is a lost art, where people directly edit interpreted files in place inside a virtual machine image/flatpak/whatever because they no longer know how to build the software and set up the environment it needs. And then some language-specific package manager for distributing these images.

                      You could say the same thing about Docker. I think package managers and tools like Docker are a net win for the community. They make it faster for experienced practitioners to set up environments and they make it easier for inexperienced ones as well. Sure, there is a lot you’ve gotta learn to use either responsibly. But I remember having to build redis every time I needed it because it wasn’t in Ubuntu’s official package manager when I started using it. And while I certainly appreciate that experience, I love that I can just install it with apt now.


                      I don’t want to have to use a version of a python library from years ago because it’s the only version provided by the operating system.

                      Speaking of Python specifically, it’s not a big problem there because everyone is expected to work within virtual environments and nobody runs pip install with sudo. And when libraries require building something binary, people do rely on system-provided stable toolchains (compilers and -dev packages for C libraries). And it all kinda works :-)
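
                      A minimal sketch of that per-project workflow (the directory name .venv is just a common convention; nothing here comes from the comment itself):

                      ```shell
                      # Everything here runs as a normal user; no sudo involved.
                      python3 -m venv .venv                        # create an isolated environment
                      . .venv/bin/activate                         # use it for this shell session
                      python -c 'import sys; print(sys.prefix)'    # now points inside .venv
                      # a "pip install <library>" here would land in .venv/, never system-wide
                      deactivate
                      ```

                      Each project gets its own .venv, so two projects can depend on different versions of the same library without touching the system packages.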


                        I think virtual environments are a best practice that unfortunately isn’t followed everywhere. You definitely shouldn’t run pip install with sudo, but I know of a number of companies where part of their deployment is to build a VM image and sudo pip install the dependencies. It’s the same thing with npm, however: in theory you should just run as a normal user and have everything installed to node_modules, but this clearly isn’t always the case, as shown by this issue.

                        1. 5

                          nobody runs pip install with sudo

                          I’m pretty sure there are quite a few devs doing just that.


                            Sure, I didn’t count :-) The important point is they have a viable option not to.


                            npm works locally by default, without even doing anything to make a virtual environment. Bundler, Cargo, Stack etc. are similar.

                            People just do sudo because Reasons™ :(


                          It’s worth noting that many of the “curl | bash” installers actually add a package repository and then install the software package. They contain some glue code like automatic OS/distribution detection.
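
                          That glue often looks roughly like this sketch (the repository and package names are placeholders, not from any real installer):

                          ```shell
                          # Detect the distribution from os-release; the path is a parameter
                          # only so the function can be exercised against a fake file.
                          detect_distro() {
                              . "${1:-/etc/os-release}"
                              echo "$ID"
                          }

                          # Map the result to per-distro steps. Echoed rather than executed,
                          # since a real installer would need sudo and a live package repo.
                          install_steps_for() {
                              case "$1" in
                                  debian|ubuntu) echo "add apt repo pkgs.example.com, then apt-get install example-tool" ;;
                                  fedora|rhel|centos) echo "add yum repo pkgs.example.com, then dnf install example-tool" ;;
                                  *) echo "unsupported" ;;
                              esac
                          }

                          install_steps_for "$(detect_distro)"
                          ```

                          Once the repository is registered this way, later updates at least flow through the normal package manager instead of another curl | bash.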


                            I’d never known true pain in software development until I tried to make my own .debs and .rpms. Consider that some of these newer packaging systems might have been built because Linux packaging is an ongoing tire fire.


                              With fpm https://github.com/jordansissel/fpm it’s not that hard. But yes, using the Debian- or Red Hat-blessed way to package stuff and getting it into the official repos is definitely painful.


                                I used the Gradle plugins with success in the past, but yeah, writing spec files by hand is something else. I am surprised nobody has invented a more user-friendly DSL for that yet.


                                  A lot of the difficulties when doing Debian packages come from policy. For your own packages (not targeted to be uploaded into Debian), it’s far easier to build packages if you don’t follow the rules. I like to pretend this is as easy as with fpm, but you get some bonuses from it (building in a clean chroot, automatic dependencies, service management like the other packages). I describe this in more detail here: https://vincent.bernat.im/en/blog/2016-pragmatic-debian-packaging


                                  It sucks that you come away from this thinking that all of these alternatives don’t provide benefits.

                                  I know there’s a huge part of the community that just wants things to work. You don’t write npm for fun; you end up writing stuff like it because you can’t get current tools to work with your workflow.

                                  I totally agree that there’s a lot of messiness in this newer stuff that people in older structures handle well. So… we can knowledge-share and actually make tools on both ends of the spectrum better! Nothing about Kubernetes requires a curl’d installer, after all.

                                1. 31

                                    I don’t disagree with the article. However, I also can’t help but be reminded of the “I only own 1 fork, 2 t-shirts, a backpack, and a laptop” people.

                                  “I disable HTML, CSS, Javascript and all that bloat … I only browse the internet with Emacs … ALSO … oh my god did I tell you how much RAM this Electron blasphemy uses? … what is wrong with some good old ugly lightweight Tk GUIs? I mostly use tmux over SSH anyways so who needs GUIs right right? Everything is bloated. Everything is unnecessary. Disable everything. And make sure that your stuff gracefully falls back from 2018 to this authentic vintage record player that I feel like using as my alternative web browser today for extra privacy protection.”

                                  I get it. I don’t even particularly disagree with it. But it’s turning into a bit of a meme.

                                  Also for clarity I don’t mean to imply that the author said those things. The post just reminded me of this theme.

                                  1. 5

                                    I get it. I don’t even particularly disagree with it. But it’s turning into a bit of a meme.

                                      It’s to distinguish yourself, as opposed to those that run WordPress and use IDEs. And yes, it relates to the minimalism you mention.

                                      (Did I mention my writing space uses pandoc and a couple of lines of shell?)

                                    1. 1

                                        I tried that, too, but found that pandoc is actually quite difficult to maintain, since it lives in the Haskell ecosystem (which isn’t very available in the non-GNU Linux world).

                                        I’m with the Markdown/Jekyll stack now. I don’t think it’s less bloated, but at least I can outsource the rendering (and therefore having the stuff installed) to GitHub Pages.

                                      1. 1

                                        It has a lot of weird issues, like pandoc output not being really stable, so compiling with a newer version of pandoc leads to a lot of churn in the results.

                                        pandoc releases binaries though, and whenever I’m on non-GNU, I just get the installer and install it.

                                    2. 5

                                        I want to add that minimalism can have a nice payout: reduced resource usage. If you use tmux instead of Xorg and w3m instead of Firefox, suddenly a 15-year-old laptop is not scrap metal anymore. They cost like $60 each on eBay, but you can usually get them for free from relatives.

                                      1. 3

                                        I think it’s good to have both extremes, like that everyone can choose something in the middle.
                                        However, 2016 and 2017 have shown that there’s a trend to not take that middle way.

                                        1. 2

                                            Maybe it’s becoming a meme. There are memes for everything nowadays. There were memes for GUIs too, or rather the GUI was a meme that grew too big to be called a meme; now it is the standard way of computing for most people.

                                            If bulk can be stripped out of interfaces (graphical, text, any kind!), that is good, but if it becomes yet another meme… yes, it kills all the fun.

                                        1. 2

                                            Electron bashing reminds me of Haskell (or similar) programmers bashing PHP.

                                          Just like PHP, Electron has problems, but it has many practical benefits and that’s why it’s popular despite having those problems.

                                          Their popularity (despite their problems) shows just how weak and problematic some of the alternatives are.

                                          A minority of purists will moan while people continue to produce value using these tools.

                                          1. 1

                                              Yes, but there’s another thing: PHP became extremely popular for historic reasons (shared web hosts where you could not execute custom binaries but had mod_php or so; much simpler to get started with than CGI binaries) and then just stayed there, at least to some extent, also for economic reasons: there are vast numbers of people who feel comfortable programming PHP and find jobs doing it, so they often don’t really see a reason to look into anything else. Because a) it’s popular, b) it ‘works’ for them, c) they’ve heard that the other stuff is more complicated.

                                              With Electron, getting started is easy. When you hit performance problems or so, you have often already invested far too many resources in the platform to switch. With Qt, for example, getting started is perceived as much more difficult.

                                          1. 3

                                            Businesses tend to derail software development practices because they tend to view it as “hip brand name for how we exert power over employees”.

                                            They will hang on to small pieces of a methodology as a cover for something else.

                                            For example they may weaponise “daily agile standup meeting” for micro-management.

                                            1. 1

                                              Exactly - this is the most dangerous thing about “Agile”. It makes software developers the least important and least powerful people in software development, when really we should be the ones controlling our own field.

                                            1. 5

                                              I support this (for being more complete/explicit).

                                              1. 14

                                                  Lacking a bachelor’s degree affects your career in development in at least one significant way: limiting your salary and promotion potential. Outside “competent” tech companies, Big Dumb Corp (i.e. the rest of the Fortune 500) HR will always use the lack of a BS degree (or having only an Associate’s) as a reason to offer less salary up front, give lower raises once you’re on staff, and deny promotion. It’s a check box incompetents use because they can’t tell who actually contributes. Some of the best developers I’ve worked with have had no degrees and been self-taught. It’s not right, but it’s what I’ve seen wherever I’ve worked.

                                                1. 6

                                                  Another unfortunate but real side effect is many people may be less than thrilled to “work under” you if they have degrees (i.e. self-taught engineer in charge of multiple PhDs).

                                                  The only exception is if you are some god authority figure like Linus Torvalds where no one dares to challenge your expertise.

                                                  1. 4

                                                      That’s a bias too. There is nothing to say that an engineer without a degree cannot do a good job managing a highly credentialed staff. As long as they have humility, know their limits, and are thinking about how to get the best out of someone, it should be possible. In lots of research-based organisations this doesn’t occur much, because the needs of the job (not the people management) require the PhD, but in the tech industry there are lots of PhDs being managed by less credentialed individuals.

                                                    1. 1

                                                      I agree. The thing is it’s common enough that you will not be able to consistently escape it.

                                                  2. 3

                                                    True, startups and most tech companies don’t care. Fortune 500, consultancies etc will be harder.

                                                    1. 1

                                                        I think that is less of a problem outside of the US (and maybe the UK?). I’m not in those countries and have not been to university, and I’m doing OK as a developer. I think you just need other ways to show your skills, such as a website/blog/GitHub/experience. Once you get your first job (it’s probably not going to be stellar), all the companies after that will mainly be looking at your experience in the workforce.

                                                    1. 8

                                                      There’s one legitimate use case for password expiration policy that often gets ignored.

                                                      If a password is compromised the attacker can possibly quietly access a system behind a user’s back and siphon out information for years and years.

                                                        For example, the CEO’s email password has been compromised and a hacker establishes a script to siphon out and archive messages, which could last for years.

                                                      A password expiration policy limits this risk by setting a time window.

                                                        If you don’t have that, you can potentially get into situations where you cannot reliably determine what has been compromised and what hasn’t been.

                                                      Password expiration gives you a frame of reference for when the compromise could and could not have happened.

                                                      1. 7

                                                        The problem with this reasoning is that it ignores the initial attack vector; how did the attacker compromise that password in the first place? It doesn’t just magically fall out of the sky once and then never reappear again.

                                                        Very often you’ll find that the password was obtained by compromising a victim’s computer, breaking into an auxiliary system that shares the same passwords, and so on. In these cases it is absolutely useless to change the password, because the attacker could just as easily obtain the new password and continue their business.

                                                        There’s a small set of cases where a time window might limit exposure, such as reused passwords in password dumps; but 1) these cases are better mitigated by preventing low-entropy passwords, 2) you’re still vulnerable for 3 months or whatever your window is, which is more than enough time to siphon out all information from most networks, 3) you’d be much better protected by just having proper monitoring of user sessions in the first place.

                                                        Is there theoretically a nonzero benefit to password expiry? Yes, but only if your security is already otherwise lacking, and even then it’s not a common case, and at that point it’s absolutely not worth it considering the big downside of forced password expiry: it incentivizes people to pick worse passwords, because remembering complex passwords is a big time investment that’s no longer worth it.

                                                        1. 1

                                                            This would indeed work if you can use a password manager at that point. If it is a login prompt, you usually can’t use a password manager. If web service X has a policy of changing the password every Y months, it isn’t much trouble; I just use my password manager. I do that occasionally (manually) anyway for social accounts.

                                                            Also, as the article suggests, for the particular threat you mention you’d better use 2FA to mitigate it.

                                                        1. 1

                                                            Naming is about communicating from one human to another, packing a high amount of meaning (that function’s behavior) into an extremely low amount of information (the function name).

                                                            Really wish all this code cross-reference tooling focused on showing documentation and linking it, not code. Texinfo is mediocre, but it supports the type of indexing that is useful.

                                                          1. 2

                                                            Agreed. I have compared this to the saying that “sometimes the only way to escape the fire is to run through it”.

                                                            I don’t mean this in a practical sense for doing today in your source code, but as a philosophical concept. It’s better to name something “oldPanda” than “findLastUserUnpaidInvoiceSomethingSomething”.

                                                            In the first instance you just assign a name, a symbol, to a concept. You are not fighting to pack lots of information into a tiny space. Because the symbol is meaningless it can precisely mean what it is representing.

                                                            In the second instance you make an attempt at packing information into somewhere that simply does not fit. Now this incomplete and inaccurate name will become one of your worst enemies for years to come.

                                                              The relation to the saying at the top is that, just like running through fire, having obscure symbols and names is something we naturally want to avoid, so we try to cram meaning into variable names. That is perhaps the “obvious” solution, like running away from a fire, but it’s not necessarily always the best.

                                                          1. 24

                                                            We are excited to continue experimenting with this new editing paradigm.

                                                            That’s fine, but this is not new.

                                                            Structured editors (also known as syntax-directed editors) have been around since at least the early 80s. I remember thinking in undergrad (nearly 20 years ago now) that structured editing would be awesome. When I got to grad school I started to poke around in the literature and there is a wealth of it. It didn’t catch on. So much so that by 1986 there were papers reviewing why they didn’t: On the Usefulness of Syntax Directed Editors (Lang, 1986).

                                                            By the 90s they were all but dead, except maybe in niche areas.

                                                            I have no problem with someone trying their hand at making such an editor. By all means, go ahead. Maybe it was a case of poor hardware or cultural issues. Who knows. But don’t tell me it’s new because it isn’t. And do yourself a favour and study why it failed before, lest you make the same mistakes.

                                                            Addendum: here’s something from 1971 describing such a system. User engineering principles for interactive systems (Hansen, 1971). I didn’t know about this one until today!

                                                            1. 9

                                                              Our apologies, we were in no way claiming that syntax-directed editing is new. It obviously has a long and storied history. We only intended to describe as new our particular implementation of it. That article was intended for broad consumption. The vast majority of the users with whom we engage have no familiarity with the concepts of structured editing, so we wanted to lay them out plainly. We certainly have studied and drawn inspiration from many of the past and current attempts in this field, but thanks for those links. Looking forward to checking them out. We are heartened by the generally positive reception and feedback – the cloud era offers a lot of new avenues of exploration for syntax-directed editing.

                                                              1. 3

                                                                Looks like you’ve been working hard on it. Encouraging!

                                                              2. 7

                                                                This is an interesting relevant video: https://www.youtube.com/watch?v=tSnnfUj1XCQ

                                                                 The major complaint about structured editing has always been a lack of flexibility in editing incomplete/invalid programs, creating an uncomfortable point-and-click experience that is not as fluid and freestyle as text.

                                                                However that is not at all a case against structured editing. That is a case for making better structured editors.

                                                                That is not an insurmountable challenge and not a big enough problem to justify throwing away all the other benefits of structured editing.

                                                                1. 4

                                                                  Thanks for the link to the video. That’s stuff from Intentional Software, something spearheaded by Charles Simonyi(*). It’s been in development for years and was recently acquired by Microsoft. I don’t think they’ve ever released anything.

                                                                  To be clear, I am not against structured editing. What I don’t like is calling it new, when it clearly isn’t. And the lack of acknowledgement of why things didn’t work before is also disheartening.

                                                                  As for structured editing itself, I like it and I’ve tried it, and the only place I keep using it is with Lisp. I think it’s going to be one of those “worse is better” things: although it may be more “pure”, it won’t offer enough benefit over its cheaper – though more sloppy – counterpart.

                                                                  (*) The video was made when he was still working on that stuff within Microsoft. It became a separate company shortly after, in 2002.

                                                                  1. 1

                                                                    I mentioned this in the previous discussion about isomorf.

                                                                    Here is what I consider an AST editor done about as right as can be done, in terms of “getting out of my way”

                                                                    Friend of mine Rik Arends demoing his real-time WebGL system MakePad at AmsterdamJS this year

                                                                  2. 4

                                                                      Right, so I’ve taken multiple stabs at research on this stuff in various forms over the years: everything from AST editors to visual programming systems and AOP. I had a bit of an exchange with @akent about it offline.

                                                                    I worked with Charles a bit at Microsoft and later at Intentional. I became interested in it since there is a hope for it to increase programmer productivity and correctness without sacrificing performance.

                                                                      You are totally right though, Geoff: the editor experience can be a bugger, and if you don’t get it right, your customers are going to feel frustrated and claustrophobic and walk away. That’s the way the Intentional Programming system felt way back when - very tedious. Hopefully they’ve improved it a lot.

                                                                    I attacked it from a different direction to Charles using markup in regular code. You would drop in meta-tags which were your “intentions” (using Charles’ terminology). The meta-tags were parameterized functions that ran on the AST in-place. They could reflect on the code around them or even globally, taking into account the normal programmer typed code, and then “insert magic here”.

                                                                    Turned out I basically reinvented a lot of the Aspect-Oriented Programming work that Gregor Kiczales had done a few years earlier, although I had no idea at the time. Interestingly, Gregor was the co-founder of Intentional Software along with Charles.

                                                                    Charles was more into the “one-representation-to-rule-them-all” thing though and for that the editor was of supreme importance. He basically wanted to do “Object Linking and Embedding”… but for code. That’s cool too.

                                                                    There were many demos of the fact that you could view the source in different ways, but to be honest, I think that although this demoed really well, it wasn’t as useful (at least at the time) as everyone had hoped.

                                                                    My stuff had its own challenges too. The programs were ultra powerful, but they were a bit of a black-box in the original system. They were capable of adding huge gobs of code that you literally couldn’t see in the editor. That made people feel queasy because unless you knew what these enzymes did, it was a bit too much voodoo. We did solve the debugging story if I remember correctly, but there were other problems with them - like the compositional aspects of them (which had no formalism).

                                                                    I’m still very much into a lot of these ideas, and things can be done better now, so I’m not giving up on the field just yet.

                                                                    Oh yeah, take a look at the Wolfram Language as well - another inspirational and somewhat related thing.

                                                                    But yes, it’s sage advice to see why a lot of the attempts have failed at least to know what not to do again. And also agree, that’s not a reason not to try.

                                                                    1. 6

                                                                      From the first article, fourth page:

                                                                      The case of Lisp is interesting though because though this language has a well defined syntax with parenthesis (ignoring the problem of macro-characters), this syntax is too trivial to be more useful than the structuring of a text as a string of characters, and it does not reflect the semantics of the language. Lisp does have a better structured syntax, but it is hidden under the parenthesis.

                                                                      KILL THE INFIDEL!!!

                                                                      1. 2

                                                                        JetBrains’ MPS uses a projectional editor. I am not sure if this is only really used in academia or if it is also used in industry. The mbeddr project is built on top of it. I remember using it and being very frustrated by the learning curve of the projectional editor.

                                                                      1. 2

                                                                        This is at least partially the “craft” part of programming. There’s a special sense and taste that you develop after years of programming that you cannot put into words and cannot learn from books.

                                                                        It’s not at all unique to programming either. It’s like becoming a kung fu master or a master chef, things like that.

                                                                        1. 2

                                                                          I do not think that it can’t be put into words, but it is devilishly hard to do so. It’s also very worthwhile. I often find that if I fight to get one of those hard-to-describe things onto paper, I learn something very valuable in the process, and I come to understand better what it is I subconsciously know.