1. 11

    After helping polish up a few things with the Fennel lisp compiler (https://github.com/bakpakin/Fennel), I’ve created an Emacs mode for it (https://gitlab.com/technomancy/fennel-mode) and am hoping to sketch out a few simple games in the Love2d game engine to put it through its paces.

    1.  

      Is there a reason why Fennel is hosted on GitHub and the Emacs mode on GitLab? Why not host them both on either GitLab or GitHub?

      1.  

        Fennel is not my own project; it was started in 2016 by Calvin Rose, but he only worked on it a couple weeks before moving on. I discovered it last week and started submitting patches, and now he’s picked it back up. I just created the Emacs mode yesterday, and I host all my own projects on GitLab.

    1. 11
      • Semver allows me to break compatibility in major releases. The author recommends not to, and to support old interfaces for years. The latter is out of the question for hobby projects. The package manager of the language I use (e.g. Cargo, npm) is able to deal with this just fine. Apt is not.
      • Cargo or npm allow me to lock all versions in my dependency tree and to restrict version ranges. The latter in particular is what keeps API breakages (in compliance with semver) from becoming a problem. Again, it’s only apt that can’t deal with this. Cargo or npm can resolve the conflict automatically and find appropriate versions, or do version splits if necessary. In Debian packaging, version splits are a manual task done by a person, which, to me, is the actual friction here.
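      The locking-and-ranges point can be sketched with a toy resolver (hypothetical version data; real resolvers like Cargo’s work over entire dependency graphs):

```python
# Toy semver-style resolution: pick the newest published version that
# falls inside a requested range. Versions are (major, minor, patch).
AVAILABLE = [(1, 2, 0), (1, 3, 1), (2, 0, 0)]  # hypothetical registry state

def resolve(lower, upper):
    """Newest available version v with lower <= v < upper, or None."""
    matching = [v for v in AVAILABLE if lower <= v < upper]
    return max(matching, default=None)

# A caret range like ^1.2 means ">=1.2.0, <2.0.0": compatible updates
# are taken automatically, but the semver-breaking 2.0.0 is excluded.
print(resolve((1, 2, 0), (2, 0, 0)))    # (1, 3, 1)

# Without an upper bound, resolution happily lands on the breaking release.
print(resolve((1, 2, 0), (999, 0, 0)))  # (2, 0, 0)
```

      Version splits then amount to keeping several such non-overlapping ranges satisfied at once, which Cargo and npm automate and Debian leaves to a human packager.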

      As an application developer I don’t see why I should need to work around the deficiencies of distributions’ package management systems, especially when those deficiencies are admitted by the author. I am not convinced that I, as a developer, should bother with Debian. All of the arguments in “why Debian?” are from the perspective of an end user.

      The existence of Debian “stable” is actually a reason why I avoid Debian as a developer. Because what Debian considers “stable” comes at a price I don’t want to pay.

      In that sense I resonate with the post by Joey Hess much more than with this one. Because I really think it’s the distros that have a problem here. Not me. I can just statically link everything, put the resulting binary in a PPA, and never need to bother with any of this.

      1. 11

        Everything made a lot more sense to me when I started to think about the fact that the npm model assumes that code is being deployed by a team of full-time developers who are paid to stay on the upgrade treadmill and work through all the integration issues of pulling in different pieces that have never been tested together. In this context, you can’t afford to wait for a stable release of a whole distro; you’ve got the bandwidth and expertise and test infrastructure to handle making it work with just the pieces you know you need. But forcing the end user to be responsible for that kind of integration would be a nightmare.

        1.  

          The npm model works because it’s being deployed on top of a stable Debian system.

        2.  

          The package manager of the language I use (e.g. Cargo, npm) is able to deal with this just fine. Apt is not.

          Apt is perfectly able to cope with complex versioned dependencies, including “not compatible with libs < this version” and “this random point release actually changed the API so all the dependencies need to care about it, even though the developer claims otherwise”.

          Exactly what feature do you think apt is missing?

          1.  

            Version range restrictions (particularly upper bounds) are the default in Cargo and npm, while in apt they are only used if actually necessary. They’re not necessary if no breakage happens. That is the friction in apt for actually using semver to its full extent.

            It’s more of a policy or best practice question than a technical one, but it doesn’t matter.

        1. 2

          I wish more folks involved in packaging for Linux distros were familiar with Homebrew. Obviously not everything Homebrew does is applicable to Debian, but the ability for folks to show up and easily contribute new versions with a simple PR is game-changing. Last night I noticed that the python-paramiko package in Debian is severely out of date, but the thought of having to learn the various intricacies of contributing to Debian well enough to update it turns me right off.

          1. 13

            As an upstream dev of code that’s packaged with Homebrew, I have noticed that Homebrew is by far the sloppiest of any packagers; there is basically no QA, and often the packagers don’t even read the instructions I’ve provided for them. I’ve never tried it myself, but it’s caused me a lot of headaches all the same.

            1. 2

              I just looked at the packaging information for paramiko and I have more questions than before:

              How does this setup even work in case of a security vulnerability?

              1. 2

                How does this setup even work in case of a security vulnerability?

                Bugs tagged as security problems (esp. if also tagged with a CVE) get extra attention from the security team. How that plays out depends on the package/bug, but it can range from someone from the security team prodding the maintainer, all the way to directly uploading a fix themselves (as a non-maintainer upload).

                But yeah in general most Debian packages have 1-2 maintainers, which can be a bottleneck if the maintainer loses interest or gets busy. For packages with a lot of interest, such a maintainer will end up replaced by someone else. For more obscure packages it might just languish unmaintained until someone removes the package from Debian for having unfixed major issues.

                1.  

                  Unfortunately, Debian still has a strong ownership model. Unless a package is team-maintained, an unwilling maintainer can stall any effort to update a package, sometimes actively, sometimes passively. In the particular case of Paramiko, the maintainer has very strong opinions on this matter (I know that first hand).

                  1.  

                    Strong opinions are not necessarily bad. Does he believe paramiko should not be updated?

              1. 16

                A major reason I use Debian is that, as a user, I consider 90% of software lifecycles to be utterly insane and actively hostile to me, and Debian forces them into some semblance of a reasonable, manageable, release pattern (namely, Debian’s). If I get the option to choose between upstream and a Debian package, I will take the latter every single time, because it immediately has a bunch of policy guarantees that make it friendlier to me as a user. And if I don’t get the option, I will avoid the software if I possibly can.

                (Firefox is the only major exception, and its excessively fast release cadence and short support windows are by far my biggest issue with it as a piece of software.)

                1. 3

                  I never really understood why short release cycles are a problem for people, though I don’t use Debian because its cycles are too long. For example, the majority of Firefox’s releases don’t contain user-visible changes.

                  Could you elaborate what your problems with Firefox on Debian are? Or why software lifecycles can even be hostile to you?

                  1. 7

                    Every time a major update happens to a piece of software, I need to spend a bunch of time figuring out and adapting to the changes. As a user, my goal is to use software, rather than learn how to use it, so that time is almost invariably wasted. If I can minimize the frequency, and ideally do all my major updates at the same time, that at least constrains the pain.

                    I’ve ranted about this in a more restricted context before.

                    My problem with Firefox on Debian is that due to sheer code volume and complexity, third-party security support is impossible; its upstream release and support windows are incompatible with Debian’s; and it’s too important to be dropped from the distro. Due to all that, it has an exception to the release lifecycle, and every now and then with little warning it will go through a major update, breaking everything and wasting a whole bunch of my time.

                    1. 3

                      Due to all that, it has an exception to the release lifecycle, and every now and then with little warning it will go through a major update, breaking everything and wasting a whole bunch of my time.

                      I had this happen with Chromium; they replaced the renderer in upstream, and a security flaw was found which couldn’t be backported due to how insanely complicated the codebase is and the fact that Chromium doesn’t have a proper stable branch, so one day I woke up and suddenly I couldn’t run Chromium over X forwarding any more, which was literally the only thing I was using it for.

                      1.  

                        Ha, now I understand why I use emacs. It hasn’t changed the UX in years, if not decades.

                      2. 3

                        Because you need to invest too much of your time into upgrading. I maintain 4 personal devices with Fedora and I barely manage to upgrade them yearly. I am very happy with RHEL at work; 150 servers would be insane otherwise, even with automation. Just the investment in decent ops takes years.

                        1. 3

                          I’m with you. I update my personal devices ~weekly via a rolling release model (going on 10 years now), and I virtually never run into problems. The policies employed by Debian stable provide literally no advantage to me because of that. Maybe the calculus changes in a production environment with more machines to manage, but as far as personal devices go, Debian stable’s policies would lead to a net drain on my time because I’d be constantly pushing against the grain to figure out how to update my software to the latest version provided by upstream.

                          1. 3

                            I’ve had quite a few problems myself, mostly around language-specific package managers that break something under me. This is probably partly my fault because I have a lot of one-off scripts with unversioned dependencies, but at least in the languages I use most (Python, Perl, R, shell, etc.), those kinds of unversioned dependencies seem to be the norm. Most recent example: an update to R on my Mac somehow broke some of my data-visualization scripts while I was working on a paper (seemingly due to a change in ggplot, which was managed through R’s own package manager). Not very convenient timing.

                            For a desktop I mostly put up with that anyway, but for a server I prefer Debian stable because I can leave it unattended with auto-updates on, not having to worry that something is going to break. For example I have some old Perl CGI stuff lying around, and have been happy that if I manage dependencies via Debian stable’s libdevel-xxx-perl packages instead of CPAN, I can auto-update and pull in security updates without my scripts breaking. I also like major Postfix upgrades (which sometimes require manual intervention) to be scheduled rather than rolling.

                            1. 1

                              Yeah I don’t deal with R myself, but based on what my wife tells me (she works with R a lot), I’m not at all surprised that it would be a headache to deal with!

                          2. 1

                            For me there is an equivalence between Debian stable releases and Ubuntu LTS ones; they both run at around 2 years.

                            But the advantage (in my eyes) that Debian has is the rolling update process for the “testing” distribution, which gets a good balance between stability and movement.

                            We are currently switching our servers from Ubuntu LTS to Debian stable, driven mostly by lack of confidence in the future trajectory of Ubuntu.

                        1. 1

                          The language has been renamed to “Fennel”: https://github.com/bakpakin/Fennel/issues/13

                          1. 2

                            after a couple of unsuccessful attempts to get DJI to release the source for the GPL-licensed software it ships, he had created and published a root exploit for the company’s drones in retaliation.

                            Hahaha I love this.

                            1. 7

                              I mostly agree with this article, but I’m skeptical that you can do functional programming in a language that doesn’t have first-class functions. I think there’s a minimum bar for supporting FP style, and though you could jump through hoops to implement first-class functions in a language that doesn’t have them, by that point you’re just creating a new language hosted inside another.

                              But I also don’t think “functional programming language” is a great term, except when understood as “language which encourages functional programs to be written”. Like speed, “functionalness” is a property which can only be applied to individual programs, not whole languages. And like speed, a language can shape both the bounds of how functional the programs written in it can be as well as how much effort it takes to achieve a certain level of speed or functionalness.

                              1. 3

                                I’m skeptical that you can do functional programming in a language that doesn’t have first-class functions.

                                Yeah. If I were to rewrite/revise this article, I’d probably mention that. You need lexical closures, or some equivalent. I actually believe even subroutines (code with effects) need to be first class as well.

                              1. 20

                                I’ve used C for a while (and I now work on a C/C++ compiler) and I see it this way: you should be very reluctant to start a project in C, but there’s no question you should learn it. It mainly boils down to point 3 in the article: “C helps you think like a computer.”

                                But really, it helps you think like most computers you’re going to use. This is why most operating systems are written in C: they are reasonably “close to the metal” on current architectures. It’s not so much that this affords you the opportunity for speed (it does, since the OS or even the CPU is your runtime library), but because you’re not that far removed from the machine as an API. Need to place values in a specific memory location? That’s easy in C. Need to mix in some assembly? Also pretty easy. Need to explicitly manage memory? Also not hard (to do it well is another matter). Sure, it’s possible in other languages, but it’s almost natural in C. (And yes, not all I’ve mentioned is strict C, but it’s supported in nearly all compilers.)
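                                As a sketch of the “place values in a specific memory location” point: what C writes as *(int *)addr = 42, Python’s ctypes can imitate for illustration (the allocated buffer here stands in for a real hardware address):

```python
import ctypes

# Reserve 8 raw bytes and find out where they live in memory.
buf = ctypes.create_string_buffer(8)
addr = ctypes.addressof(buf)  # a concrete numeric address

# Treat that address as an int pointer and write through it,
# the moral equivalent of C's *(int *)addr = 42.
ctypes.cast(addr, ctypes.POINTER(ctypes.c_int))[0] = 42

print(buf.raw[:4])  # the int's byte pattern, b'*\x00\x00\x00' on little-endian
```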

                                All this doesn’t mean I like it, but that’s the reality. I’d rather see more variety in computer architectures such that something safer than C were the default. I’m always looking for the kind of machine that rejects the C model so thoroughly that C would actually be awful to use. Unfortunately, those designs tend not to exist as hardware.

                                1. 10

                                  I found that learning C was not very helpful in this regard (though I have no doubt this is partly because I was badly taught in university). What finally made it click was learning Forth. C’s attempt at a type system makes it easy to imagine that things other than bytes have reified form at runtime, whereas Forth gives you no such illusion; all that exists is numbers.

                                  When I came back to C afterwards, everything made so much more sense. Things that used to be insane quirks became obvious implications of the “thin layer of semantic sugar over a great big array of bytes” model.
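                                  That byte model can be poked at from Python’s stdlib too; struct makes the encoding of typed values explicit:

```python
import struct

# An int is just a byte pattern; "<i" = little-endian 32-bit signed int.
raw = struct.pack("<i", 1)
print(raw)  # b'\x01\x00\x00\x00'

# Reinterpret the same four bytes as a 32-bit float ("<f") and you get
# the smallest denormal instead of 1.0: a classic C "quirk" (type
# punning) that is an obvious consequence of the everything-is-bytes model.
print(struct.unpack("<f", raw)[0])  # 2 ** -149, about 1.4e-45
```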

                                  1. 6

                                    I had this same problem, but for me the thing that made everything click together was using assembler. Pointers (and other C types to some extent) are a really wonderful abstraction, even though they are “a bit” thin. And that power of abstraction hides all the machine details if one does not yet know how to look past it.

                                  2. 5

                                    The reasons you mention are really why I still use it at all. It comes with me to almost any device I feel like programming, and I do think it sometimes makes sense.

                                    For example, programming the GBA is quite easy in C, and it doesn’t really matter if someone breaks my game by entering a really long name or whatever (in fact, I love things like that.)

                                    I hope Rust will one day be my trusty companion but it’s not quite there yet.

                                    1. 3

                                      Well Rust is getting there. I sometimes think it would be fun to program such systems but when I think about using C for that I always cringe, so Rust might be a viable option in the future.

                                  1. 10

                                    Huh. I didn’t realize Java is going the Firefox/Chrome model of releases.

                                    Overall, if you have good unit tests in your software, this shouldn’t be a big deal. Update to Java x, run sbt test or gradle test or whatever, update your test-running CI container to Java x, let it run there, update your production Dockerfiles to Java x, deploy, and check your integration tests.

                                    Oh you don’t have a lot of unit tests? .. wait, you don’t have any unit tests?! … Well it will probably just work .. have fun!

                                    1. 5

                                      I don’t think it’s that straightforward for everyone. It’s hard to measure the performance impact of changes to the JVM, as well as potential obscure bugs, from just unit testing. I think most big deployments and libraries will stick to LTS releases as a result, which isn’t that bad given it’s about the old pace of updates anyway.

                                      1. 6

                                        To support this point, for a specific example of a more obscure change in a JDK that caused programs to fail, see http://www.oracle.com/technetwork/java/javase/8u20-relnotes-2257729.html - it’s a long list but note this

                                        Collections.sort now defers to List.sort

                                        Previously Collections.sort copied the elements of the list to sort into an array, sorted that array, then updated the list, in place, with those elements in the array, and the default method List.sort deferred to Collections.sort. This was a non-optimal arrangement.

                                        The consequence of changing to sorting in place (the optimal arrangement), is that programs which sorted in one thread and concurrently iterated in another are more likely to crash with this JVM than previously. Might be hard to test for that even in an integration test!
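                                        The change is specific to the JDK, but the underlying hazard, structurally modifying a collection while it is being iterated, can be shown single-threaded with a Python dict (CPython detects the easy case):

```python
counts = {"a": 1, "b": 2}
error = None
try:
    for key in counts:
        counts["c"] = 3  # structural change mid-iteration
except RuntimeError as err:
    error = err

print(error)  # dictionary changed size during iteration
```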

                                        Unit testing is dangerous because it gives inexperienced coders false confidence that changes are good.

                                      2. 2

                                        Huh. I didn’t realize Java is going the Firefox/Chrome model of releases.

                                        Well, at least Firefox has train releases + a long term release. Java doesn’t seem to have that.

                                        1. 11

                                          Didn’t the article mention Java 8 being a long term release?

                                          1. 13

                                            Yes, Java has LTS releases, currently 8 and then 11. http://www.oracle.com/technetwork/java/eol-135779.html

                                            1. 4

                                              Ah, sorry, I had missed the precise scheme. I thought 8 was LTS, as it was the last “old-fashioned” release.

                                              1. 1

                                                Note that Red Hat will support OpenJDK 8 after Oracle discontinues their support as they have with previous releases in the past; they commit to it up to October 2020: https://access.redhat.com/articles/1299013

                                        1. 2

                                          Has anyone here tried using Godot in anger yet? I’m tempted to use this instead of Unity for things, but a bit unsure of how difficult it might be to use Godot for simple prototyping (namely whether docs are complete enough).

                                          Would love to hear the pros of this in terms of usability.

                                          1. 2

                                            Godot is great. I’ve used Unity a bit but honestly I don’t think I’d ever choose it over Godot (except for maybe non-technical reasons, like the asset store).

                                            As for the docs, I found them to be pretty good, and things are reasonably discoverable in-editor too.

                                            1. 2

                                              I quit Unity years ago and switched to Godot. The docs aren’t great (they seem to be working on that) but the builtin stuff is amazing, it’s kinda weird at times but it always seems like it has the one specific thing you want. I spent days implementing things that I usually don’t find in editors just to find out that they were already implemented in Godot, just a bit hidden.

                                              GDScript is an acquired taste; I still don’t love it, but it’s grown on me enough that I can use it comfortably. For all its quirkiness (it tries to be Python but falls short), it hasn’t given me a single issue or unwanted behavior, unlike the spotty C# in Unity (although I’ve heard that’s getting better?).

                                              1. 4

                                                Looks like they support C# now via Mono. Looking forward to someone writing a wrapper for F#.

                                                1. 2

                                                  The main reason I never took Godot seriously is that they decided to invent their own language because “none of the existing ones were good enough” which tells me that the project leadership at the time were not very sensible. It’s a good sign that they’ve realized that was a mistake.

                                                  1. 2

                                                    I think that was nearly ubiquitous in game engines prior to, say, 2009–2010 or so. Unreal has its own language, UnrealScript, based on a previous in-house language, ZZT-oop. And Unity started with a DIY language, UnityScript, which had Javascript-like syntax and was sometimes called “Javascript” in the docs, but wasn’t really JS, and was finally axed just a few months ago. So it’s not that surprising Godot would also have one, even if it started a little later than those two engines.

                                                    I’m not 100% sure on the timeline, but I think Lua was one of the first third-party languages to be widely picked up for game scripting. At the time it was seen as necessary for game-scripting languages to be lightweight, small implementations that are easily embeddable, and ideally permissively licensed, and Lua fit the bill. Though now things have moved on to where embedding Mono isn’t a dealbreaker.

                                            1. 4

                                                This seems like a recreation of Cardboard, at best Gear VR, but the authors seem very, very young. Pretty impressive, and a good writeup. Obviously once you’ve reached parity with an existing tech you can still go further. I hope these people keep exploring.

                                              1. 2

                                                  The screen they’ve listed in the readme is higher-resolution than most phones used with Cardboard.

                                              1. 1

                                                Another big feature landing with Firefox 59 is the ability to register content scripts at runtime.

                                                How sad is it that this hasn’t been true from day 1?

                                                1. 7

                                                    At that time, when you turned on your computer, you immediately had a programming language available. Even in the ’90s, there was QBasic installed on almost all PCs: interpreter and editor in one, so it was very easy to enter the world of programming. Kids could learn it themselves with cheap books and magazines full of BASIC program listings. And, I think most importantly, kids were curious about computers. I can see that today the role of BASIC is taken by Minecraft. I wouldn’t underestimate it as a trigger for a new generation of engineers and developers. Add more physics and more logic into it and it will be an excellent playground, like BASIC was in the ’80s.

                                                  1. 5

                                                    Now we have the raspberry pi, arduino, python, scratch and so many other ways kids can get started.

                                                    1. 10

                                                        Right, but at the beginning you have to spend a lot more time showing a kid how to set everything up properly. I admit that that itself is fun, but in the ’80s you just turned the computer on with one switch and the environment was literally READY :)

                                                      1. 7

                                                          I think the problem is that back then there was much less competition for kids’ attention. The biggest draw was TV – TV that played certain shows on a particular schedule, with lots of re-runs. If there was nothing on, but you had a computer nearby, you could escape and unleash your creativity there.

                                                          Today there are ever-present phones/tablets/computers and mega-society-level connectivity. There’s no time during which they can’t find out what their friends are up to.

                                                        Even for me – to immerse myself in a computer, exploring programming – it’s harder to do than it was ten years ago.

                                                        1. 5

                                                            I admit that that itself is fun, but in the ’80s you just turned the computer on with one switch and the environment was literally READY :)

                                                            We must be using some fairly narrow definition of “the ’80s”, because this is a seriously rose-tinted description of learning to program at the time. By the late ’80s, with the rise of the Mac and Windows, the only way to learn to program involved buying a commercial compiler.

                                                          I had to beg for a copy of “Just Enough Pascal” in 1988, which came with a floppy containing a copy of Think’s Lightspeed Pascal compiler, and retailed for the equivalent of $155.

                                                          Kids these days have it comparatively easy – all the tools are free.

                                                          1. 1

                                                            Windows still shipped with QBasic well into the 90s, and Macs shipped with HyperCard. It wasn’t quite one-click hacking, but it was still far more accessible than today.

                                                          2. 4

                                                            Just open the web tools in your browser and you’ll have an already-configured JavaScript development environment.

                                                            I entirely agree with you on

                                                            And I think the most important thing - kids were curious about computers.

                                                            You don’t need to understand how a computer program is made in order to use one anymore; which is not necessarily a bad thing.

                                                            1. 4

                                                              That’s still not the same. kred is saying it was the first thing you saw, immediately ready to use. It was also a simple language designed to be easy to learn. Whereas you have to go out of your way to get to a JS development environment, on top of learning a complex language and concepts. More complexity. More friction. Less uptake.

                                                              The other issue that’s not addressed enough in these write-ups is that modern platforms have tons of games that treat people as consumers, with psychological techniques to keep them addicted. They also build boxes around their minds where they can feel like they’re creating stuff without learning much in the way of useful, reusable skills. Kids can get the consumer and creator high without doing real creation. So now they have to ignore all that and do the high-friction stuff above to get to the basics of creating that the old generation started with. Most won’t want to, because it’s not as fun as their apps and games.

                                                              1. 1

                                                                There is no shortage of programmers now. We are not facing any issues with not enough kids learning programming.

                                                                1. 2

                                                                  I didn’t say there was a shortage of programmers. I said most kids were learning computers in a way that trained them to be consumers rather than creators. You’d have to compare what people do on consumer platforms with things like Scratch to get an idea of what we’re missing out on.

                                                          3. 4

                                                            All of those require a lot more setup than older machines where you flipped a switch and got dropped into a dev environment.

                                                            The Arduino is useless if you don’t have a project, a computer already configured for development, and electronics breadboarding to talk to it. The Raspberry Pi is a weird little circuit board that, until you dismantle your existing computer and hook everything up, can’t do anything – and when you do get it hooked up, you’re greeted with Linux. Python is large, and it’s hard to put images on the screen or make noises with it in a few lines of code.

                                                            Scratch is maybe the closest, but it still has the “what programmers who do education think is simple” problem instead of offering simple tools for programming in a barebones environment that learners can manage.

                                                            The field of programming education is broken in this way. It’s a systemic worldview problem.

                                                            1. 1

                                                              Those aren’t even close in terms of ease of use.

                                                              My elementary school circa 1988 had a lab full of these Apple IIe systems, and my recollection (I was about 6 years old at the time, so I may be misremembering) is that by default they booted into a BASIC REPL.

                                                              Raspberry Pis and Arduinos are fun, but they’re a lot more complex and difficult to work with.

                                                            2. 3

                                                              I don’t think kids are less curious today, but it’s important to notice that back then, making a really polished program that felt professional only needed a small amount of comparatively simple work - things like prompting for all your inputs explicitly rather than hard-coding them, and making sure your colored backgrounds were redrawn properly after editing.

                                                              To make a polished GUI app today is prohibitive in terms of time expenditure and diversity of knowledge needed. The web is a little better, but not by much. So beginners are often left with a feeling that their work is inadequate and not worth sharing. The ones who decide to be okay with that and talk about what they’ve done anyway show remarkable courage - and they’re pretty rare.

                                                              Also, of course, back then there was no choice of which of the many available on-ramps to start with. You learned the language that came with your computer, and if you got good enough maybe you learned assembly or asked your parents to save up and buy you a compiler. Today, as you say, things like Minecraft are among the options. As common starting points I’d also like to mention Node and PHP, both ecosystems which owe a lot of their popularity to their efforts to reduce the breadth of knowledge needed to build end-to-end systems.

                                                              But in addition to being good starting points, those ecosystems have something else in common - there are lots of people who viscerally hate them and aren’t shy about saying so. A child just starting out is going to be highly intimidated by that, and feel that they have no way to navigate whether the technical considerations the adults are yelling about are really that important or not. In a past life, I taught middle-school, and it gave me an opportunity to watch young people being pushed away by cultural factors despite their determination to learn. It was really disheartening.

                                                              Navigating the complicated choices of where to start learning is really challenging, no matter what age you are. But for children, it’s often impossible, or too frightening to try.

                                                              I agree with what I took to be your main point, that if those of us who learned young care about helping the next generation to follow in our footsteps, we should meet them where they are and make sure to build playgrounds that they can enjoy with or without a technical understanding. But my real prediction is that the cultural factors are going to continue to be a blocker, and programming is unlikely to again be a thing that children have widespread mastery of in the way that it was in the 80s. It’s really very saddening.

                                                            1. 24

                                                              “There are a lot of CAs and therefore there is no security in the TLS CA model” is such a worn out trope.

                                                              The Mozilla and Google CA teams work tirelessly to improve standards for CAs and expand technical enforcement. We remove CAs determined to be negligent and raise the bar for the rest. There seems to be an underlying implication that there are trusted CAs who will happily issue you a google.com certificate: NO. Any CA discovered to be doing something like this gets removed with incredible haste.

                                                              If they’re really concerned about the CA ecosystem, requiring Signed Certificate Timestamps (part of the Certificate Transparency ecosystem) for TLS connections provides evidence that the certificate is publicly auditable, making it possible to detect attacks.

                                                              Finally, TLS provides good defense in depth against things like CVE-2016-1252.

                                                              1. 13

                                                                Any CA discovered to be doing something like this gets removed with incredible haste.

                                                                WoSign got dropped by Mozilla and Google last year after it came to light that they were issuing fraudulent certificates, but afaict there was a gap of unknown duration between when they started allowing fraudulent certs to be issued and when it was discovered that they were doing so. And it still took over six months before the certificate was phased out; I wouldn’t call that “incredible haste”.

                                                                1. 2

                                                                  I’m not sure where the process is, but if certificate transparency becomes more standard, I think that would help with this problem.

                                                                2. 5

                                                                  TLS provides good defense in depth against things like CVE-2016-1252.

                                                                  Defense in depth can do more harm than good if it blurs where the actual security boundaries are. It might be better to distribute packages in a way that makes it very clear they’re untrusted than to additionally verify the packages if that additional verification doesn’t actually form a hard security boundary. (E.g. rsync mirrors also exist, and while rsync hosts might use some kind of certification, it’s unlikely to follow the same standards as HTTPS. So a developer who assumed that packages fed into apt had already been validated by the TLS CA ecosystem would be dangerously misled.)

                                                                  1. 5

                                                                    This is partly why browsers are trying to move from https being labeled “secure” to http being labeled “insecure” and displaying no specific indicators for https.

                                                                    1. 1

                                                                      e.g. rsync mirrors also exist and while rsync hosts might use some kind of certification, it’s unlikely to follow the same standards as HTTPS

                                                                      If you have this additional complexity in the supply chain then you are going to need additional measures. At the same time, does this functionality provide enough value to the whole ecosystem to exist by default?

                                                                      1. 5

                                                                        If you have this additional complexity in the supply chain then you are going to need additional measures.

                                                                        Only if you need the measures at all. Does GPG signing provide an adequate guarantee of package integrity on its own? IMO it does, and our efforts would be better spent on improving the existing security boundary (e.g. by auditing all the apt code that happens before signature verification) than trying to introduce “defence in depth”.
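                                                                        To make the “signature first, then hashes” boundary concrete: once apt has verified the GPG signature on the Release file, every package only has to match a hash from that signed manifest, no matter what transport delivered it. A toy sketch in Python (the package name and manifest contents are invented, not real Debian data):

```python
import hashlib

# Toy stand-in for a signed Release/Packages manifest: once its GPG
# signature has been verified, only these hashes need to be trusted.
manifest = {"hello_1.0_amd64.deb": hashlib.sha256(b"package contents").hexdigest()}

def verify_package(name: str, data: bytes) -> bool:
    """Compare a downloaded package against the hash in the signed manifest."""
    return hashlib.sha256(data).hexdigest() == manifest.get(name)

print(verify_package("hello_1.0_amd64.deb", b"package contents"))   # True
print(verify_package("hello_1.0_amd64.deb", b"tampered contents"))  # False
```

                                                                        Note that the transport (HTTP, rsync, a USB stick) never appears in the check, which is why HTTPS adds little to integrity here.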

                                                                        At the same time, does this functionality provide enough value to the whole ecosystem to exist by default?

                                                                        Some kind of alternative to HTTPS for obtaining packages is vital, given how easy it is to break the TLS libraries on a Linux system through relatively minor sysadmin mistakes.

                                                                  1. 14

                                                                    “A perfect keyboard would look something like this”

                                                                    [an image of a keyboard with a spacebar that’s 6x as big as every other key]

                                                                    I think … maybe we could do better than that?

                                                                    1. 4

                                                                      My 1u space key suggests you might be right.

                                                                    1. 6

                                                                      I like the distinction of “tool” and “place”. It feels like a useful mental concept.

                                                                      I can see a relation to the economics of software development. Companies want their products to be places, so they capture a slice of your attention and deepen awareness of their brand. Nvidia is an example: It is just a single part of the computer hardware, yet it comes with a GUI tool and often demands attention.

                                                                      Free software can afford to become a tool. Imagine if you boot a Linux desktop and various involved projects show you a series of splash screens first: This desktop experience brought to you by systemd, dbus, dnsmasq, CUPS, NetworkManager, PulseAudio, Gnome, Mozilla Firefox, Gnome Keyring Daemon, gvfsd, and bash.

                                                                      1. 4

                                                                        It’s an interesting dichotomy, and drastically more interesting than the same old “minimalism” tirade I was expecting from reading the title of the post.

                                                                      1. 6

                                                                        Depending on how strict one is about banishing the browser, it’s possible to treat both Firefox and Chrome more or less this way and still have fast-starting browser windows. The trick is that both browsers can rapidly spawn new windows from a running instance, so if you park a minimized empty ‘master’ window for the browser off in some dark corner you can then rapidly open new windows, interact with them for however long you need them, and then close them again.

                                                                        (I have a relatively complicated setup for this to make it very low-effort to go from an URL to new window, complete with Google searches on demand and various other special URL features.)
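                                                                        A minimal version of such a wrapper might look like this Python sketch (the URL-vs-search heuristic and the browser invocation are guesses at the general idea, not the actual setup described above):

```python
# Hypothetical launcher helper: pass real URLs through, otherwise treat
# the argument as a Google search query (no URL-encoding, for brevity).
def make_url(arg: str) -> str:
    if arg.startswith(("http://", "https://")):
        return arg
    return "https://www.google.com/search?q=" + arg

print(make_url("https://example.com"))  # passed through unchanged
print(make_url("lobsters"))             # becomes a search URL

# With a master window parked, a new window appears almost instantly:
#   subprocess.run(["firefox", "--new-window", make_url(sys.argv[1])])
```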

                                                                        1. 2

                                                                          I’ve done this (for a different reason; I originally started it because I wanted to use my window manager to track open pages, which is far more effective than the browser’s pitiful tab bar) but I’ve noticed similar benefits of making the browser seem like less of a “place”.

                                                                          1. 4

                                                                            I’ve never worked remote (yet!), but my 70+ mile round-trip commute each day really sucks. Since I moved to the new office I sit about 3 feet from the restrooms, so I can hear everything. Sadly, people like it warm in the office, so the thermostat is normally set to around 77-80F. Most recently, speakers were installed in the ceiling that constantly stream Pandora Business (~$30 a month).

                                                                            I haven’t heard of any horrible remote work experiences, but everyone that I know who has tried it loves it. Hopefully I am able to at some point as well!

                                                                            1. 11

                                                                              I haven’t heard of any horrible remote work experiences

                                                                              The linked article seems to be a comprehensive list of all the things you can possibly do (or have done to you) to make remote working not work. But even then a lot of it doesn’t even seem that specific to remote working; like having three different people trying to talk to you at once–if you don’t know how to stand up and say “no” when unrealistic demands are placed on your time, you’re going to have problems whether you’re remote or in an office.

                                                                              A lot of the things in the article also come down to “people in my company don’t know how to deal with the fact that not everyone is in the office” which you can’t really do anything to fix other than “take care when deciding where to work”. If your company uses “how responsive are you to chats” as a measure of how productive you are, that’s a huge red flag and indicates some serious dysfunction in the organization.

                                                                              1. 2

                                                                                A lot of the things in the article also come down to “people in my company don’t know how to deal with the fact that not everyone is in the office” which you can’t really do anything to fix other than “take care when deciding where to work”.

                                                                                I don’t think you have to surrender to the dysfunction. You can try to explain to your manager and/or colleagues how things could be better for remote team members (over video chat, not email, so they can see and hear that you’re being constructive, not whinging).

                                                                                If you try that a few times and it doesn’t work, then, yeah… I guess one potential joy of working remote for one company is you can work remote for another :)

                                                                            1. 20

                                                                              Desktop GUI. That will come from the community too. Nowadays 90% of GUI are web or mobile. Desktop GUI is now a niche with plenty of solutions.

                                                                              Again, that “desktop is dead” song from Medium bloggers. Yes, GUI libraries should come from the community, not the creators of the language, but the languages for doing mobile GUIs are mostly defined by Apple and Google. Almost all of those 90% of the world’s GUIs are made by them too, plus Facebook, Uber, and Amazon. Web GUI is DOM+JS: also no place for Rust.

                                                                              a niche with plenty of solutions

                                                                              Mostly C/C++. Rust might be very useful here, much more useful than as a replacement for Ruby and Python for web backends (see “No good Database Abstraction Layer”).

                                                                              BTW, web apps like those written in Ruby and Python are a niche too, because 90% of websites are made on CMSes like WordPress and Drupal.

                                                                              1. 16

                                                                                Well, this is just some random suggestions from someone who has been vaguely following rust for 2 years but hasn’t used it for anything nontrivial, so I wouldn’t read too much into it. Honestly not sure why it’s on the front page.

                                                                                1. 3

                                                                                  To have the language more widely used it’s important to reach out to people who don’t use it. The Rust community tries to make opinions of those folks heard too (e.g. with the survey). People can have valuable opinions even coming from a totally different background.

                                                                                  1. 5

                                                                                    Sure, but technomancy talks about lobste.rs :).

                                                                                    Speaking from a Rust community perspective, this is a very worthwhile post.

                                                                                  2. 3

                                                                                    I’m a bit surprised that it’s exactly this one. FWIW, we’ve currently got a call for community blog posts up, and this one is part of it:

                                                                                    https://blog.rust-lang.org/2018/01/03/new-years-rust-a-call-for-community-blogposts.html