1. 1

    The RSS feed should be linked on the index page of the podcast. For a moment I thought there wasn’t one and was about to leave the page.

    1. 1

      For a moment I thought there wasn’t one and was about to leave the page.

      I somehow managed to miss it, where did you find it?

      1. 2

        If you click through to https://haskellweekly.news/podcast/ (the first link in the body of the announcement), the RSS feed is linked at the bottom of the page (https://haskellweekly.news/podcast/feed.rss).

        1. 1

          Ah right, didn’t even see that. I only found the link on the pages for the individual episodes.

          1. 1

            Yes, I’m not sure why it’s hidden down at the bottom instead of up at the top with the other two subscription links.

    1. 7

      Oh, one more question about Guix, that I was reminded of by this article: does Guix have some solution for managing “secrets”? (E.g. stuff like passwords, including WiFi ones, etc.) AFAIK Nix/NixOS hasn’t really implemented one yet, and it proves to be especially tricky and non-trivial given some fundamental design decisions.

      1. 3

        NixOps has a good mechanism to handle secrets: https://nixos.org/nixops/manual/#idm140737322338896

        1. 1

          Stupid question, but why do you want secrets storage support in the operating system? Aren’t there a zillion solutions for that out there already?

          1. 3

            WiFi passwords, user passwords (the shadow file), passwords/tokens to various remote services (like Dropbox, or to stay more GNU, say Syncthing), and maaaany more.

            Not sure how much you know about Nix/NixOS and Guix, but they’re not only an OS per se; they also (or rather, primarily) give you the ability to reproducibly specify a full configuration of a host in a single file. So that in case you lose this host, you can just re-run the config on a different one, and have a nearly perfect mirror (except your actual data, which you still have to back up another way). Also, they allow easy tweaking of this config in a “live” manner.

            Thus, I’d totally want to use Guix to fully configure users, with their passwords; also WiFi passwords, and various other services running “in the background”. In somewhat different words: if there are zillion other solutions for that, I’m interested in one that would work seamlessly with Guix (or NixOS, but we’re talking Guix in this thread).

        1. 7

          There is something worth investigating here, although I don’t understand people well enough to articulate it myself.

          I have only seen Mailpile before in the context of NixOS. nixpkgs had carried an older version of Mailpile, but when a community contributor attempted to update it, the Mailpile author requested that no package distribution carry Mailpile until its official release. After official release, when another community member requested Mailpile be included in nixpkgs, the Mailpile author requested that NixOS only ship upstream-provided packages, and that Mailpile should have control over the package distribution channel. Finally, a third community member updated Mailpile without talking to the Mailpile author.

          I wonder whether the Mailpile author is trying too hard to exert control over the projects of other communities, and in the process, becoming burnt out by the failure of others to agree with their preferences. I also wonder about the wider context of package management; as systems like Nix become more prevalent, we must start to embrace a serious notion of package-capability systems, where the ambient authority of FHS is removed entirely in favor of explicit dependencies, and recursive calls to dynamically load new packages are clearly visible.

          1. 3

            Agree totally - the maintainer might not like the state of the software, but if others find it useful then he should accept that.

            A big part of one-developer (or in this case three-developer) projects is that sometimes you just have to accept that the software is not going to be in the state you want it to be for a long time, if ever. Until that happens, if people find some use for your software you shouldn’t stand in the way of that.

            The flipside is that said users need to be keenly aware of when they are using software from understaffed projects and manage their expectations accordingly.

            1. 33

              It’s one developer (me); there haven’t been three of us for over four years now.

              The version Nix was carrying was buggy to the point of being unusable; we were getting the bug reports and their users weren’t getting updates. Their package was also not in line with how Mailpile should be installed: they’re installing it as if it were a multi-user daemon, not a single-user MUA. Imagine installing Thunderbird or Firefox or LibreOffice with the assumption that only one user could use them! That’s how Mailpile was integrated into Nix. This was not a situation that benefited anyone. So I asked them to stop.

              They didn’t, and they’re still many releases behind even our glacially-moving RC branch. If I’m reading their repo right, their integration shows that whoever packaged it persists in doing it wrong, so any documentation or support Mailpile itself provides WILL be wrong for Nix users.

              Of course, Nix won’t get the blame. Mailpile will, the project’s reputation will suffer because some random packager did things “his way”. But it’s still my fault. Ultimately, everything in this project is. Thus, burnout.

              1. 15

                This reminds me of the situation with xscreensaver. Eventually the author just included a warning that popped up to explain that users shouldn’t be using the packaged version: https://www.jwz.org/blog/2016/04/i-would-like-debian-to-stop-shipping-xscreensaver/

                1. 8

                  That’s not the first case of people packaging software messing around with it and breaking it in subtle ways. Blaming the software vendor (“Mailpile author is trying too hard to exert control over the projects of other communities”) is not only counter-productive but also misses the point.

                  Software vendors are interested in their users having the best experience with their software. That’s why they want to “exert control” - to ensure high quality. If the package manager for system X lowers the perceived quality of the software on system X, but the software works well on other systems, it’s system X’s packager’s fault.

                  Tangential: package managers in Linux are a bizarre concept for me. It’s nice that the entire system updates, including installed software, but the fact that the software doesn’t come from the vendor always surprised me (maybe because I used Windows, MacOS and Linux). Just compare the installation method for Windows and MacOS (for example here) with the Linux installation instructions for the same software.

                  1. 4

                    There are important social reasons why package maintainers exist, see http://kmkeen.com/maintainers-matter/

                    1. 1

                      With the caveat that this is written by an Arch package maintainer, and packages are contrasted mainly with app stores - not exactly the ideal model I’d have in mind. I found a new keyword, “Linux Universal Packages”, though… interesting…

                    2. 3

                      I wonder how to best communicate the nature of Nix. I’ve been using the phrase “package-capability”, and the Nix documentation itself has an excellent overview, but neither of them highlight the largest problematic nature of Nix: Almost all existing software will have some differences in behavior when built by Nix or running on NixOS, by design, in ways that software authors have historically disliked.

                      There are two layers of giving up control here, and I feel that it is worthwhile to distinguish them. First, there is the inversion of control created by nixpkgs itself, because nixpkgs has the control structure of a ports tree, which means that we build a package for a target system by interpreting the tree’s description of that package. This is an inversion of control from a self-installing package delivered by a vendor. Once inverted, this change in control means that the package manager, not the vendor, controls build configuration and installation directories.

                      In this way, NixOS is in the lineage of Gentoo, Arch, and other distros whose ports trees have enabled them to ingest large numbers of otherwise-misfit packages into a single coherent presentation. I recall that in my years of using Gentoo, there were folks who owned upstream packages which they felt were improperly packaged, but who also said that their preference would be for their packages to not be shipped at all. It is not in the nature of a ports tree to agree to such a self-defeating deal. Of course, when those upstreams then turned around and denied support and triage to those Gentoo users, they were wholly within rights to do so.

                      However, there is a second layer, as well. Packages often depend upon other packages. Nix requires that such dependencies be explicitly coded. This humble desire is unyielding, and so it is the case that many packages included in nixpkgs are only included under protest: Source must be patched, tests must be avoided, network traffic must be blocked, file metadata must be destroyed, and even ELF headers must be rewritten. The worst offenders are either emulated or placed in chroots.

                      None of that is necessary for Mailpile. Its current Nix expression is quite modest, with few modifications and nothing atypical for Python applications. It honors both of upstream’s wishes: the package will not build locally without either a modification to nixpkgs or to local user configuration, and a warning is emitted in any case.

                      1. 5

                        Dependency management is only one of the things a packager is supposed to do. All this you describe is the prerequisite for getting the software to run at all, but none of it guarantees it will run well.

                        System integration also matters: making sure packages play nice with each other. Firewalls, modular configurations, SELinux (or AppArmor or whatever the flavour of the month is), desktop “start” menus, etc. etc.

                        And updates matter a great deal, especially for security-critical software. Which Mailpile is - the web UI aside, e-mail clients in general are a major vector for malware and one of the few “server type” apps on a desktop system where remote attackers can just push their attacks into your system and hope you get pwned.

                        My reservations about Nix pertain to integration and updates. In your discussion of “the nature of Nix”, you mention neither. I dunno if it’s a coincidence, or whether this emphasis accurately characterises Nix, but those areas are exactly where Nix is failing Mailpile, IMO. If they’re not even on the radar, then that doesn’t bode well.

                        1. 2

                          From my view, Nix is a bit of a wild west when it comes to package quality; any given package is only as good as the last person who used it and cared enough about making it work well. There seems to be little control in regards to ensuring maintainers are active and keeping things up to date.

                          On the flip side, it is very easy for someone to take control of how their own application is packaged in NixOS, simply by making a pull request on GitHub.

                          1. 5

                            Saying it’s “very easy” doesn’t make it so; there is a learning curve and time investment involved, which means less time for doing other things. Fixing all of Mailpile’s bugs is also easy! Simply make a pull request on GitHub… :-P

                            1. 4

                              Ok, you are right - I think I should have said “relatively easy”, relative to some other Linux distributions.

                          2. 2

                            It is tempting to describe NixOS as that set of Nix expressions which provides an integrated system. NixOS modules are written in plain Nix but provide firewalls, modular configurations, a hardened system profile, XDG paths, etc. However, this is not the entire story.

                            An update of a Nix package is initiated by the user in control of the Nix daemon. As a result, packagers do not have direct control over which package versions are visible to users. Instead, users must decide for themselves when to update. Rather than considering certain packages to be important to security, we instead decide that all packages are important to security, and there is a contact page in case of emergency. Upstream package authors should use common reporting pathways like CVE instead; CVEs might be annoying, but they embody the reality of upstreams not knowing how to contact every downstream.

                            Without the experience of using Nix or being steeped in capability-aware security principles (POLA, confused deputies, etc.) it could be hard to grok why we use Nix.

                            Returning to Mailpile, and looking at its Repology page, it would seem that no distros are packaging the latest release candidate. Also, it would seem that every packaging distro is a ports tree. This is roughly as good as the versioning will ever get for Mailpile in the wild; compare and contrast with another Python package with a frustrated upstream, Quod Libet, whose Repology page contains many distros with good reputations and outdated versions.

                        2. 1

                          I’m not a primary Linux user. But this package manager awkwardness… is there any solution in sight?

                          1. 3

                            Linux package management works this way due to pervasive use of shared libraries, where 20 different packages may depend on a single shared library L. There’s only one copy of L installed, but in order to correctly run the latest version of all those packages, you might need up to 20 different versions of L installed. I won’t go into details, but the resulting mess is called “dependency hell”.

                            The solution is for each application to bundle its own preferred versions of each of its dependencies. There are 3 different Linux app distribution formats that support this: Flatpak, AppImage, and Snap.

                            I’ve never used Nix, but I’ve heard that Nix is supposed to fix dependency hell in a different way. However, this story is about a Nix packager breaking an app, so I guess it isn’t perfect. By contrast, Flatpak, AppImage and Snap allow the original app developer to build universal Linux binaries that run on any modern Linux system and distribute those binaries directly to customers without a middleman repackaging and compiling their software and possibly breaking it.

                            1. 1

                              Wouldn’t static linking help with this? Or is that irrelevant/impractical for some reason?

                              Also, regarding the three package managers you mentioned: aren’t those geared towards apps? Is it universally true that the only package managers that let developers bundle their own versions are the ones geared toward applications? (I guess this makes sense now in hindsight…)

                              1. 3

                                The open source project I’m working on, Curv, cannot be statically linked, both because it uses OpenGL, and because it dynamically loads shared objects at runtime. There are lots of reasons why static linking can be impossible on Linux.

                                1. 1

                                  There are lots of reasons why static linking can be impossible on Linux.

                                  What are some other reasons?

                                  1. 1

                                    I mean that there are lots of dependencies that can make static linking impossible. All you need is one of those dependencies.

                                    The fundamental issue is that if you call the dlopen function from the GNU C library, then static linking is impossible. Any dependency that calls this function will prevent you from creating an ELF statically linked executable.

                                  2. 1

                                    I’m not sure I understand enough about this. As far as I was aware, virtually anything can be treated as a compile-time constant and baked into the executable. Do you know of any good resources on the topic, or would you otherwise care to explain?

                                    1. 1

                                      If you call the dlopen function from the GNU C library, directly or via one of your dependencies, then static linking is impossible. If you use dlopen, you can still statically link static libraries into your executable, but you cannot create an ELF statically linked executable file.

                                      Here is some output from my bash shell to illustrate the difference (curv is dynamically linked, curvc is statically linked):

                                      $ file curv curvc
                                      curv:  ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=bfd93bf852da5ac346c8dc4848446743d0f8e14c, not stripped
                                      curvc: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, for GNU/Linux 2.6.32, BuildID[sha1]=5bd2494d8bd65078e53edab7ee04ab72e194c1cf, not stripped
                                      
                            2. 1

                              The free software ecosystem is highly decentralized. Most of the software available in any Linux distribution is not even specific to Linux, but can also work on various BSDs, and probably Windows and MacOS too. But compiling and installing software works somewhat differently in all of these targets, especially so for things like persistent services, boot dependencies, default configurations, etc. So the upstream typically only provides a flexible, portable source code repository (the “tarball”, as it were), and then various communities take care of curating, vetting, configuring, and packaging the upstream source into artifacts more suitable for a particular system.

                              This arrangement is an old tradition and has never been perfect. But free software as a whole can’t just decide on one ultimate way of doing things. It’s always a sprawling decentralized ecosystem. That’s like a basic political reality. The whole thing is like a continuous experiment with no single organization in charge—and so no central decision about how to package software. Network effects, technical advantages, and capitalization give rise to some leaders, like Ubuntu, Red Hat, etc, but there’s always competition and alternatives.

                              Nix itself is supposed to be a significant development in software packaging and deployment. It started as a research project in the software technology group at Utrecht University. The introductory chapter of Eelco Dolstra’s 2006 Ph.D. thesis explains the context and motivation. The free software ecosystem is a good fit for research like this, and it’s no accident that Nix developed in that context rather than Windows or MacOS.

                              It seems likely that Nix’s novel design will influence the future of software deployment. I can easily imagine a similar system becoming the standard way to create containers, for example. It has already led to at least one new independent project with the same basic principles, namely Guix and GuixSD—which seem positioned to become the official GNU project standards.

                              The free software ecosystem is a vast commons with an enormous variety of people, organizations, intentions, etc. It’s not a single commercial enterprise, and so it can never have the unitary characteristics of Windows or MacOS.

                              1. 1

                                But free software as a whole can’t just decide on one ultimate way of doing things. It’s always a sprawling decentralized ecosystem.

                                What do you think about standards organizations such as the IETF? I’m glad that there are documents such as RFC 7540 and that things such as HTTP can be implemented on any OS in any language and framework, and best of all they all interconnect. I implemented an HTTP server years ago, and Firefox developers didn’t have to tweak my server to work with their browser, nor tweak their browser to connect to my server. It just worked, and I didn’t have to ask anyone for permission.

                            3. 1

                              I think you are being too hard on yourself.

                              You’ve obviously made something people like - as the repo has thousands of stars - you just have to decide for yourself where your limit is, and you’ve done that. Kudos to you, as that can be difficult.

                              However, another side of that is learning to accept what you can’t control - you can’t control whether dumb people package your software dumbly. The best you can do is put a FAQ or README somewhere prominent explaining the situation and point everyone to that - as long as you’ve documented your side of the story, smart people will read it and understand you aren’t at fault.

                              Dumb people won’t read anything and blame you - but you can’t do anything about that except archive the repo, and I know you don’t want to do that.

                              1. 14

                                Thanks. One thing: I won’t go so far as to say they’re dumb or doing things dumbly… they are by and large volunteers, doing their best. I do think they made some incorrect assumptions, and I lacked the bandwidth/energy/time to correct that. It’s just hard.

                        1. 9

                          While not self-hosted, I have been really happy using Pushover for simple, OS-vendor-decoupled alerting.

                          1. 4

                            Funnily enough, the author of Pushover is the same guy who originally built lobste.rs.

                            I was going to go for it, but I probably only need about one or two notifications a month, and I couldn’t find a copy of the app on F-Droid. If I could’ve got the app on F-Droid I’d have probably gone for it.

                            1. 2

                              I don’t think that the app is free software, so it can’t go into the main repository of F-Droid.

                            2. 2

                              I love Pushover too! It’s silly easy to use, which makes it dead useful. For example, if I’m running a long command I like to wander from my desk to stretch my legs, so I have my shell notify me when any command running for longer than 2 minutes finishes. And I’ve used it for “monitoring” non-prod stuff, when I don’t want to bother setting up a full-blown logging / alerting stack (which is often).

                            1. 17

                              I heard a similar story about a voice service company in Montreal that had a Lisp codebase. They had high productivity with a small team, but when they had to grow they decided to switch to Java, and that decision was not made only by management, but by the system’s authors as well. The reason one of the lead developers gave me was that it was hard to hire and train, and that the use of macros made the codebase too specific and thus hard to get into. This article echoes these points almost exactly. At the studio we’re using in-house languages, and it also only works because we’re a small, stable team.

                              1. 21

                                As the size of a team and codebase grows, the ratio of time spent writing code to reading other people’s code drops. Languages that make it easy to be productive while writing code don’t necessarily make it easy to read other people’s code. If you want to have a large team working on a large codebase then you want a language optimized for reading not writing.

                                1. 4

                                  So what are those? Java doesn’t seem to be ‘optimised for reading’ to me.

                                  1. 2

                                    It’s not. Go seems to be pretty easy to read, although perhaps there are other languages even more readable. Both Lua & Elixir strike me as contenders based on their low keyword count, although I haven’t actually used them so I can’t vouch for that based on experience. See https://medium.com/@richardeng/how-to-measure-programming-language-complexity-afe4f7e75786

                                    1. 4

                                      This is probably Go’s greatest strength. If you can get used to the Haskell-like syntax, Elm is quite possibly the most readable language currently in existence, though it sacrifices a lot to get there. And it’s no accident that these two languages are so readable: Go and Elm were both designed to be easy to get started with.

                                      I personally find Rust to be very readable compared to C, C++, Java, Python, Ruby, and any language that promotes traditional OOP practices (i.e. inheritance hierarchies), primarily thanks to its abstract data types (ADTs) (i.e. Rust enums), match expressions, and trait system, but also because the borrow checker strongly encourages ownership and control flow resembling a directed acyclic graph rather than the tangled mess you often end up with in OOP-heavy languages.

                                      1. 2

                                        Don’t you mean algebraic data types? I always thought that was what Rust enums resemble.

                                        1. 1

                                          Yes, thanks for the correction. I often confuse the two names.

                                    2. 1

                                      I would imagine it’s directly tied to the amount of ambient magic in the system. Things that provide functionality for X, but are left out of the code for X “because boilerplate”.

                                  2. 8

                                    “Stable” is a key and often ignored point. In today’s world, people don’t stay at companies for longer than three or four years. This is especially true when engineers hit their late 20s and can command “senior” titles and salaries by switching jobs. That means that the language and stack you choose should make both hiring and onboarding as quick and easy as possible.

                                  1. 5

                                      I find the difference in quality of posts submitted to lobste.rs astonishing. On the one hand, there are great and insightful articles about PLT, cool new concepts like NixOS, pijul, formal methods and so many other interesting things. On the other hand there is this, where a startup CEO gives a big rant on hackernoon that only states the obvious (an attacker can do pretty much anything if he compromises your computer, duh) and gives harmful security advice.

                                    1. 4

                                        Honest question: Why are C programmers so keen on libraries being contained in one single source file? I guess it’s great if a library is simple and small, but a single file can also be very big…

                                      1. 3

                                        Probably because of the lack of a package and dependency management system.

                                        1. 2

                                            After having experience with old software and pip (specifically trying, and failing, to get OsChameleon up and running), I feel safer knowing that a tarball of my code will build and work forever given POSIX 2008 support and a good C compiler. Introducing package management systems into a programming language feels to me like a disgusting, half-baked replication of a problem that is already ideally solved for 99% of Linux systems.

                                      1. 2

                                        Great to hear they’re making progress on the e2e-encryption front! The last big thing that is missing is device cross-signing (see https://github.com/vector-im/riot-web/issues/6779). Really looking forward to the point when e2e-encryption will be default for all private rooms :)

                                        1. 3

                                          With TypeScript you can use ReadonlyArray to prevent any mutation on arrays.
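                                            For instance, a minimal sketch (ReadonlyArray comes from TypeScript’s standard type declarations; the variable names are just for illustration):

                                            ```typescript
                                            // ReadonlyArray<T> exposes only the non-mutating part of the Array API.
                                            const xs: ReadonlyArray<number> = [1, 2, 3];

                                            // Non-mutating operations still work as usual.
                                            const doubled = xs.map((x) => x * 2);
                                            console.log(doubled); // [ 2, 4, 6 ]

                                            // Mutating methods are simply absent from the type, so the compiler rejects:
                                            // xs.push(4);   // error: Property 'push' does not exist on type 'readonly number[]'
                                            // xs[0] = 99;   // error: index signature only permits reading
                                            ```

                                            Note that this is a compile-time guarantee only; at runtime it’s a plain array, so it guards against mistakes rather than malicious code.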

                                          1. 5

                                              Not trying to downplay the author’s workflow, but external DSLs based on YAML are mostly an evolutionary dead end. It’s always better to use an embedded DSL, because an embedded DSL doesn’t throw away decades of work and tool development for writing software. YAML was never meant to be a programming language or configuration format, and most tools based on it create more complexity in the long run.

                                            For some evidence of why YAML is a terrible way to configure systems I recommend folks take a look at Helm and how it layers a templating system on top of k8s YAML files. For the next revision they’re even thinking of adding Lua as a script engine because lack of programmability is limiting what people want to do with k8s.

                                            Helm has an embedded Lua engine for scripting some event handlers. Scripts are stored in charts.

                                            https://github.com/helm/community/blob/master/helm-v3/000-helm-v3.md
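                                              To sketch the contrast, here is roughly what an embedded DSL buys you, using TypeScript as the host language (the Deployment shape below is a made-up, heavily simplified stand-in for a real Kubernetes resource, just for illustration):

                                              ```typescript
                                              // A tiny typed model of a deployment; the field names are a
                                              // hypothetical, simplified subset of the real Kubernetes schema.
                                              interface Deployment {
                                                name: string;
                                                image: string;
                                                replicas: number;
                                                env: Record<string, string>;
                                              }

                                              // “Templating” becomes an ordinary function call: no text substitution,
                                              // and the type checker validates every field name and type.
                                              function makeDeployment(
                                                name: string,
                                                image: string,
                                                overrides: Partial<Deployment> = {}
                                              ): Deployment {
                                                return { name, image, replicas: 1, env: {}, ...overrides };
                                              }

                                              const web = makeDeployment("web", "nginx:1.25", { replicas: 3 });

                                              // Serialize only at the edge; YAML/JSON becomes an output format,
                                              // not the language you program in.
                                              console.log(JSON.stringify(web, null, 2));
                                              ```

                                              Typos in field names then fail at compile time instead of at deploy time, and reuse is a function call rather than a layer of text templates.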

                                            1. 3

                                              Not trying to downplay the authors workflow but external DSLs based on YAML are mostly an evolutionary dead end.

                                              I think I agree. I’ve been waiting for someone to do a configuration management system (e.g. a replacement for chef/ansible/saltstack) with a syntax more like terraform.

                                              1. 2

                                                I don’t know Terraform, but NixOS looks like a great replacement for these tools. It uses a real programming language, Nix, for configuration specification which is completely declarative. And with NixOps you can distribute the configurations onto remote servers.

                                                1. 4

                                                   NixOS and Nix need way more accessible documentation before being in a position to replace anything.

                                                  Case in point, “Nix Pills” is often given when somebody asks for a tutorial. https://nixos.org/nixos/nix-pills/

                                              2. 3

                                                external DSLs based on YAML are mostly an evolutionary dead end. It’s always better to use an embedded DSL

                                                This is just nonsense. There are heaps of cases where it’s way better to use a non-embedded DSL.

                                                1. 2

                                                  I agree, and I think projects like https://github.com/stripe/skycfg are an interesting compromise: a Python-like DSL for building configuration ASTs, e.g. for Kubernetes :)

                                                1. 6

                                                  I can’t read this article without somehow getting redirected to a “club” offering free Walmart gift cards.

                                                  1. 6

                                                    I’d suggest installing an ad blocker. Not to read this particular blog post so much as because this is a good illustration of just how rampant malicious ads are on ad networks these days.

                                                    1. 1

                                                      Yeah, I tried on my phone and got lucky.

                                                    2. 2

                                                      Ugh, sorry about that. I guess that’s just WordPress? I don’t know of a better place to host my blog.

                                                      1. 5

                                                        GitHub Pages still seems trustworthy

                                                        1. 2

                                                          Use GitHub Pages. Or DEV. Or NeoCities. Or, heck, even Medium (Medium has four different analytics packages, but even they don’t use an ad network).

                                                          1. 2

                                                            I like GitLab Pages

                                                            1. 2

                                                              I self-host Wordpress on my friend’s Dreamhost instance. Could open up a spot for you. :-)

                                                              1. 1

                                                                SDF offers comparatively cheap hosting: https://sdf.org/?tutorials#web

                                                            1. 9

                                                              This is good to know; until a few weeks ago I didn’t know they were going for a Chrome/Chromium-like model of software distribution either. It’s by no means obvious, and I think their wording here is quite deceptive. Thankfully, distros like Arch Linux already distribute a free-software build in their official package repositories [1]. I assume this will become the default in the future, just as everyone uses Chromium on Linux.

                                                              [1] https://www.archlinux.org/packages/community/x86_64/code/

                                                              1. 3

                                                                I think the goal with federated systems should be that every single person has their own home server. It can be a cheap computer, like a Raspberry Pi, that is always available and stores your data in one place. You can then access the service with your PC, laptop, smartphone, etc. Everything is under your control and you get good availability and performance. Federated systems like Mastodon or Matrix.org make this possible, so I really wouldn’t say that federation is always the “worst of all worlds”.

                                                                1. 4

                                                                  I have been using Matrix/Riot for a few months now and had about the same experience (I never used E2E though). I know that Riot still has some pain points that are quite annoying, but I love the idea behind the project and would really like to see it become more popular in the future.

                                                                  FYI, they are currently working on device key cross-signing so that you don’t have to verify every single device.