1. 9

    I decided to create a docker image that contains Zola. So, I now have a portable means to generate the site.

    As far as I remember, Zola provides official static binaries. In that light it looks like a classic case of dockeritis to me. ;)

    1. 6

      Sadly, this is not really the case. They are binaries, but they are not static, and this has actually bitten me when I tried to run Zola on CentOS 7. The relevant issue is here. It boils down to needing the C-based sassc compiler. Native Sass compilers are in the works and coming along nicely, but they are not quite there yet.

      For the time being, using Docker for this case is not so misled after all.

      1. 1

        Ugh, Sass tends to be the most annoying dependency; it’s also a big pain for the rust-lang website

        1. 1

          Oh, I see. Yeah, “compile against the oldest glibc you want to support” is a hit and miss approach for sure. Until a truly static binary is made, a container makes sense.

        2. 1

          Hey, thanks for this comment. I hadn’t even realized that there are binaries of Zola available. You are probably right, for my current use case I could just use one of the binaries.

          However, I think I still prefer my container-ized solution for a couple reasons:

          1. In general, I prefer to build (non-os layer) things from source
          2. I don’t think I made this super clear in the blog post: In the future, I intend to replace the “pre-push” hook with a CD system that will automatically build and publish the site for every commit. The CD system I intend to use will require running a container. So, it makes sense to match my development environment with the CD environment.
        1. 10

          Maybe so, but if after several decades still no unbroken client emerges then there’s still something fishy going on. It might not be the protocol (though I hear that IMAP isn’t exactly the nicest thing to work with) but maybe we just haven’t figured out what a good email workflow would look like.

          Of course that’s coupled with the problem that everyone uses email differently.

          1. 4

            I’ve been working on an “unbroken client” for a while now. From my experience, email just has so much stuff going on. Many bugs emerge when different components of a system interact in unforeseen ways, and an email client has to have many components.

            1. 1

              IMAP:

              • Good: the central principle of one command to search, taking parameters and giving back UIDs, and one command to fetch, taking UIDs and giving back the parts of the listed mails you asked for. This is simple and nicely designed: you can search and then do something with the result, like fetching it (see the sketch below).
              • Bad: IMHO, the most useful feature of IMAP after searching mail is to have notifications of new email; that is why there is the effort of keeping a TCP connection open. But this is only brought in through extensions.
              • Ugly: many small quirks, like a syntax with many edge cases, a SELECT command that gets in the way (and prevents searching a string in everything, for instance), and mail IDs that are at most 64-bit integers: too short to keep them immutable.
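
              As a concrete illustration of that search-then-fetch flow, here is a rough sketch using Python’s stdlib imaplib (server, credentials and search criteria are placeholders):

              # Rough sketch of the search-then-fetch principle described above.
              # Server, credentials and criteria are placeholders.
              import imaplib

              imap = imaplib.IMAP4_SSL("imap.example.org")
              imap.login("user", "password")
              imap.select("INBOX")                          # the SELECT step that "gets in the way"
              _, data = imap.uid("SEARCH", 'FROM "alice"')  # one command: criteria in, UIDs out
              for uid in data[0].split():
                  _, msg = imap.uid("FETCH", uid, "(BODY.PEEK[HEADER])")  # UIDs in, message parts out
                  print(msg[0][1][:80])
              imap.logout()
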
              1. 2

                I’m no big defender of IMAP, but there’s a lot of work being done to shave off some of the rough edges lately, see https://datatracker.ietf.org/wg/extra/documents/. In particular, the “64-bit mail ids” thing you mention is largely resolved by IMAP ObjectID (RFC8474).

                1. 1

                  I was wondering about this; it looks like there’s more room to use things like filesystem inodes or other methods to avoid a lookup into an index for every mail.

                  Now it is all about adoption! :)

              1. 2

                Thanks for mentioning miniserve :D I’m currently rewriting it to work on stable rust and to upgrade its version of actix-web.

                1. 1

                  I looked at that one while researching prior art; it seems good. More featureful than I’d personally like, though. For example, it lets you download a tarball, which is nice in principle, but I’d just use the CLI for the rare occasions I’d personally need that, rather than embed all that extra code and complexity into srv.

                  1. 1

                    For science, I used ab -n 1000 -c 10 http://localhost:${PORT}/ against the latest miniserve from Homebrew and srv from GH Releases. I used default options for each serving up the content of my ~/Downloads directory.

                    miniserve:

                    Server Software:
                    Server Hostname:        localhost
                    Server Port:            8080
                    
                    Document Path:          /
                    Document Length:        606474 bytes
                    
                    Concurrency Level:      10
                    Time taken for tests:   14.784 seconds
                    Complete requests:      1000
                    Failed requests:        0
                    Total transferred:      606594000 bytes
                    HTML transferred:       606474000 bytes
                    Requests per second:    67.64 [#/sec] (mean)
                    Time per request:       147.842 [ms] (mean)
                    Time per request:       14.784 [ms] (mean, across all concurrent requests)
                    Transfer rate:          40068.29 [Kbytes/sec] received
                    
                    Connection Times (ms)
                                  min  mean[+/-sd] median   max
                    Connect:        0    0   0.2      0       6
                    Processing:    43  147  36.4    139     279
                    Waiting:       42  135  26.9    130     233
                    Total:         43  147  36.4    140     279
                    
                    Percentage of the requests served within a certain time (ms)
                      50%    140
                      66%    164
                      75%    171
                      80%    176
                      90%    196
                      95%    220
                      98%    232
                      99%    239
                     100%    279 (longest request)
                    

                    srv:

                    Server Software:
                    Server Hostname:        127.0.0.1
                    Server Port:            8000
                    
                    Document Path:          /
                    Document Length:        197365 bytes
                    
                    Concurrency Level:      10
                    Time taken for tests:   10.032 seconds
                    Complete requests:      1000
                    Failed requests:        0
                    Total transferred:      197486000 bytes
                    HTML transferred:       197365000 bytes
                    Requests per second:    99.68 [#/sec] (mean)
                    Time per request:       100.323 [ms] (mean)
                    Time per request:       10.032 [ms] (mean, across all concurrent requests)
                    Transfer rate:          19223.65 [Kbytes/sec] received
                    
                    Connection Times (ms)
                                  min  mean[+/-sd] median   max
                    Connect:        0    0   0.3      0       6
                    Processing:    31  100  48.4     92     566
                    Waiting:       26   86  42.7     79     545
                    Total:         31  100  48.4     92     566
                    
                    Percentage of the requests served within a certain time (ms)
                      50%     92
                      66%    107
                      75%    118
                      80%    126
                      90%    146
                      95%    170
                      98%    241
                      99%    291
                     100%    566 (longest request)
                    

                    srv when I turned off console logging:

                    Server Software:
                    Server Hostname:        127.0.0.1
                    Server Port:            8000
                    
                    Document Path:          /
                    Document Length:        197365 bytes
                    
                    Concurrency Level:      10
                    Time taken for tests:   8.793 seconds
                    Complete requests:      1000
                    Failed requests:        0
                    Total transferred:      197486000 bytes
                    HTML transferred:       197365000 bytes
                    Requests per second:    113.73 [#/sec] (mean)
                    Time per request:       87.931 [ms] (mean)
                    Time per request:       8.793 [ms] (mean, across all concurrent requests)
                    Transfer rate:          21932.85 [Kbytes/sec] received
                    
                    Connection Times (ms)
                                  min  mean[+/-sd] median   max
                    Connect:        0    0   0.2      0       4
                    Processing:    21   87  32.3     82     257
                    Waiting:       15   75  29.3     71     223
                    Total:         21   87  32.3     82     257
                    
                    Percentage of the requests served within a certain time (ms)
                      50%     82
                      66%     96
                      75%    105
                      80%    111
                      90%    132
                      95%    150
                      98%    170
                      99%    187
                     100%    257 (longest request)
                    

                    And then I tested with a ~130 MB file ab -n 100 -c 10 http://127.0.0.1:${PORT}/Youre_Not_Alone.zip:

                    | metric                                      | miniserve  | srv        |
                    |---------------------------------------------|------------|------------|
                    | Requests per second [#/sec] (mean)          | 11.37      | 17.16      |
                    | Time per request [ms] (mean)                | 879.423    | 582.788    |
                    | Time per request [ms] (mean, all con. req.) | 87.942     | 58.279     |
                    | Transfer rate [Kbytes/sec] received         | 1441324.06 | 2174943.64 |
                    
                    

                    Cool!

                    1. 1

                      Does miniserve not log to console? [Comparing them with default options shows miniserve as having better tail latency unless you disable console logging for srv.]

                      1. 1

                        It doesn’t produce logs at all, IIRC.

                        2. 1

                          srv’s higher RPS is definitely due to the fact that I’m generating a very small amount of HTML. I bet if someone rewrote it in Rust it might get a tiny bit faster.

                          1. 2

                            Certainly true. Look at the transfer rate, though, especially for the single file test.

                            1. 1

                              Especially? It doesn’t matter for the rest because the size per response is much higher for miniserve.

                              Anyways, that’s pretty surprising. You have a pretty fast disk; I’m jealous. Maybe srv is using better syscalls? Or maybe you ran miniserve first, and that warmed up all the I/O cache.

                    1. 8

                      This might be a good place to ask:

                      • What is the BSD equivalent of the ArchWiki?
                      • What is the usability tradeoff between Docker and Jails?
                      • In what ways (if at all) can users contribute their own ports and make them available to other users?
                      • How is BSD for gaming these days?

                      These are genuine questions because I have pretty little clue about the BSD world. Would be cool if somebody with experience could share some insight. :)

                      1. 6

                        What is the BSD equivalent of the ArchWiki?

                        The handbook (which, incidentally, is very good).

                        In what ways (if at all) can users contribute their own ports and make them available to other users?

                        There’s not terribly good tooling for unofficial ports. They can be done, and have been done, but generally this will take the form of a whole alternate ports tree with a couple of changes.

                        How is BSD for gaming these days?

                        The main person working on this is myfreeweb (actually, I think he uses lobsters, so maybe he can say better than I can); see here. The answer is ‘not great’, but also close to ‘quite good’. There is excellent driver support for NVIDIA and AMD GPUs. You can run most emulators natively. However, if you want to use Wine, you will probably have to compile it yourself, because the versions in ports come in 32-bit-only and 64-bit-only varieties (no, they can’t co-exist), and you almost certainly want the version that can run both 32-bit and 64-bit apps. There is a Linux emulator, but it can’t run Steam (I did some work to try to get it running a while back, but it needs some work on the kernel side, which is too much commitment, so I gave up on it), limiting its usefulness.

                        1. 2

                          Thanks! Do you know if there’s an easier way to contribute to the handbook than to write suggestions to the mailing list? Do you know if small user-to-user tips (for instance for rather specific hardware fixes) are allowed on the handbook? If not, where would those end up?

                          1. 2

                            @myfreeweb (tagging them like this sends an email notification). If you’re replying to that person, leave off the @ so they don’t get hit with two emails, one for the reply and one for the @ mention.

                            1. 1

                              I’m not really working on gaming all that much, the last really “gaming” thing I did was a RetroArch update (that still didn’t land in upstream ports…) For gaming, I usually just reboot into Windows.

                            2. 2

                              How is BSD for gaming these days?

                              I’d say the biggest effort is being undertaken by the openbsd_gaming community. A good starting point is the subreddit; then you can follow the most active members on Twitter or Mastodon to get more updates.

                              1. 2

                                In what ways (if at all) can users contribute their own ports and make them available to other users?

                                1. You can submit a new port, e.g. https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=238584
                                2. You can update an existing port, e.g. https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=241769
                                3. You can set up a custom category that contains your additions to the ports tree (I have not tried this…), e.g. https://www.amoradi.org/2019/11/12/maintaining-port-modifications-in-freebsd.html
                                4. You can set up a poudriere build system to build your own packages from the ports tree, e.g. with non-standard settings (it’s harder to pronounce than do…), e.g. https://www.digitalocean.com/community/tutorials/how-to-set-up-a-poudriere-build-system-to-create-packages-for-your-freebsd-servers then use portshaker to merge it with the standard tree within the poudriere hierarchy, e.g. https://github.com/hartzell/freebsd-ports.

                                I’ve found that building packages for the ports that I want, the way that I want, and then installing/upgrading them with pkg is a much smoother experience than installing directly from ports and then upgrading using one of the ports management tools.

                                1. 1

                                  Thanks for the answer. You seem quite knowledgeable. How do people share build scripts for software that may not be shipped in binary form but that you can build yourself locally if you have the data? I’m thinking about some NVIDIA projects (like OptiX) or some games. Basically, is there an AUR for FreeBSD anywhere? I checked your links and obviously ports can be shared amongst users but I’m just curious whether there’s an index for those user-contributed ports anywhere.

                                  1. 2

                                    Sorry for the delay, missed your reply/question.

                                    I’m not familiar with AUR, but I assume you mean the Arch User Repository.

                                    I don’t know of anything similar in the BSD world. At the least-automatically-shared end, people write software that works on the BSDs and distribute it with instructions on what it needs. At the most-automatically-shared end, people contribute “ports” to the FreeBSD Ports tree. Automation-oriented folks like me end up with their own collection of ports that explicitly rely on the FreeBSD Ports tree. I don’t know of anything that formalizes either the discovery of, or dependence on, other people’s personal ports. It hasn’t ever been an issue for me.

                                    1. 1

                                      Alright, makes sense. Thanks

                                2. 1

                                  I ran into a thing just yesterday: jails (can) get an IP of their own, which seems to be automatically added to the host’s interface, but they do not (and AFAIK cannot) get their own MAC. This is FreeNAS for me (with iocage), and it is a little annoying because my FritzBox router seems to have a problem with port forwards now. But maybe I’m wrong and just haven’t solved it properly.

                                  In this case, with Docker it would at least be possible to just use ports/EXPOSE and the host’s main IP.

                                  Apart from that I’ve never encountered problems with jails and found them really smooth to work with.

                                  1. 2

                                    You can give a jail a whole virtual network interface (epair/vnet) and then bridge it or whatever. You can also just use the host’s networking if you don’t need to isolate networking at all for that jail.

                                    1. 1

                                      Thanks, that’s a good term to search for. I’m just a little surprised it (suddenly) doesn’t work anymore in my setup. My research so far has proven inconclusive, with a lot of people saying that it can’t be done (in version X).

                                1. 5

                                    Seems like Docker is still considered problematic on FreeBSD, and there doesn’t seem to be CUDA (or at least I wasn’t able to find it), which might make ML quite a bit harder. Also, most CIs do not appear to have BSD support, which makes it harder to ensure software works on that platform; likewise, if you use BSD, more of your software will be untested.

                                    I want to like BSD stuff, but I don’t feel like I’d be getting any benefits from switching. I like running the very newest hardware and using the current stable upstream releases, and from a quick look at ports, it doesn’t seem to support this style of using computers. I guess I’m gonna stay on Arch Linux for the foreseeable future. :)

                                  1. 3

                                    It’s a pity Linux (“is not Unix”) is such an island in Unix-land. A lot of software is developed on Linux and it is getting harder and harder to port that to other Unixes like FreeBSD and OpenBSD.

                                    1. 2

                                      Linux is more like a continent, and the *BSDs are small islands…

                                      At least when I looked at FreeBSD almost a decade ago, their preference was to have a good binary interface to Linux software, so that FreeBSD could profit from it.

                                      1. 1

                                          OK, but I use Linux, and I would like to be able to use the parts of Linux that aren’t portable to e.g. BSD. People complain that systemd is Linux-only, but that’s because it uses cgroups, and it uses cgroups for a really good reason.

                                    1. 13

                                      The article fails to account for reserved instance pricing, the sustained use discount, the free tier, and spot or pre-emptable instances.

                                      Pricing on AWS/GCP is complex, but you can save a lot of money if you’re careful.

                                      Though to be fair that complexity is one way they make money. You could save a lot of money, but it’s all too easy to overlook something.

                                      1. 21

                                        Hi, OP here.

                                        • I believe I am taking Google’s sustained use discount into account
                                        • I haven’t included the free tier because it is marginal and I think most organisations will exhaust it fairly quickly
                                        • I think spot and preemptable instances are not a general product but a specialist one: only some applications of virtual machines can tolerate getting evicted

                                        I do discuss the issue of complexity later on. I don’t think it normally works to the advantage of the customer.

                                        My intuition (and experience!) is that most real world AWS customers get bamboozled by the incredible complexity of pricing (especially when it’s presented in non-human readable units like “0.034773 per vCPU hour”) and wind up paying far, far over what the going rate of renting a computer should be.
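
                                          To make that concrete, here is the kind of back-of-the-envelope arithmetic I mean (the 0.034773 figure is the one quoted above; 730 is roughly the number of hours in a month):

                                          # Back-of-the-envelope: turn an hourly per-vCPU rate into monthly numbers.
                                          rate_per_vcpu_hour = 0.034773
                                          hours_per_month = 730
                                          for vcpus in (1, 4, 8):
                                              monthly = rate_per_vcpu_hour * hours_per_month * vcpus
                                              print(f"{vcpus} vCPU ≈ {monthly:.2f} per month")
                                          # 1 vCPU ≈ 25.38, 4 vCPU ≈ 101.54, 8 vCPU ≈ 203.07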

                                        1. 5

                                          Hey OP, could you add Hetzner Cloud servers? Should be a whole lot cheaper than anything else you’ve got on there if I’m seeing this correctly.

                                          1. 2

                                            There’s also a built-in Terraform provider https://www.terraform.io/docs/providers/hcloud/index.html

                                            1. 2

                                              Agree, it seems to be 10 Euro/month for 8GB in the cloud plan.

                                              I’m running a root server with 32GB of memory and 2TB hard disk at Hetzner for ~30 Euro / month (from the Serverbörse). I do not know about their support at all, but I am quite sure that from a US IT company I could only expect automated mails, anyway. So Hetzner cannot be any worse there.

                                                Of course, a root server and cloud hosting are two totally different beasts, but in my humble opinion it’s a choice the US-centric tech community too often does not even consider. The mantra is always that the application has to be horizontally scalable.

                                              1. 1

                                                  It should just be noted that Serverbörse is usually based on desktop machines and the like, often older CPUs and servers, so you might not want to rely on that if your application stack is considered mission-critical.

                                                  As for the cloud vs classic servers, it’s a different beast completely, yes. A lot of the internet wouldn’t be alive if you had to pay a Linux admin to configure your servers, deploy your apps, and pay attention to traffic, script kiddies, etc. But not having a lot of the internet online could perhaps be considered a good thing, eh?

                                            2. 5

                                              On preemptible/spot, they both provide /liberal/ shutdown warnings; it is possible to run almost anything aside from long-lived connection hosts (e.g. websockets) or extremely stateful applications like databases. Use cases that don’t fit spot are approaching a minority in 2020 with current infrastructure trends.
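
                                              To illustrate how much warning you get: EC2, for example, announces a spot interruption about two minutes in advance through the instance metadata service, so a small watcher can kick off a graceful drain. A rough sketch (the drain step is a placeholder, and IMDSv2 token handling is omitted):

                                              # Poll the EC2 spot interruption notice; it 404s until an interruption
                                              # is scheduled. drain() is a placeholder for app-specific shutdown work.
                                              import time
                                              import urllib.error
                                              import urllib.request

                                              NOTICE_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

                                              def drain():
                                                  print("interruption scheduled: stop accepting work, flush state, deregister")

                                              while True:
                                                  try:
                                                      with urllib.request.urlopen(NOTICE_URL, timeout=1) as resp:
                                                          if resp.status == 200:
                                                              drain()
                                                              break
                                                  except urllib.error.HTTPError:  # 404: nothing scheduled yet
                                                      pass
                                                  time.sleep(5)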

                                              Re: DigitalOcean, I did a migration a few years back where AWS came out vastly cheaper than the equivalent configuration on DO mostly due to AWS instance reservations, which are a trivial factor to plan for when you’re already planning a large migration.

                                              The one area I couldn’t possibly defend the cloud providers on is bandwidth pricing. All other costs are a footnote compared to it for any account doing high-traffic serving.

                                              1. 10

                                                Not an expert on this, but while it seems it is possible to run lots of things on hosts that may shut themselves down automatically, actually doing so will cost you more developer and devops time instead of just paying a little more for hosting. It seems likely that this is time you want to spend anyway, as part of making an application more resilient against failure, but it still makes the situation yet more complicated, and complexity usually serves Amazon more than the customer. (And I have a hard time believing that databases are approaching a minority use case with current infrastructure trends. ;-)

                                                1. 3

                                                  Bandwidth pricing is the primary lock-in mechanism cloud providers have. It should be seen as anti-competitive.

                                                  1. 4

                                                    I don’t understand what you mean. Are you saying bandwidth costs of migrating data to another cloud would be prohibitive? Or something else?

                                                    1. 3

                                                      Personal example: I started to develop an application with AWS services (Lambda, SQS, EC2, S3). Later I changed it to an application for a “normal” server. I still wanted to store data to S3, but the cost to download it from there for analysis is just ridiculous. So the choice was to store to S3 and run on EC2 or not to store to S3. (I decided against S3).
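
                                                      For a sense of scale, a rough ballpark (assuming the commonly cited ~$0.09/GB list price for S3 data transfer out to the internet; check current pricing for your region):

                                                      # Ballpark S3 egress cost for pulling a dataset out for analysis.
                                                      egress_per_gb = 0.09  # assumed USD/GB transfer-out; varies by region and tier
                                                      for gb in (100, 1_000, 10_000):
                                                          print(f"{gb:>6} GB out ≈ ${gb * egress_per_gb:,.0f}")
                                                      # 100 GB ≈ $9, 1 TB ≈ $90, 10 TB ≈ $900, and that is per download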

                                                      1. 4

                                                        What I mean is that data transfers between services in the same cloud x region are much cheaper than data transfers between clouds. So it’s more expensive to store logs in AWS and analyze them with GCP, compared to just analyzing them in AWS. You can’t take advantage of the best tools in each cloud, but are forced to live in one cloud, creating a lock-in effect.

                                                        If there was a rule that bandwidth prices must be based on distance, not whether the endpoints are within the same cloud, we’d see more competition in cloud tools. Any startup could create a really good logs-analysis tool and be viable, for example. This rule runs into some legitimate issues though. For example, if a cloud provider has custom network hardware and fiber between their own data centers, the cost of moving data between their zones might be much cheaper than sending it over the public internet to another cloud provider. Moreover, many different cloud services are co-located in the same data center. So it’s much cheaper to analyze logs using a service that already exists where the data is than to ship it off to another cloud.

                                                        The problem is big cloud vendors have little incentive to let users take their data out to another cloud. It’s going to be a market where only a few big players have significant market share, at this rate.

                                                        1. 2

                                                          Okay I see what you’re saying now. And when bandwidth costs encourage using more services in one cloud, you become more entrenched and entangled to the services of that particular cloud, locking you in even more.

                                                          1. 1

                                                            I agree completely on the bandwidth pricing. At this point, I think this should be considered common public infrastructure, like roads etc. Yes, I understand that there are costs to providing it all and that some companies have invested in infrastructure privately; all I’m saying is that the traffic should be “free” for the consumers (and even for the business sector that the article OP is mentioning: companies hosting WordPress, a timesheet, or some similar small apps like that without major engineering teams).

                                                      2. 2

                                                        Yep, it’s definitely true for existing apps. Converting a large stateful app to the new world is a nightmare, but you get so many benefits; not least, the problems of preemptibility and autoscaling are basically identical. The big Django app used to require 16 vCPUs to handle peak, so that’s how it was always deployed. Now it spends evenings and non-business days idling on a single t2.micro.

                                                        In the case of a typical Django app though, if you’re already using RDS then the single most typical change is moving its media uploads to S3. It’s a 15 minute task to configure the plugin once you’ve got the hang of it, but yep, for a single dev making the transition for a single app, that probably just cost you a day

                                                      3. 3

                                                        The one area I couldn’t possibly defend the cloud providers on is bandwidth pricing. All other costs are a footnote compared to it for any account doing high-traffic serving.

                                                        Thanks, I came here to say that. The article didn’t even factor in bandwidth/network costs, which matter for both EC2 and S3 (not as familiar with the other cloud providers).

                                                        1. 2

                                                          Anecdotally, from friends who work in the AWS machine: once you get to real (financial) scale with AWS, think “7 digits a month” or so, you’ll find Amazon is extremely happy to negotiate those costs.

                                                          Fiber ain’t free, but I wager that the profit margin is probably highest there.

                                                      4. 1

                                                        It is those weird units that prevent me, as an individual developer, from even considering them, when realistically it should be easy to understand the pricing on this sort of thing.

                                                    1. 19

                                                      I’ve worked on an open source project. Not so tiny, it used to be preinstalled with several major distros, and is still quite popular.

                                                      Early 2018 we had a major CVE, with remote code execution. We had a patch ready within 8 hours of discovery, and had it tested and in our official releases within a few days.

                                                      Debian took over a month to patch it (and continued using an old version with major bugs, only patching security issues themselves). And they were the fastest. Within Alpine, 3.7 was the first release to ship the fix, and that took an eternity. Previous Alpine versions (at the time still officially supported) never got the patch.

                                                      Now we’re moving towards snap/flatpak for desktop and Docker for server, and building our own packages and bundles, because distro maintainers are basically useless: they always ship ancient broken versions, users come to us to complain about stuff being broken (while distros refuse to ship bugfixes or versions from this decade), the maintainers are never reachable, and even security updates are shipped at glacial speed.

                                                      Honestly, distro maintainers are a massive security risk, and after this experience, I’m kinda mind blown.

                                                      1. 10

                                                        As an Arch packager, I can’t help but feel a little bit offended by what you said there. >:(

                                                        1. 12

                                                          Arch is actually one of the few distros where this issue never existed, but that’s because Arch, being rolling release, actually just uses our upstream sources and updates frequently and reliably.

                                                        2. 8

                                                          because distro maintainers are basically useless

                                                          That’s quite an offensive statement.

                                                          1. 6

                                                            If major software that’s preinstalled and in the default start menu of Kubuntu is so outdated that it has remotely exploitable bugs, months after developers have released patches for all version branches, including the one used by Debian/Ubuntu/etc, then how can you really trust the packages installed on your system?

                                                            How many programs from the repos do you have installed which are not that common, or are complicated to package? Are you sure they’re actually up to date? Are you sure there are no vulnerabilities in them?

                                                            Ever since this, I can’t trust distros anymore.

                                                            1. 3

                                                              And that makes distro maintainers basically useless?

                                                              1. 8

                                                                Yes. If there’s no practical value add, that statement is true.

                                                                It’s harsh to take, but yes, it’s okay to ask groups that insist on their status, especially in a role prone to gatekeeping, to stand for their value.

                                                                1. 3

                                                                  If you can’t trust software distributed by your distro to be up-to-date and safe, what use does it have then? Stability is never more important than safety.

                                                                  The whole point people use distributions, and especially reputable ones, is because they want to ensure (a) stuff doesn’t break, and (b) stuff is secure.

                                                                  1. 2

                                                                    If you can’t trust software distributed by your distro to be up-to-date and safe, what use does it have then?

                                                                    Of course packagers try to keep stuff up to date and secure, but a) things move fast, and spare time and motivation can be at a premium; and b) there’s too much code to audit for security holes.

                                                                    distro maintainers are basically useless

                                                                    Come on now… I assure you, you’d be pretty upset if you had to build everything from source.

                                                                    1. 4

                                                                      Of course packagers try to keep stuff up to date and secure, but a) things move fast, and spare time and motivation can be at a premium; and b) there’s too much code to audit for security holes.

                                                                      And this is where @arp242’s sentiment comes from. “In a world where there is a serious shortage of volunteers to do all of this, it seems to me that a small army of ‘packagers’ all doing duplicate work is perhaps not necessarily the best way to distribute the available manpower.”

                                                                      1. 1

                                                                        In a world where there is a serious shortage of volunteers

                                                                        This is false. All too often it is difficult to find good software to package. A lot of software out there is either poorly maintained, or insecure, or painful to package due to bundled dependencies, or has hostile upstreams, or it’s just not very useful.

                                                                        It’s also false to imply that all package maintainers are volunteers. There are many paid contributors.

                                                                      2. 2

                                                                        Come on now… I assure you, you’d be pretty upset if you had to build everything from source.

                                                                        I don’t necessarily have to — the distro can provide a clean base with clean APIs, and developers can package their own packages for the distro. As some operating systems already handle it.

                                                              2. 3

                                                                Various distributions, including Debian, backport security fixes to stable versions even when upstream developers don’t do it. It’s not uncommon that the security fixes are released faster than upstream’s.

                                                                Your case is an exception. Sometimes this can be due to applications being difficult to package, difficult to patch, or low in popularity.

                                                                Besides, it’s incorrect to assume that the package maintainer is the only person doing security updates. Most well-known distributions have dedicated security teams that track CVEs and chase the bugs.

                                                                1. 1

                                                                  We already provide backported security fixes as .patch files simply usable with git apply, and provide our own packages for old and recent branches. It’s quite simple to package, too. Popularity-wise, well, it was one of the preinstalled programs on Kubuntu, and it is in Kubuntu’s start menu (not anymore recently, but on older versions it still is).

                                                                  The fact that many distro maintainers still take an eternity to apply patches, and sometimes never apply them at all, makes relying on distro packages quite an issue. I don’t trust distro maintainers anymore, not after this.

                                                                2. 3

                                                                  Honestly, distro maintainers are a massive security risk, and after this experience, I’m kinda mind blown.

                                                                  I think this is mostly because you have a one-sided experience of this and it’s most likely a bit more nuanced and down to several factors.

                                                                  One of them being that the CVE system is broken and hard to follow. How did you disclose and announce the CVE and fix? Did the patches need backports for the given releases, and were those provided? I don’t know the CVE number, so this is hard to follow up on. But the best approach is to announce on a place like oss-sec from Openwall, and it should be picked up by all distribution security teams.

                                                                  The other side of this, which is what distribution maintainers see but few upstreams realize, is that patching dependencies is where most of the work is done. Distributing your app as a snap/flatpak works great if you also patch the dependencies and keep track of security issues with those dependencies. This is where most upstreams fail, and this is where distribution maintainers and the distro security teams improve the situation.

                                                                  1. 1

                                                                    The other side of this, which is what distribution maintainers see but few upstreams realize, is that patching dependencies is where most of the work is done. Distributing your app as a snap/flatpak works great if you also patch the dependencies and keep track of security issues with those dependencies.

                                                                    That’s why, if you ever build such images yourself, you need to automate it, run it in CI, update those dependencies at least daily, and generate a new image whenever new dependencies are available. Obviously, you need automated tests in your build procedure to ensure everything still works together, as sometimes some dependencies break important stuff even in patch releases.
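
                                                                    As a rough sketch of what that automation can look like (image names and the test command are made up; run it from cron or a CI schedule):

                                                                    # Pull the base image, rebuild, run the tests, and only promote the new
                                                                    # image when they pass. Names and the test command are placeholders.
                                                                    import subprocess

                                                                    def rebuild():
                                                                        subprocess.run(["docker", "pull", "debian:stable-slim"], check=True)
                                                                        subprocess.run(["docker", "build", "-t", "myapp:candidate", "."], check=True)
                                                                        tests = subprocess.run(["docker", "run", "--rm", "myapp:candidate", "make", "test"])
                                                                        if tests.returncode == 0:
                                                                            subprocess.run(["docker", "tag", "myapp:candidate", "myapp:latest"], check=True)
                                                                            subprocess.run(["docker", "push", "myapp:latest"], check=True)

                                                                    if __name__ == "__main__":
                                                                        rebuild()  # e.g. scheduled daily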

                                                                    How did you disclose and announce the CVE and fix? Did the patches need backports for the given releases, and were those provided?

                                                                    We provided patches for every version distros used, as nice patch files that could directly be applied with git apply, and in addition to the more common ways, we also directly contacted the package maintainers for our package for the important distros via email or instant messaging.

                                                                    In general, personally, I’m not a fan of the stable model anyway. We’ve done great work to ensure the software stays 100% binary compatible for all its protocols since 2009, and we support every supported version of Debian and Ubuntu even with our absolutely newest builds; yet, in the end, it’s the distro maintainers who not only ship outdated versions (apparently some users prefer buggy old versions) but also take ages to apply security fixes.

                                                                    1. 2

                                                                      That’s why, if you ever build such images yourself, you need to automate it, run it in CI, update those dependencies at least daily, and generate a new image whenever new dependencies are available. Obviously, you need automated tests in your build procedure to ensure everything still works together, as sometimes some dependencies break important stuff even in patch releases.

                                                                      Which, again, few upstreams do, and they surely do not keep an eye on this at all. You sound like a competent upstream, and it’s nice when you encounter one :)

                                                                      We provided patches for every version distros used, as nice patch files that could directly be applied with git apply, and in addition to the more common ways, we also directly contacted the package maintainers for our package for the important distros via email or instant messaging.

                                                                      And this is how you should proceed. I would, however, also contact the linux-distros list if it’s a widely used piece of software that multiple distributions package and the CVE is critical enough. https://oss-security.openwall.org/wiki/mailing-lists/distros

                                                                      In general, personally, I’m not a fan of the stable model anyway. We’ve done great work to ensure the software stays 100% binary compatible for all its protocols since 2009, and we support every supported version of Debian and Ubuntu even with our absolutely newest builds; yet, in the end, it’s the distro maintainers who not only ship outdated versions (apparently some users prefer buggy old versions) but also take ages to apply security fixes.

                                                                      The work is appreciated, but I’ll still urge you not to let one bad experience ruin the whole ordeal. Distribution security teams are probably among the least-resourced teams, and sometimes things do fall between two chairs.

                                                                      1. 3

                                                                        The work is appreciated, but I’ll still urge you not to let one bad experience ruin the whole ordeal. Distribution security teams are probably among the least-resourced teams, and sometimes things do fall between two chairs.

                                                                        But given that the main argument of distros is security, that statement flies directly in the face of their promises.

                                                                        1. 2

                                                                          But given that the main argument of distros is security, that statement flies directly in the face of their promises.

                                                                          I don’t think it’s the main argument, but surely one of them. If you want to be completely covered, you need a well-paid team able to respond. You won’t get this with a community-based distribution; we are unpaid volunteers, just like most upstreams. You’ll have to use something backed by a paid team if you expect premium service and full coverage.

                                                                          Anything else is only on a best-effort basis. The CVE system is sadly hard to navigate, ingest, and process. Some things are going to bubble up faster, and some things are going to be missed.

                                                                          1. 2

                                                                            I have absolutely no issue with all your statements, but it is a cornerstone argument.

                                                                            I’m fine with community distributions, if they own it, and agree that paid distros are a good way to go. RHEL licenses are actually worth their money.

                                                                            I disagree with the reading of best-effort, though, because it goes both ways. If your work is impacting others, either by causing them more support requests or by slowing down their iteration speed, you need to make sure you don’t add undue labor.

                                                                  2. 3

                                                                    With this attitude, which a lot of developers seem to have nowadays, it doesn’t make sense to have your software included in distributions. As a packager I’d call this a hostile upstream… Just distribute it as a flatpak and/or snap and be done with it.

                                                                    Relevant here may be a blog post from an upstream fully embracing the distribution instead of fighting it: https://www.enricozini.org/blog/2014/debian/debops/

                                                                    1. 3

                                                                      It allows me to rely on Debian for security updates, so I don’t have to track upstream activity for each one of the building blocks of the systems I deploy.

                                                                          That’s exactly what I used to believe in, too, but after this experience, the facade has cracked. I can deal with 90% of my packages being years out of date and full of bugs because the distro wants to be stable and refuses to apply bugfixes or update to newer versions, but if security updates aren’t reliably applied even when they have a CVE (and Debian just ignores issues entirely if they have no CVE), then how can one still trust the distro for security updates? Having a remotely exploitable unauthenticated DoS, if not even RCE, in publicly facing software for 30 days is absolutely not fine.

                                                                      As a packager I’d call this a hostile upstream… Just distribute it as a flatpak and/or snap and be done with it.

                                                                          We actively maintain all version branches, and we even provide backported security patches as nice little .patch files for all the major.minor.patch releases Debian/Ubuntu still use. You can build it nice and simple; you just have to apply one little patch. It’s not like we’ve been actively hostile, so what more should we have done, in your opinion?

                                                                      1. 2

                                                                        how can one still trust the distro for security updates?

                                                                            Fair enough, if they are not applied. I personally know at least one Debian package maintainer (not me, I don’t like Debian) who takes excellent care of their packages, including in the stable releases. So it may depend on the maintainer. But maybe that is your point, that there is no universal standard for maintainers…

                                                                        what more should we have done, in your opinion?

                                                                        I don’t know this specific case. There are a number of other ‘historical’ cases where packagers gave up on packaging ‘upstream’ software, e.g. https://www.happyassassin.net/2015/08/29/looking-for-new-maintainer-for-fedora-epel-owncloud-packages/. I also wrote a blog post about it in 2016: https://www.tuxed.net/fkooman/blog/owncloud_distributions.html I guess the best one can do is follow these discussions and if possible make it easier for distributions to package the software. Especially the ownCloud case back then bugged me a lot. But as you can see from some other people in those discussions, we just gave up on ownCloud and used something else instead…

                                                                  1. 12

                                                                    From https://about.gitlab.com/blog/2019/10/10/update-free-software-and-telemetry/

                                                                    In order to service the needs of GitLab.com and GitLab Self-Managed users who do not want to be tracked, both GitLab.com and GitLab Self-Managed will honor the Do Not Track (DNT) mechanism in web browsers. This means that, if you turn on Do Not Track in your browser, GitLab will not load the JavaScript snippet. The only downside to this is that users may also not get the benefit of in-app messaging or guides that some third-party telemetry tools have that would require the JavaScript snippet. Overall, we believe these changes will continue to help us achieve results in improving our product experience for users, while also giving choice to users who only want free software. Please let us know your thoughts.

                                                                      I’m not sure what GitHub’s tracking is like (for comparison) and whether they respect DNT, but frankly this doesn’t sound to me like the outcry is justified. According to some guy from GitLab support who posted internal chat logs on the Orange Website (with permission), you can also opt out of that tracking entirely for an instance.
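
                                                                      Mechanically, the behaviour described in the quote boils down to a server-side check on the DNT request header before emitting the tracking snippet; a minimal sketch (function name and script path are made up):

                                                                      # Only emit the telemetry <script> tag when the browser did not send "DNT: 1".
                                                                      def telemetry_snippet(headers: dict) -> str:
                                                                          if headers.get("DNT") == "1":
                                                                              return ""  # Do Not Track: skip the tracking JavaScript entirely
                                                                          return '<script async src="/assets/telemetry.js"></script>'

                                                                      print(telemetry_snippet({"DNT": "1"}))  # -> "" (no snippet)
                                                                      print(telemetry_snippet({}))            # -> the script tag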

                                                                    I’m not affiliated with GitLab other than liking the tool but I think we should all try to stay based on facts and less on FUD.

                                                                    1. 7

                                                                      It’s really nice to have relatively boring rustc releases.

                                                                      1. 3

                                                                        It’s the calm before the storm. :D

                                                                        1. 2

                                                                          What’s the storm looking like?

                                                                          1. 6

                                                                              async/await is landing in the next release, and that’s going to cause a storm in the ecosystem for sure.

                                                                            1. 4

                                                                                Which also means it is now in the beta channel, and Rust’s beta builds are incredibly stable in my experience.

                                                                              1. 2

                                                                                Ooh I should try it out. I usually use beta by default, just to ensure I’m not writing anything that will produce surprises on the next stable release. Hasn’t happened in a long time, but it can happen.

                                                                      1. 3

                                                                          Good for you. The memory requirements for those wrapped products seem exceptionally high. I think you could comfortably run a mail server on the lowest-tier EC2 machine or equivalent. Like other commenters, I’ve been doing my own for a while too, initially on a P200 with 32M of RAM, although I wasn’t using modern spam filtering back then (just SpamAssassin, IIRC). My present Exim instance is using 54212 KiB virtual, 2280 KiB resident RAM right now.

                                                                          To pick up on one other point:

                                                                        No really, this is the best choice for truly private email. Not ProtonMail, not Tutanota. Sure, they claim so and I don’t dispute it. Quoting Drew Devault,

                                                                        Truly secure systems do not require you to trust the service provider.

                                                                          All you’ve done is push who you are trusting further down the stack, to the VPS level. Your email server and email are on a virtual storage device being provided by a commercial business. What trust do you have that they aren’t snooping on the bits? Do you know where those bits are actually located? Do you know where they are replicated to, or backed up to? When a RAID disk with some of your data on it is pulled and replaced, do you know where the dead drive is going? Did they wipe it? Did they RMA it without wiping it? Are they using encryption at any layer of their service? Are you? What about their backup scheme? Are your emails on an LTO tape somewhere?

                                                                        An anecdote from my adventures with VPS providers. I have moved through a few providers over the last 15 years. About 5 years after moving from my last host to my current one, I started receiving cron mails from my old server. The old VPS provider had not scrubbed my old VM when I stopped my account, and after performing a system migration or some other kind of maintenance, my old VM got booted up again. I wasn’t paying them, but there it was, in the state I had left it in when I switched it off. I was still able to log into it, so I made sure to scrub the bits as best I could myself that time.

                                                                        1. 2

                                                                          All you’ve done is push who you are trusting further down the stack, to the VPS level …

                                                                          You’re right. But as I’d also mentioned in the article, it boils down to your threat model. I personally feel this is as far as I would go. Of course, a homelab setup would be ideal, but that’s not something I can achieve at the moment – so settling for a VPS is the next best thing.

                                                                          An anecdote from my adventures with VPS providers …

                                                                          This is very interesting, thanks for sharing. I never once thought something like this would/could occur. Something I’ll have to keep in mind.

                                                                          1. 1

                                                                            I’m running mailcow in a test setup. 1 GB of 2 GB used (subtract ~150 MB for one other container) while idle - but I don’t think it will go up a lot under load; it’s mostly that there are just quite a few containers/daemons running.

                                                                            Depending on what you view as a “mailserver”. My prod mailserver (postfix, dovecot, spamassassin, apache+postfixadmin+postgres) uses only ~180 MB, but it’s not a full-featured suite. Maybe if you strip off the calendar stuff, mailcow would be able to run in 512 MB.

                                                                            1. 1

                                                                              Depending on what you view as a “mailserver”

                                                                              Indeed. I’d personally consider calendaring out of scope, but it seems this isn’t universal.

                                                                            2. 1

                                                                              Well, at some point you have to decide on the best cost compromise between

                                                                              1. Spending the time setting everything up just right, perfectly secure and well-maintained
                                                                              2. Caring about whether some emails get lost in the void without you knowing (GMail and Outlook are known to do that)
                                                                              3. Privacy (or cost resulting from lack thereof)

                                                                              So you have to weigh many possible scenarios against one another, e.g. the cost of GMail reading your mail vs. the opportunity cost of mail sent from your private mail server silently getting lost. I suppose that, for the vast majority of people, the worst damage GMail reading your mail could realistically do is less than the cost of learning how to run your own mail infra properly.

                                                                              I’m not debating whether it’s a good learning exercise. Setting up your own infra with LDAP and everything is a great learning exercise.

                                                                            1. 6

                                                                              Quite a few things:

                                                                              • Arch Linux - lots of packaging like always
                                                                              • proby - checks a port on a different server and returns the status over HTTP
                                                                              • miniserve - small convenient webserver for sharing files
                                                                              • dummyhttp - dummy webserver that always returns a fixed response and logs all incoming requests
                                                                              • wmfocus - focus i3 windows visually
                                                                              • genact - nonsense activity generator for impressing colleagues
                                                                              • mt940 - a parser for bank statement formats
                                                                              1. 3

                                                                                I love genact. The other things are more useful, but genact is pure art.

                                                                                1. 1

                                                                                  Thanks :D

                                                                              1. 2

                                                                                …should I be getting into DevOps/SRE?

                                                                                1. 6

                                                                                  Well, if you’re good with Linux stuff anyway and writing lots of YAML is your cup o’ tea, sure!

                                                                                  Snark aside, if you like overseeing and engineering complex systems and can stay on top of new products coming out every other day and dying just as often, it might be a good field to look at. To get started, try setting up a small Kubernetes cluster on cloud servers and then set up a modern CI/CD pipeline on it. If you like that, DevOps is for you.

                                                                                1. 3

                                                                                  I would love something akin to qalc in Rust. Do you have anything like that planned? I’d like to be able to do stuff like “20% + 100” and “200 USD to EUR”.

                                                                                  1. 3

                                                                                    https://github.com/tiffany352/rink-rs - a unit conversion tool and library written in Rust.
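
                                                                                    If you just want to script it, a minimal sketch (assuming the rink binary is installed, on PATH, and accepts the query as a command-line argument - check rink --help for your version):

                                                                                        use std::process::Command;

                                                                                        fn main() {
                                                                                            // Hypothetical one-off query; rink also has an interactive REPL.
                                                                                            let output = Command::new("rink")
                                                                                                .arg("200 USD to EUR")
                                                                                                .output()
                                                                                                .expect("failed to run rink");
                                                                                            print!("{}", String::from_utf8_lossy(&output.stdout));
                                                                                        }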

                                                                                    1. 2

                                                                                      Personally I love Lit css. It’s a work of art :)

                                                                                      1. 2

                                                                                        While it’s small, it’s not a “classless” CSS framework so it’s not quite in the same ballpark of all these other frameworks. I think the whole point of the other frameworks and OP is that you just add them and you’re done.

                                                                                        1. 1

                                                                                          Ah, yes, you are right. While most stuff will work without classes, you need at least the container class for good spacing.

                                                                                      2. 1

                                                                                        Another alternative is Marx https://mblode.github.io/marx/.

                                                                                      1. 9

                                                                                        Have you checked out wasmer yet? I think it will come in quite handy.

                                                                                        Also, I do quite a bit with WebAssembly. In fact, I’ve been using emscripten since before there even was WASM. Check out genact for a useless project I did that outputs to WASM as well as native platforms.
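
                                                                                        The basic trick for targeting both isn’t genact-specific - roughly, you branch on the target architecture with cfg attributes and keep the platform-specific bits behind small functions. A rough sketch (not genact’s actual code):

                                                                                            #[cfg(not(target_arch = "wasm32"))]
                                                                                            fn sleep_ms(ms: u64) {
                                                                                                // On native targets we can simply block the thread.
                                                                                                std::thread::sleep(std::time::Duration::from_millis(ms));
                                                                                            }

                                                                                            #[cfg(target_arch = "wasm32")]
                                                                                            fn sleep_ms(_ms: u64) {
                                                                                                // In the browser you can't block; you'd schedule a callback or
                                                                                                // yield to the event loop via your bindings (emscripten,
                                                                                                // wasm-bindgen, ...). Left as a no-op in this sketch.
                                                                                            }

                                                                                            fn main() {
                                                                                                println!("pretending to do important work...");
                                                                                                sleep_ms(500);
                                                                                            }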

                                                                                        1. 1

                                                                                          I have not seen that, thanks for the link, I will check it out!

                                                                                        1. 2

                                                                                          I would really appreciate it if they’d update the official docker images more often.

                                                                                          1. 1

                                                                                            It’s my understanding that the Docker container contains an autoupdate mechanism. I’m able to pull the latest version of PMS by restarting the container.

                                                                                            1. 2

                                                                                              That seems silly. Isn’t the point of a Docker container that you know the exact state of the components therein?

                                                                                              1. 1

                                                                                                Oh what? I’m going to try that!

                                                                                                Tested it, doesn’t work. :/ Using plexinc/pms-docker

                                                                                                1. 1

                                                                                                  Ah, sorry. It’s only the public and plexpass tagged images that have this functionality.

                                                                                                  In addition to the standard version and latest tags, two other tags exist: plexpass and public. These two images behave differently than your typical containers. These two images do not have any Plex Media Server binary installed. Instead, when these containers are run, they will perform an update check and fetch the latest version, install it, and then continue execution. They also run the update check whenever the container is restarted. To update the version in the container, simply stop the container and start container again when you have a network connection. The startup script will automatically fetch the appropriate version and install it before starting the Plex Media Server.

                                                                                            1. 4

                                                                                              So I’m currently trying to decide which (if any) CI tool to choose for Arch Linux. We currently have absolutely no automated building except in a few specialized cases. Do you think your CI would be happy looking after 15000 packages, each of which may have different versions being built?

                                                                                              1. 3

                                                                                                Absolutely. I maintain Arch Linux repos with builds.sr.ht myself, we can put our heads together and figure something out. Shoot me an email at sir@cmpwn.com, or hit me up on IRC: ddevault.

                                                                                              1. 1

                                                                                                Am I blind or is there no mention of cargo-web? I think that is an invaluable tool!

                                                                                                1. 3

                                                                                                  Continuing my work on my Rust MT940 parser implementation using pest. For anyone wondering, I tested nom, combine, and pest, and so far pest is by far the most usable of the three.

                                                                                                  And yes, this is a spare time project. I parse bad banking formats in my spare time. What am I doing.
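
                                                                                                  To give a flavour of why pest feels nicer to me: you write the grammar declaratively and get a typed parse tree back. A toy sketch (not the real mt940 grammar - just one simplified tag line; assumes pest/pest_derive 2.x, which provide the grammar_inline attribute):

                                                                                                      use pest::Parser;
                                                                                                      use pest_derive::Parser;

                                                                                                      // Heavily simplified: one SWIFT-style tag line such as ":20:STARTUMSE".
                                                                                                      #[derive(Parser)]
                                                                                                      #[grammar_inline = r#"
                                                                                                      tag   = { ":" ~ ASCII_DIGIT{2} ~ ASCII_ALPHA? ~ ":" }
                                                                                                      value = { (!NEWLINE ~ ANY)* }
                                                                                                      field = { tag ~ value }
                                                                                                      "#]
                                                                                                      struct Mt940Sketch;

                                                                                                      fn main() {
                                                                                                          let mut pairs = Mt940Sketch::parse(Rule::field, ":20:STARTUMSE")
                                                                                                              .expect("parse failed");
                                                                                                          // The top-level pair is `field`; its children are `tag` and `value`.
                                                                                                          for part in pairs.next().unwrap().into_inner() {
                                                                                                              println!("{:?} -> {}", part.as_rule(), part.as_str());
                                                                                                          }
                                                                                                      }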

                                                                                                  1. 9

                                                                                                    Sadly this does not compare the quality of the different clients, which definitely plays a big role for me. Also, an in-depth feature analysis would have been nice; I’m missing criteria such as API access, bot scriptability, stickers, GIFs, and platform integration.

                                                                                                    1. 2

                                                                                                      Sadly this does not compare the quality of the different clients, which definitely plays a big role for me.

                                                                                                      Perhaps the most important feature, but, unfortunately, also not as objective as the fields listed here.