1. 14

    I have to admit – as someone who lives in the USA, the “Make ___ ___ Again” formulation is super unpleasant in basically every context.

    1. 26

      I think return and throw should be emphasized. I want it to be very clear if there are early exits!

      1. -2

        Even better:

        Remove superfluous control-flow constructs like return, throw, for, break and continue from the language – they are pointless relics of the past anyway.

        The slight convenience of being able to write “nicer” code is completely offset by the difficulty of reading such code a week later (or a month, or a year).

        1. 13

          Oh gods, I could not disagree more; I think explicit control flow should be prominent. One of the languages I find most frustrating has a tonne of implicit and optional control flow that renders it super hard to read or write, as it allows for way too much personal style.

          1. 3

            I’m not sure how you are disagreeing with me – my suggestion is that there is only one set of control-flow keywords:

            • if: replaces return, throw, switch/case, break, continue.
            • while: alternative to recursion. Replaces for loops.

            With this

            • there is neither implicit, nor optional control flow
            • there are no styles, because there is only one choice
            • things become easy to read, because there are only 2 keywords to look out for, not 10
            • things become easy to understand, because there are simply no interactions between different “levels” of control-flow anymore
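
            This discipline can be sketched in Rust (used here purely as familiar notation – the proposal itself is language-agnostic): a linear search where the usual early return/break becomes a flag that the while condition checks.

            ```rust
            // Sketch of the "if + while only" discipline: no return, break or
            // continue; the early exit is a flag tested by the loop condition.
            fn find(haystack: &[i32], needle: i32) -> Option<usize> {
                let mut i = 0;
                let mut searching = true;
                let mut result = None;
                while searching && i < haystack.len() {
                    if haystack[i] == needle {
                        result = Some(i);
                        searching = false;
                    } else {
                        i += 1;
                    }
                }
                result
            }

            fn main() {
                assert_eq!(find(&[1, 2, 3], 2), Some(1));
                assert_eq!(find(&[1, 2, 3], 9), None);
            }
            ```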
            1. 9

              Would absolutely love to program in a language with insane nested if statements because returning early is for chumps apparently?

              1. 2

                Couldn’t you pick any existing language for that? It’s not like people aren’t doing it voluntarily…

              2. 4

                This is almost Rust. Everything-is-an-expression makes it possible to write the entire function body as just ifs and match blocks that exhaustively handle all possible cases, so you never need an explicit return.

                However, in such a case the last expression of every “leaf” block becomes an implicit return value. If you overdo it, it’s completely inscrutable.
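
                For instance (a hedged sketch, not from the parent): the whole body is one expression tree, and each leaf is the value the function evaluates to.

                ```rust
                // The whole function body is a single if/else expression; each
                // leaf string is an implicit return value, no `return` keyword.
                fn sign(x: i32) -> &'static str {
                    if x > 0 {
                        "positive"
                    } else if x < 0 {
                        "negative"
                    } else {
                        "zero"
                    }
                }

                fn main() {
                    assert_eq!(sign(3), "positive");
                    assert_eq!(sign(-1), "negative");
                    assert_eq!(sign(0), "zero");
                }
                ```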

                BTW: your list 100% matches brainfuck: no implicit control flow, no style choices, no superfluous keywords, only one kind of control flow. Less is not always better.

                1. 3

                  This is almost Rust.

                  I don’t think this is even remotely true – there is a huge difference between “you could do that” and “there is no other way of doing it”.

                  The latter means that the code you haven’t written (99.99% of the code) is written in that specific style; the former only means you can make your own code follow these rules (which is rather irrelevant).

                  Rust has waaaaay too much going on; it’s not a good data point.

                  Less is not always better.

                  At least the goal posts aren’t moving every single year about the “right” language size, like in Rust’s “more features == better language” ideology. :-)

                  This is why my Rust libraries (very small, only about 5 million total downloads) are permanently staying on Rust 1.13 (released 2016), because I don’t feel the feature additions in the meantime have been worth the cost.

                  your list 100% matches brainfuck

                  Turing tarpit much?

                2. 2

                  while: alternative to recursion. Replaces for loops.

                  I don’t understand this. return, throw, and so on are also alternatives to if, but we’re removing them; yet loops, which can be completely removed and replaced with simple recursion, are an alternative that we’re keeping? How so?

                  1. 3

                    The reason is that sometimes it’s hard to write tail-recursive code (or code the compiler can turn into PTCs), so given the choice between code that leaks stack frames at runtime and a while loop, I’ll pick the latter.
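
                    A sketch of that trade-off in Rust (rustc does not guarantee tail-call elimination, so the recursive version really does grow the stack with input size, while the loop runs in constant stack space):

                    ```rust
                    // Recursive sum: one stack frame per element, and this shape
                    // isn't even tail-recursive (the `+` happens after the call).
                    fn sum_rec(xs: &[u64]) -> u64 {
                        if xs.is_empty() { 0 } else { xs[0] + sum_rec(&xs[1..]) }
                    }

                    // Loop version: same result, constant stack space.
                    fn sum_loop(xs: &[u64]) -> u64 {
                        let mut total = 0;
                        let mut rest = xs;
                        while !rest.is_empty() {
                            total += rest[0];
                            rest = &rest[1..];
                        }
                        total
                    }

                    fn main() {
                        assert_eq!(sum_rec(&[1, 2, 3]), 6);
                        assert_eq!(sum_loop(&[1, 2, 3]), 6);
                    }
                    ```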

                    1. 2

                      a fair enough point, I understand

                  2. 2
                    • if: replaces return, throw, switch/case, break, continue.
                    • while: alternative to recursion. Replaces for loops.

                    Would you mind showing us a code example?

                    1. 2
                      class HashMap[K: Identity + Hash, V]()
                        var insertedAndDeleted: BitSet = BitSet(0)
                        var keys: Array[K] = Array.empty()
                        var values: Array[V] = Array.empty()
                      
                        var size: Int = 0
                        var cap: Int = 0
                      
                        ...
                      
                        fun isLive(idx: Int): Bool =
                         self.insertedAndDeleted.contains(2 * idx) &&
                         self.insertedAndDeleted.contains(2 * idx + 1).not
                      
                        fun get(key: K): Option[V] =
                          assert(self.size < self.cap)
                      
                          var hash = key.hash
                          var idx = hash.bitwiseAnd(self.cap - 1)
                      
                          var continue = true
                          var result = None
                      
                          while continue
                          do
                            if self.isLive(idx)
                            then
                              let currentKey = self.keys(idx)
                              if currentKey.hash == hash && currentKey === key
                              then result = Some(self.values(idx))
                              else ()
                              idx = (idx + 1).bitwiseAnd(self.cap - 1)
                            else
                              continue = false
                      
                          result
                      

                      Here is some write-up regarding control flow.

                    2. 1

                      I agree with most things you say, but the “for” construct is invaluable for math-heavy code where “while” feels very unnatural and verbose. I agree that the notation for “for” should be simplified to only allow an iterator over a fixed range of two constant ints. I would actually prefer to keep this “for” (which is general enough) and discard “while”. It makes your code much easier to reason about (you know exactly how many loops it is going to do, without needing to understand the logic of the program).

                      1. 1

                        I think you can make that case for every control-flow keyword; that’s basically how most languages have accumulated every control-flow keyword ever invented in the last 60 years.

                        I’m not claiming that not having some specific keyword isn’t inconvenient in cases that keyword would shine – I’m making the case that the inconvenience caused is smaller than having to learn, remember and understand how the half dozen control flow keywords interact with each other.

                        It’s a bit like static imports in Java – sure, they are sometimes convenient (when writing code), but I’d argue that having two trivial, interchangeable ways to write the same thing is a much bigger inconvenience (when reading it).

                  3. 2

                    Discouraging use of control flow words makes sense for future readability, but getting rid of them completely is too strong. Sometimes you really need to use these old style control flow constructs.

                    J gets this right – idiomatic J code is a series of one-liners using zero old-style control words (therefore allowing code to be one-way-to-do-things idiomatic, as you suggest in a reply). But if, while, try, catch, return, for, continue, goto (?) are all still there if you need them.

                    1. 1

                      getting rid of them completely is too strong

                      I didn’t propose that.

                      But if, while, try, catch, return, for, continue, goto (?), are all still there if you need them.

                      So basically “don’t change anything”, with the result that none of the benefits are realized?

                      1. 1

                        I didn’t propose that.

                        Ah, I misread. Then we’re talking about different things – you were thinking about removing some keywords and I was thinking about removing all keywords.

                        So basically “don’t change anything”, with the result that none of the benefits are realized?

                        I think the idea is to encourage refactoring foreign code using if, for, and while, into a more idiomatic form using combinators and no if, for, and while. Not removing these keywords from the language outright just makes it easier to get started without knowing all about J’s combinators, which is good since everyone’s likely coming from languages that use control flow keywords. Again, this is a different view from yours.
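
                        The shape of that refactoring can be sketched with Rust iterators standing in for J’s combinators (the example is mine, not from the parent): the same computation, first with explicit for/if, then with no control-flow keywords at all.

                        ```rust
                        // Explicit control flow: for + if + a mutable accumulator.
                        fn sum_even_squares_loop(xs: &[i32]) -> i32 {
                            let mut total = 0;
                            for &x in xs {
                                if x % 2 == 0 {
                                    total += x * x;
                                }
                            }
                            total
                        }

                        // Combinator style: filter/map/sum, no keywords, no mutation.
                        fn sum_even_squares(xs: &[i32]) -> i32 {
                            xs.iter().filter(|&&x| x % 2 == 0).map(|&x| x * x).sum()
                        }

                        fn main() {
                            assert_eq!(sum_even_squares_loop(&[1, 2, 3, 4]), 20);
                            assert_eq!(sum_even_squares(&[1, 2, 3, 4]), 20);
                        }
                        ```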

                1. 7

                  Probably one of the best use cases for functional package managers right now is for creating reproducible development environments on any machine. I am looking to set up Nix on both my Mac and Linux servers so my toolchain remains consistent between environments.

                  1. 7

                    I looked into it last night and it has a serious issue on Catalina. One hopes they resolve it soon.

                    1. 4

                      There are relatively straightforward fixes to create a synthetic drive at /nix. It’s not official yet, but it works just fine.

                      https://github.com/NixOS/nix/issues/2925#issuecomment-539570232

                      1. 3

                        This is what I use and what I’m going to include in my tutorial post that I’ll be writing in the next week.

                  1. 21

                    The problem with this post is that when people complain about Rust programmes having too many dependencies they’re not actually complaining about them having too many dependencies, they’re complaining about them taking too long to compile and having too many untrusted dependencies.

                    The compile time issue has been discussed loads and discussing it here again probably wouldn’t be very productive, but it’s still worth pointing one thing out: it wouldn’t be nearly as much of an issue if you didn’t also have to build all the dependencies as well. Even quite big programmes, in my experience, don’t take very long to build. What takes forever is building them and all their recursive dependencies every time I update a programme, even when the dependencies haven’t changed.

                    There’s also the factor of keeping everything synced up. Programmes shouldn’t be using whatever versions of their dependencies happened to be current the last time the programme was updated by its authors. They should be using the latest version that actually exists, for obvious security reasons.

                    But the much more important factor is trust.

                    Now you can question how effective e.g. Debian is at evaluating new additions to its repositories and changes to the packages in its repositories, but Debian packages are more carefully curated than Cargo packages. Packages on crates.io can be uploaded by anyone, and basically the only thing most people use to evaluate them is whether they look suspicious. If someone has good documentation and lots of downloads, most people - certainly I - just assume that they must be fairly trustworthy. That’s a much lower bar than the bar to get into Debian.

                    The reason most people avoid any dependencies that aren’t in their system package manager is that downloading random crap from the internet does and should feel scary. Whenever I have to set up a Windows computer it feels so strange just downloading programs from random websites. For many programs the standard way to download them seems to be from Softpedia or one of those similar highly suspicious ad-supported software repositories.

                    Recently I installed spotifyd through the Arch User Repositories. Not very trustworthy, using the AUR, you might say, but yay shows you the PKGBUILD for the package you’re installing and it looked fine to me assuming that Spotifyd was fine.

                    Now the spotifyd crate has 27 direct dependencies. Of those, some are standard crates across the whole Rust ecosystem and are maintained by well-known members of the Rust community or the Rust team themselves like lazy_static, failure, futures, libc, log, serde, tokio-core, tokio-io, tokio-signal, url, percent-encoding, and env_logger. It seems fairly safe to trust them. However it’s still concerning to me that many are pre-1.0. tokio-* and futures are understandable, but libc? In fact, Alex Crichton just (as I’m typing this) posted an issue to the libc page on GitHub saying it has no active maintainers. This is a repository hosted under the rust-lang organisation on GitHub, which you’d think would mean it’d be officially supported by the Rust team, which contains a LOT of code, and which has >23m downloads.

                    Take chrono (0.4) for example, a library with a huge number of downloads. Its author, responding over two years ago to an issue on GitHub, lists many backwards-incompatible API changes he’d like to make before releasing the library as 1.0. That’s fine, that’s the whole point of releasing it as v0.4 as he is currently doing, but why is it considered okay for this library to have 9m downloads on crates.io and many many reverse dependencies that could all need to rewrite swathes of code in future?

                    daemonize (0.4) hasn’t had changes on Git for nearly a year. fern (0.5) hasn’t seen changes for four months. gethostname is a crate for getting the computer’s hostname that seems considerably over-complicated for what would, in C, be one line per platform with some #ifdefs. It also lets you set the system hostname for some reason, despite the name.

                    hex is another crate that is version 0.4. Seems to be a pretty common problem in Rust: someone works on something for a bit, then finds something else to do, but nothing stops you from using these packages in your software. It’s only got 60 commits in the repository’s history, which seems like not many, but it’s only very simple functionality. I’d suggest this should be in the standard library. Its code is ‘mostly extracted from rustc-serialize’, which makes the author’s suggestion that

                    Part of my theory is that C programs commonly omit deps by re-implementing the bits they need themselves, because for small stuff it’s far easier to just write or copy-paste your own code than to actually use a library.

                    highly amusing: it seems that the same thing happens in Rust anyway because, well, for small stuff it’s easier to just copy-paste or rewrite code than to use a big complicated library in Rust as well.

                    I’m not going to go through the rest of its dependencies because I think the point has been made: people’s concerns about the Cargo/npm approach to dependencies are quite well-founded. Quite reasonable programmes have huge numbers of recursive dependencies (Spotifyd required >400 dependent crates when I built it) that need to be rebuilt every time they’re updated, and the dependencies are untrusted, unverified, unstable and are not automatically kept up-to-date by my system package manager.

                    1. 7

                      There’s also the factor of keeping everything synced up. Programmes shouldn’t be using whatever versions of their dependencies happened to be current the last time the programme was updated by its authors. They should be using the latest version that actually exists, for obvious security reasons.

                      I agree with quite a bit of your post, but this struck me as the polar opposite of security.

                      Maintainers get hacked. I’d rather have the version I’ve audited the source of than some random newer one.

                      1. 14

                        The likelihood that you’ve successfully discovered all security flaws in a library through auditing is nearly zero. Your only defense against the flaws someone else will find first is being able to rapidly and safely upgrade to a version that fixes them.

                        For context… my employer very much cares about the security of our software. A lot of very smart people – and I count myself among them in this narrow regard – have weighed the alternatives, including auditing an unchanging pool of trusted code, and concluded that agile and frequent updates to the newest available version of our dependencies is the least risky approach to software security.

                      2. 7

                        Excellent counterpoints, thank you! Can I include your comment on that article as an appendix?

                        The trust problem is indeed part of the real problem (the other being robustness, which is related), and compile times are related because they force people to notice and think about what’s going on. There are a few different approaches to trying to deal with trust; I think the biggest ones are crev and cargo-audit, but it’s a bit of an unsolved problem how well these things are actually going to work in the long run. So I suppose the real question is not how people use the code that makes up their external libraries, or what tools they use, but rather how that code is managed and distributed – who is responsible for it. Debian can include code whose maintainer has been dead for 20 years because they have maintainers of their own, a security team, and a mechanism to consistently apply patches to the code if necessary. If a package in Debian becomes orphaned it can still be maintained by other people without requiring a fork just to be able to publish on crates.io (which I have had to do), and if it remains orphaned then it can be removed from the next version of Debian and the rest of the system will adjust around that fact.

                        There’s room for counterpoints in all your examples, and you raise a number of really interesting questions: Is four months, or a year, actually that long for no changes in a library if that library performs its purpose and has no known security problems? There’s plenty of C code out there with no version number at all – how does one manage that? If a problem is discovered, who should fix it and how does that fix get communicated? What should and should not be in the standard library? Like I said at the end, these are big problems that don’t have a single solution.

                        1. 4

                          Excellent counterpoints, thank you! Can I include your comment on that article as an appendix?

                          Thanks! I think it would be better to just link the lobsters thread and say something like ‘there was some interesting discussion here about this’. I don’t think my comment deserves to be highlighted over the other wonderful comments here.

                          who is responsible for it

                          I think this is a key point. Ultimately, there’s nothing inherent in the compilation model forcing it to be this way and there’s nothing inherent in dpkg forcing Debian to be well maintained and audited. And in fact I’m sure there are lots of bits of Debian that aren’t very well audited.

                          Maybe what crates.io needs is something akin to Arch’s distinction between core, extra, community and the AUR. Packages could be marked explicitly as being core (developed, maintained and supported by the Rust team, basically the ‘extended standard library’), extra (endorsed by the Rust team but not actively developed by them, closely audited for changes, I think this is what rust-lang-nursery is meant to be, roughly?), community (crates that are supported by and audited by a group of ‘trusted’ people, which would be a fairly broad group of people, much like the Arch ‘trusted users’ group, the idea being that they’re at least notionally endorsed by a group of at least notionally trusted people, but you should still be careful, but you’ll probably be fine), and other (all other crates).

                          Joe Bloggs creates a crate he thinks is useful. He puts it on crates.io. It goes into the ‘other’ category. Joe posts it to /r/rust-lang and people really like the design or the API or the functionality or whatever, but people have some concerns that it’s not super idiomatic. Joe fixes up most of these issues and some trusted community member adopts it into the ‘community’ category after a little while, and maybe Joe becomes a trusted community member himself after a little while.

                          Eventually this crate becomes so widely used that it’s considered not just a popular third party library but a foundational component of the ecosystem. It gets imported into the ‘extra’ category and now it’s guaranteed regular fuzzing, some oversight over changes, reasonably close auditing from the community and the Rust developers, etc.

                          Maybe eventually it becomes so core to the ecosystem that the Rust team adopts it as a ‘core’ crate, meaning it’s now actively supported and developed by the Rust team.

                          Or maybe it stops at any one of those stages. Being in ‘other’ doesn’t mean it’s bad, just not part of the standard ecosystem. People would be taught to be very wary of adding packages in ‘other’ without closely auditing them, because they are totally untrusted.

                          Obviously for something to be imported into ‘X’ its dependencies would all need to be ‘X’ or higher.

                          Thing is, Rust already has the beginnings of this. It does have crates that are developed by the core team. It does have blessed crates not developed by the core team but acknowledged as being used in almost every large project and important to the health of the ecosystem that are regularly fuzzed. It does have a broad set of widely used libraries that are considered trustworthy. It just needs these categories to be explicitly marked on the packages themselves and tooling for seeing how trustworthy the packages you are using actually are.

                        2. 6

                          I can’t help thinking there’s a problem in your argument, though.

                          Your concern, if I’ve understood it, is that people are only trusting, say, a crate because a bunch of other people apparently have trusted it, too, and, presumably, those people haven’t had bad things happen as a result, or they’d have complained in places you could see. But then you seem to ascribe higher value to Linux-distro package repositories, when the only argument that can be made in support of trusting the distro’s packages is that… a bunch of other people apparently have trusted them, too, and presumably haven’t had bad things happen as a result.

                          I’m not sure there’s a way to qualitatively distinguish between “Debian’s packages are trustworthy because lots of people say so” and “crates.io is trustworthy because lots of people say so”. And both groups certainly face negative repercussions if the packages turn out not to be trustworthy, because both groups have staked at least some of their reputation on the claim of trustworthiness.

                          And keep in mind that mere presence of security issues doesn’t seem to impact the trustworthiness of, say, the Debian maintainers. Debian routinely updates packages to fix CVEs, for example, and nobody seems to think it reflects poorly on Debian that those vulnerable packages were in Debian’s repositories. In fact, quite the opposite: we explicitly don’t expect Debian package maintainers to be doing comprehensive security audits that might catch those problems up-front.

                          Merely restricting the number of people who can upload/add a package also doesn’t do it, because (for example) restrictive access policies occur in both good and bad projects, and we all know that.

                          Or if your argument is “I personally trust the Debian maintainers more than I trust the crate maintainers”, that’s an argument that’s persuasive to you personally, but not one that necessarily is persuasive to others, unless others are open to being persuaded on the basis of “other people trust them”, which then lands you back at the start.

                          1. 6

                            Take chrono (0.4) for example, a library with a huge number of downloads. Its author, responding over two years ago to an issue on GitHub, lists many backwards-incompatible API changes he’d like to make before releasing the library as 1.0. That’s fine, that’s the whole point of releasing it as v0.4 as he is currently doing, but why is it considered okay for this library to have 9m downloads on crates.io and many many reverse dependencies that could all need to rewrite swathes of code in future?

                            Why do you need to rewrite? Just pin your dep to 0.4 and be done with it. If 1.0 is too much work, don’t update. I can’t see this as a problem, to be honest. It seems at best a nitpick about the fact that changes happen, and it’s not Rust-specific. Look at OpenSSL’s breaking changes between versions, and a lot more stuff depends on that.
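
                            A minimal sketch of the pinning being suggested (version numbers illustrative; both crates are ones mentioned in this thread). Cargo.toml supports both the default caret requirement and an exact pin:

                            ```toml
                            [dependencies]
                            # Default caret requirement: any 0.4.x with x >= 6
                            # satisfies it, but 0.5.0 never will.
                            chrono = "0.4.6"

                            # Exact pin: Cargo will only ever select this version.
                            hex = "=0.4.0"
                            ```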

                            hex is another crate that is version 0.4. Seems to be a pretty common problem in Rust: someone works on something for a bit then finds something else to do, but nothing stops you from using these packages in your software.

                            I see this the opposite way: the software is DONE and doesn’t need to change, which is a good thing. I find the incessant view that software needs to be constantly coddled/babied and updated an antipattern. Do you see that something needs to change in that library? Part of having so many dependencies should mean things get stable and essentially need no more updates, or at most a few updates. I want libraries to be stable and not get updated. It’s not like, if we had the equivalent of abs() in a library, we would think it needs to be updated every few months. Once it’s done it’s done, barring syntax changes or other changes to the overall language.

                            Quite reasonable programmes have huge numbers of recursive dependencies (Spotifyd required >400 dependent crates when I built it) that need to be rebuilt every time they’re updated, and the dependencies are untrusted, unverified, unstable and are not automatically kept up-to-date by my system package manager.

                            As a Nix user I’d say this reflects more on your package manager of choice than on Rust itself. None of this need be an issue; it’s more a matter of how package management has historically viewed dependencies.

                            1. 0

                              If the software were done it would be 1.0 and stable. It isn’t done. It still has serious issues that need to be fixed, which is why the authors want to change the API for 1.0. That’s kind of the point of 0.x releases, they’re meant to be experimental unstable releases that can’t be relied upon.

                              1. 2

                                If a crate reaches stable at 0.X.Y unexpectedly, why force evaluation of a 1.0 upgrade if there are no breaking changes to be made for it? Sure, let the next update be 1.0, but there’s no immediate reason to move, especially when you can say this in the crate’s top-level documentation and update that with just a 0.X.(Y+1) minor bump.

                                1. 0

                                  Because 1.0 is literally defined to mean stable for a Rust crate. That’s the definition of stable. They’re the same thing. 1.0+ = stable. They are synonyms.

                                  1. 1

                                    The Semver FAQ actually addresses this in a fairly nice way.
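
                                    The caret rule at issue, sketched as plain Rust (this mirrors Cargo’s documented default caret behaviour; the function is illustrative, not Cargo’s actual code):

                                    ```rust
                                    // Does version v satisfy a caret requirement ^req?
                                    // Versions are (major, minor, patch) triples.
                                    fn caret_compatible(req: (u64, u64, u64), v: (u64, u64, u64)) -> bool {
                                        let (rmaj, rmin, rpat) = req;
                                        let (maj, min, pat) = v;
                                        if rmaj > 0 {
                                            // 1.0+: same major, any newer minor/patch.
                                            maj == rmaj && (min, pat) >= (rmin, rpat)
                                        } else if rmin > 0 {
                                            // 0.x: the minor version is the compatibility boundary.
                                            maj == 0 && min == rmin && pat >= rpat
                                        } else {
                                            // 0.0.x: only the exact version is considered compatible.
                                            v == req
                                        }
                                    }

                                    fn main() {
                                        assert!(caret_compatible((1, 2, 0), (1, 9, 3)));
                                        assert!(!caret_compatible((0, 4, 0), (0, 5, 0)));
                                        assert!(caret_compatible((0, 4, 1), (0, 4, 9)));
                                        assert!(!caret_compatible((0, 0, 3), (0, 0, 4)));
                                    }
                                    ```

                                    So a 0.4 crate can ship fixes as 0.4.x without anyone being forced through a 1.0 evaluation.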

                                2. 2

                                  Also true! I’d forgotten about that bit.

                              2. 3

                                recursive dependencies […] that need to be rebuilt every time they’re updated, and the dependencies are untrusted, unverified, unstable and are not automatically kept up-to-date by my system package manager.

                                Lack of updates for transitive deps seems like merely a lack of integration with the package manager. There could be something that watches the Cargo index, rebuilds affected packages and releases updates for them. I know package managers prefer unbundling and dynamic linking instead, but when dependencies rely on using generics and inlining, there’s no easy way around that.

                              1. 7

                                Is the purpose of the ServiceWorker only to invalidate the cache of the home page? I don’t understand why that is needed.

                                1. 2

                                  That’s where I was at; I could not figure out the why of … well, any of it. But that part in particular perplexed me.

                                1. 3

                                  System D

                                  oh boy better be careful they don’t hunt you down for the improper typesetting of “systemd”

                                  1. 5

                                    Haha! Yes, it upsets people when it’s not written as “systemd”, but in the title it’s actually meant to be an allusion to old science fiction films, think “Escape from planet X”. I always capitalise proper nouns though, and so I’ve (otherwise) called it Systemd (as I call my own system Dinit, though the executable is called “dinit”). As far as I’m concerned you can spell it however you like :)

                                    1. 5

                                      In my experience, people who insist on calling it SystemD are the pettiest of detractors.

                                      1. 2

                                        Also they’d insist on systemd, all lowercase, lol

                                        Yeah, agreed, it’s incredibly petty and stupid.

                                        1. 1

                                          I mean, this is a community that still uses “Micro$oft” as a moniker, so…

                                        2. 1

                                          Elasticsearch vs. ElasticSearch is also a fun one :)

                                          1. 1

                                            I remember SystemD being the right way to typeset it. At least, that’s what everyone seemed to be using at the start. Given I have had zero interest in the project since then (except using it on arch linux and finding it… inadequate for my purposes), I haven’t been updated with the systemd-official way of calling it. I do dislike systemd, but I think it’s silly to call everyone who hasn’t kept up to date with the name “detractors”.

                                            Edit: Elsewhere in the thread there’s an implicit comparison between using SystemD and using Micro$oft. But I don’t see how you can compare those things. The first is a reasonably proper name for it (System Daemon, or whatever), the other is a jab at the FUD and EEE tactics of the corporation.

                                        1. 4

                                          Changing the defaults is a very slippery thing to do.

                                          Once I prepared a shell script with set -euo pipefail near the beginning, to save time during future modifications. Later, another guy wanted to resolve some bigger problem, and changing this shell script was one step of many. He spent 2 hours fighting with the shell script because he wanted to do it quickly, and re-checking his OS because he thought the shell script “is behaving in a strange way”. He simply didn’t notice the “set -euo pipefail” directive.

                                          So, the end result is that instead of doing something useful, I made things worse. I think that using the modifications from this blog post in a corporate setting can make things worse as well.
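                                          For anyone who hasn’t run into it, what those flags change can be sketched in a few lines (a minimal example; the paths and variable names are hypothetical):

```shell
#!/usr/bin/env bash
# Sketch of what 'set -euo pipefail' changes; paths and names are hypothetical.
set -euo pipefail

# -e: the script aborts on the first failing command instead of continuing.
# -u: expanding an unset variable is a fatal error, which catches typos.
# -o pipefail: a pipeline fails if ANY stage fails, not just the last one.

prefix="/tmp/demo"
echo "cleaning ${prefix}/usr"   # a typo like ${prefiix} would abort here instead of expanding to ""

# Without pipefail this pipeline would "succeed" even if grep failed,
# because only the last stage's exit code (head) would be checked:
printf 'a\nb\n' | grep a | head -n 1
```

                                          This is exactly why scripts with these flags feel like they “behave strangely” to someone expecting stock shell semantics: failures that would normally be ignored become hard stops.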

                                          1. 6

                                            I don’t agree that you made things worse. Bash in particular has terrible defaults for robust scripting, and making those failure modes explicit and immediate is a net improvement. “wanted to do it quickly” is the root of many programming errors that come back and bite you in the ass later.

                                            1. 5

                                              While I appreciate your point, if you don’t put set -eu in the shell script someone is going to lose two hours because of a typo or other small error leading to an undefined variable, or because the script keeps running after an error in ways that weren’t appreciated.

                                              Never mind situations like rm -rf "$prefiix/usr" 😬

                                              1. 2

                                                I get to see the underside of the Oracle experience here – basically, how we deliver the bits to the hosting machine – and it’s a tangled and awful beast. I’m glad consumer & digital has seen the back of it.

                                                1. 4

                                                  I’m playing in my first ever disc golf tournament.

                                                  The weather forecast is not nice: we’ll probably have both rain and strong winds, and my goal/expectations are low: to not be in the last five players :) But for me, the more important than the result is to have fun and enjoy the weekend in nature, far away from my computer.

                                                  1. 1

                                                    I don’t do it competitively, but I really enjoy disc golf for fun. I was hoping to get a round in this weekend, but I don’t think it’s in the cards.

                                                    1. 1

                                                      I don’t do it competitively

                                                      Neither did I, until this weekend. And I must tell you, it is great fun and was a very nice experience for me. I will definitely do it again, and I can only recommend it to you if there are tournaments relatively near where you live.

                                                      my goal/expectations are low: to not be in the last five players :)

                                                      I managed to achieve my goal — I was 6th from behind :)

                                                  1. 5

                                                    Packing and moving. It’s FUN.

                                                    (note: Not actually)

                                                    1. 2

                                                      That’s one of the reasons I’m back in the gym. Seems like I help a friend move a ton of furniture only when my muscles have left me. Always painful. Next time gonna be easy mode. :)

                                                      1. 2

                                                        Now, that’s friendship!

                                                        I’m sure your interest in the gym will fizzle out before another friend decides to move. ;)

                                                        1. 1

                                                          “Now, that’s friendship!”

                                                          Appreciate the compliment but it’s normal down here. People often help their friends and family move in the South. Maybe it’s a cultural thing.

                                                          “I’m sure your interest in the gym will fizzle out before another friend decides to move. ;)”

                                                          It’s happened two or three times already. This time I’ll be strong as hell. (pause) Barring my procrastination and moving on to other things habits… (pause) Maybe…

                                                    1. 2

                                                      Kinda related… is there a way to get my email – on a domain I control – to be delivered to two different mail providers? I’d love to try out FastMail, but not at the cost of losing my existing mail service during the trial period.

                                                      1. 2

                                                        At a protocol level, no (redundancy of MX records is for when one server is down) — but most mail providers have some mechanism of transparently forwarding messages which should allow this kind of trial?
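                                                        The reason for the “no” is visible in how MX records are defined; in a hypothetical zone file, listing two providers gives you failover, not duplicate delivery:

```
; Hypothetical zone for example.com: two MX records provide redundancy,
; not duplication. A sending server tries the lowest preference value
; first and only falls back to the other host if the first is unreachable.
example.com.  3600  IN  MX  10 mx.provider-a.example.
example.com.  3600  IN  MX  20 mx.provider-b.example.
```

                                                        So to trial a second provider you need duplication at the application layer, i.e. forwarding from whichever provider actually receives the mail.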

                                                        1. 2

                                                          but most mail providers have some mechanism of transparently forwarding messages which should allow this kind of trial?

                                                          This is exactly what I do when trialing new providers I am interested in. This way no mail is lost and I can still play around with the new provider without fear of losing anything.

                                                        2. 1

                                                          I think you can try out a mail provider on a subdomain. I don’t remember if I ever tried that myself though lol

                                                        1. 1

                                                          I avoided it because I started trying to parse it out to port it to fish.

                                                          1. 2

                                                            I would like to have keybase, personally. I think it’s a fantastic concept.

                                                            1. 16

                                                              Let us not forget this: http://www.nohello.com/

                                                              1. 6

                                                                Personally speaking, I say “hi” and “hello” very often on IRC, but I never expect an answer to these greetings. I answer others’ greetings very infrequently. I’m only notified on explicit mentions, so I’m not annoyed by short sympathetic messages. I really expect other people to do the same.

                                                                1. 2

                                                                  I agree with the main point of nohello.com, but is this really true?

                                                                  Typing is much slower than talking.

                                                                  I think I type faster than I talk (when not using a phone).

                                                                    1. 1

                                                                      Totally agree, but using Blogger makes this weird on mobile. It relies on the desktop index page to show the full article, but mobile shows a confusing generated summary.

                                                                    1. 2

                                                                      The nice thing is that since this is someone I report to, speaking publicly, I can actually comment. While any large organization has some variations in the details… this is a pretty solid representation of how we do software.

                                                                      I work in the org building the tools here. We’re always hiring, if you want to help us build the tools that build the tools that build AWS so you can build the internet.

                                                                      (It’s fun)

                                                                      1. 1

                                                                        is it a good thing for AWS to be equated with “the internet”?

                                                                        1. 1

                                                                          Do you hire in Dallas?

                                                                        1. 2

                                                                          I got 5 paragraphs in, and it lost me.

                                                                          Stability through avoidance of change just doesn’t work in practice. Your system will change, constantly, through contact with the world. If you are hoping to hold back the tide by sticking with what worked before, eventually you will lose that battle.

                                                                          Resilient systems change constantly. They embrace it. From the linked article, and the post it references… Debian doesn’t. That’s not healthy.

                                                                          1. 65

                                                                            In the Mastodon universe, technically-minded users are encouraged to run their own node. Sounds good. To install a Mastodon node, I am instructed to install recent versions of

                                                                            • Ruby
                                                                            • Node.JS
                                                                            • Redis
                                                                            • PostgreSQL
                                                                            • nginx

                                                                            This does not seem like a reasonable set of dependencies to me. In particular, using two interpreted languages, two databases, and a separate web server presumably acting as a frontend, all seems like overkill. I look forward to when the Mastodon devs are able to tame this complexity, and reduce the codebase to something like a single (ideally non-interpreted) language and a single database. Or, even better, a single binary that manages its own data on disk, using e.g. embedded SQLite. Until then, I’ll pass.

                                                                            1. 22

                                                                              Totally agree. I heard Pleroma has fewer dependencies, though it looks like it depends a bit on which OS you’re running.

                                                                              1. 11

                                                                                Compared to Mastodon, Pleroma is a piece of cake to install; I followed their tutorial and had an instance set up and running in about twenty minutes on a fresh server.

                                                                                From memory, all I needed to install was Nginx, Elixir and Postgres, two of which were already set up and configured for other projects.

                                                                                My server is a quad core ARMv7 with 2GB RAM and averages maybe 0.5 load when I hit heavy usage… it does transit a lot of traffic though; since the 1st of January my server has pushed out 530GB of traffic.

                                                                                1. 2

                                                                                  Doesn’t Elixir require Erlang to run?

                                                                                  1. 2

                                                                                    It does. Some Linux distributions will require adding the Erlang repo before installing Elixir, but most seem to have it already included: https://elixir-lang.org/install.html#unix-and-unix-like meaning it’s a simple one-line command to install, e.g. pkg install elixir

                                                                                2. 7

                                                                                  I’m not a huge social person, but I had only heard of Pleroma without investigating it. After looking a bit more, I don’t really understand why someone would choose Mastodon over Pleroma. They do basically the same thing, but Pleroma uses fewer resources. Does anyone who chose Mastodon over Pleroma have a reason why?

                                                                                  1. 6

                                                                                    Mastodon has more features right now. That’s about it.

                                                                                    1. 4

                                                                                      Pleroma didn’t have releases for a looong time. They finally started down that route. They also don’t have official Docker containers, and config changes require recompiling (just due to the way they have Elixir and their builds set up). It was a pain to write my Docker container for it.

                                                                                      Pleroma also lacks moderation tools (you need to add blocked domains to the config); it doesn’t allow remote follow/interactions (if you see a status elsewhere on Mastodon, you can click remote-reply, it will ask your server name, redirect you to your server and then you can reply to someone you don’t follow); and it’s missing a couple of other features.

                                                                                      Misskey is another alternative that looks promising.

                                                                                      1. 2

                                                                                        it doesn’t allow remote follow/interactions (if you see a status elsewhere on Mastodon, you can click remote-reply, it will ask your server name, redirect you to your server and then you can reply to someone you don’t follow)

                                                                                        I think that might just be the Pleroma FE - if I’m using the Mastodon FE, I get the same interaction on my Pleroma instance replying to someone on a different instance as when I’m using octodon.social (unless I’m radically misunderstanding your sentence)

                                                                                        1. 1

                                                                                          Thanks, this is a really great response. I actually took a quick look at their docs and saw they didn’t have any FreeBSD guide set up, so I stopped looking. I use Vultr’s $2.50 FreeBSD vps and I didn’t feel like fiddling with anything that particular night. I wish they did have an official docker container for it.

                                                                                        2. 3

                                                                                        Pleroma has a bunch of fiddly issues - it doesn’t do streaming properly (bitlbee-mastodon won’t work), the UI doesn’t have any “compose DM” functionality that I can find, I had huge problems with a long password, etc. But they’re mostly minor annoyances rather than show stoppers for now.

                                                                                        3. 7

                                                                                          It doesn’t depend - they’ve just gone further to define what to do for each OS!

                                                                                          1. 4

                                                                                            I guess it’s mainly the ImageMagick dependency for OpenBSD that got me thinking otherwise.

                                                                                            OpenBSD

                                                                                            • elixir
                                                                                            • gmake
                                                                                            • ImageMagick
                                                                                            • git
                                                                                            • postgresql-server
                                                                                            • postgresql-contrib

                                                                                            Debian Based Distributions

                                                                                            • postgresql
                                                                                            • postgresql-contrib
                                                                                            • elixir
                                                                                            • erlang-dev
                                                                                            • erlang-tools
                                                                                            • erlang-parsetools
                                                                                            • erlang-xmerl
                                                                                            • git
                                                                                            • build-essential
                                                                                            1. 3

                                                                                              imagemagick is purely optional. The only hard dependencies are postgresql and elixir (and some reverse proxy like nginx).

                                                                                              1. 4

                                                                                                imagemagick is strongly recommended though, so you can enable the Mogrify filter on uploads and actually strip EXIF data

                                                                                          2. 3

                                                                                            Specifically, quoting from their readme:

                                                                                            Pleroma is written in Elixir, high-performance and can run on small devices like a Raspberry Pi.

                                                                                            As to the DB, they seem to use Postgres.

                                                                                            The author of the app posted his list of differences, but I’m not sure if it’s complete and what it really means. I haven’t found a better comparison yet, however.

                                                                                          3. 16

                                                                                            Unfortunately I have to agree. I self-host 99% of my online services, and sysadmin for a living. I tried mastodon for a few months, but its installation and management process was far more complicated than anything I’m used to. (I run everything on OpenBSD, so the docker image isn’t an option for me.)

                                                                                            In addition to getting NodeJS, Ruby, and all the other dependencies installed, I had to write 3 separate rc files to run 3 separate daemons to keep the thing running. Compared to something like Gitea, which just requires running a single Go executable and a Postgres DB, it was a massive amount of toil.

                                                                                            The mastodon culture really wasn’t a fit for me either. Even in technical spaces, there was a huge amount of politics/soapboxing. I realized I hadn’t even logged in for a few weeks so I just canned my instance.

                                                                                            Over the past year I’ve given up on the whole social network thing and stick to Matrix/IRC/XMPP/email. I’ve been much happier as a result and there’s a plethora of quality native clients (many are text-based). I’m especially happy on Matrix now that I’ve discovered weechat-matrix.

                                                                                            I don’t mean to discourage federated projects like Mastodon though - I’m always a fan of anything involving well-known URLs or SRV records!

                                                                                            1. 11

                                                                                              Fortunately the “fediverse” is glued together by a standard protocol (ActivityPub) that is quite simple, so if one implementation (e.g. Mastodon) doesn’t suit someone’s needs it’s not a big problem - just search for a better one, and it still interconnects with the rest of the world.

                                                                                              (I’ve written small proof-of-concept ActivityPub clients and servers; they work and federate, see also this).

                                                                                              For me the more important problems are not implementation issues with one server but rather design issues within the protocol. For example, established standards such as e-mail or XMPP have a way to delegate responsibility for running a server of a particular protocol while still using the bare domain for user identities. In e-mail that is MX records; in XMPP it’s DNS SRV records. ActivityPub doesn’t mandate anything like it, and even though Mastodon tries to provide something that would fix that issue (WebFinger), other implementations are not interested in it (e.g. Pleroma). And then one is left with instances such as “social.company.com”.

                                                                                              For example - Pleroma’s developer’s id is lain@pleroma.soykaf.com.
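                                                                                              The delegation gap can be illustrated side by side (hypothetical records; the WebFinger URL shape follows RFC 7033, which is what Mastodon implements):

```
; E-mail: MX records delegate mail for a bare domain to another host.
example.com.                   IN MX  10 mail.provider.example.
; XMPP: SRV records do the same for chat.
_xmpp-client._tcp.example.com. IN SRV 5 0 5222 xmpp.provider.example.
; ActivityPub itself defines no DNS-level equivalent; Mastodon layers
; WebFinger (RFC 7033) over HTTPS to resolve user@example.com:
;   GET https://example.com/.well-known/webfinger?resource=acct:user@example.com
```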

                                                                                              1. 16

                                                                                                This is a completely reasonable and uncontroversial set of dependencies for a web app. Some of the largest web apps on the Internet run this stack. That is a good thing, because when Fediverse nodes need to scale there are well-understood ways of doing it.

                                                                                                Success in social networking is entirely about network effects and that means low barrier to entry is table stakes. Yeah, it’d be cool if someone built the type of node you’re talking about, but it would be a curiosity pursued only by the most technical users. If that were the barrier to entry for the network, there would be no network.

                                                                                                1. 39

                                                                                                  This is a completely reasonable and uncontroversial set of dependencies for a web app. Some of the largest web apps on the Internet run this stack.

                                                                                                  Yes, but not for a web app I’m expected to run on my own time, for fun.

                                                                                                  1. 6

                                                                                                    I’m not sure that’s the exact expectation, that we should all run our own single-user Mastodon instances. I feel like the expectation is that a sysadmin with enough knowledge will maintain an instance for many users. This seems to be the norm.

                                                                                                    That, or you go to Mastohost and pay someone else for your own single-user instance.

                                                                                                    1. 2

                                                                                                      You’re not expected to do that is my point.

                                                                                                    2. 16

                                                                                                      completely reasonable and uncontroversial

                                                                                                      Not true. Many people are complaining about the unmanaged proliferation of dependencies and tools. Most projects of this size and complexity don’t need more than one language, nor bulky JavaScript frameworks, nor both caching and database services.

                                                                                                      This is making it difficult to package Mastodon and Pleroma for Debian and Ubuntu, and making it harder for people to make the service truly decentralized.

                                                                                                      1. 1

                                                                                                        I’m not going to defend the reality of what NPM packaging looks like right now because it sucks but that’s the ecosystem we’re stuck with for the time being until something better comes along. As with social networks, packaging systems are also about network effects.

                                                                                                        But you can’t deny that this is the norm today. Well, you can, but you would be wrong.

                                                                                                        This is making difficult to package Mastodon and Pleroma in Debian and Ubuntu

                                                                                                        I’m sure it is, because dpkg is a wholly unsuitable tool for this use-case. You shouldn’t even try. Anyone who doesn’t know how to set these things up themselves should use the Docker container.

                                                                                                        1. 1

                                                                                                          I think the most difficult part of the Debian packaging would be the js deps, correct?

                                                                                                          1. 3

                                                                                                            Yes and no. Unvendorizing dependencies is done mostly for security and requires a lot of work, depending on the number of dependencies. Sometimes JS libraries don’t create serious security concerns because they are only run client-side and can be left in vendorized form.

                                                                                                            The Ruby libraries can also be difficult to unvendorize because many upstream developers often introduce breaking changes. They care little about backward compatibility, packaging and security.

                                                                                                            Yet server-side code is more security-critical and that becomes a problem. And it’s getting even worse with new languages that strongly encourage static linking and vendorization.

                                                                                                            1. 1

                                                                                                              I can’t believe even Debian adopted the Googlism of “vendor” instead of “bundle”.

                                                                                                              That aside, Rust? In Mastodon? I guess the Ruby gems it requires would be the bigger problem?

                                                                                                              1. 2

                                                                                                                The use of the word is mine: I just heard people using “vendor” often. It’s not “adopted by Debian”.

                                                                                                                I don’t understand the second part: maybe you misread Ruby for Rust in my text?

                                                                                                                1. 1

                                                                                                                  No, I really just don’t know what Rust has to do with Mastodon. There’s Rust in there somewhere? I just didn’t notice.

                                                                                                                  1. 2

                                                                                                                    AFAICT there is no Rust in the repo (at least at the moment).

                                                                                                                    1. 1

                                                                                                                      Wow, I’m so dumb, I keep seeing Rust where there is none and misunderstanding you, so sorry!

                                                                                                        2. 7

                                                                                                          Great. Then have two implementations, one for users with large footprints, and another for casual users with five friends.

                                                                                                          It is a reasonable stack if you will devote 1+ servers to the task. Not for something you might want to run on your RPi next to your IRC server (a single piece of software in those stacks, too)

                                                                                                          1. 4

                                                                                                            Having more than one implementation is healthy.

                                                                                                            1. 2

                                                                                                              Of course it is. Which is why it’s a reasonable solution to the large stack required by the current primary implementation.

                                                                                                        3. 6

                                                                                                          There’s really one database and one cache there. I mean, I guess technically Redis is a database, but it’s almost always used for caching and not as a DB layer like PSQL.

                                                                                                          You can always write your own server if you want in whatever language you choose if you feel like Ruby/Node is too much. Or, like that other guy said, you can just use Docker.

                                                                                                          1. 4

                                                                                                            There’s really one database and one cache there. I mean, I guess technically Redis is a database, but it’s almost always used for caching . . .

                                                                                                            A project that can run on a single instance of the application binary absolutely does not need a cache. Nor does it need a pub/sub or messaging system outside of its process space.

                                                                                                            1. 2

                                                                                                              It’s more likely that Redis is being used for pub/sub messaging and job queuing.
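                                                                                                              A rough sketch of those two uses, with hypothetical key and channel names (not anything Mastodon actually mandates; requires a running Redis): a list works as a job queue, and PUBLISH/SUBSCRIBE gives fan-out notifications.

                                                                                                              ```shell
                                                                                                              # Job queue: the web process pushes work, a worker blocks until it arrives.
                                                                                                              redis-cli LPUSH jobs:default '{"task": "deliver_status", "id": 42}'
                                                                                                              redis-cli BRPOP jobs:default 0

                                                                                                              # Pub/sub: streaming-API listeners get notified without polling.
                                                                                                              redis-cli SUBSCRIBE timeline:home          # in one session
                                                                                                              redis-cli PUBLISH timeline:home 'update'   # in another
                                                                                                              ```

                                                                                                              Neither pattern needs the durability guarantees of the PostgreSQL side, which is why Redis is a natural fit for it.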

                                                                                                            2. 11

                                                                                                              This does not seem like a reasonable set of dependencies to me

                                                                                                              Huh. I must be just used to this, then. At work I need to use or at least somewhat understand,

                                                                                                              • Postgres
                                                                                                              • Python 2
                                                                                                              • Python 3
                                                                                                              • Django
                                                                                                              • Ansible
                                                                                                              • AWS
                                                                                                              • Git (actually, Mercurial, but this is my choice to avoid using git)
                                                                                                              • Redis
                                                                                                              • Concourse
                                                                                                              • Docker
                                                                                                              • Emacs (My choice, but I could pick anything else)
                                                                                                              • Node
                                                                                                              • nginx
                                                                                                              • Flask
                                                                                                              • cron
                                                                                                              • Linux
                                                                                                              • RabbitMQ
                                                                                                              • Celery
                                                                                                              • Vagrant (well, optional, I actually do a little extra work to have everything native and avoid a VM)
                                                                                                              • The occasional bit of C code

                                                                                                              and so on and so forth.

                                                                                                              Do I just work at a terrible place or is this a reasonable amount of things to have to deal with in this business? I honestly don’t know.

                                                                                                              To me Mastodon’s requirements seem like a pretty standard Rails application. I’m not even sure why Redis is considered another db – it seems like an in-memory cache with optional disk persistence is a different thing than a persistent-only RDBMS. Nor do I even see much of a problem with two interpreted languages – the alternative would be to have js everywhere, since you can’t have Python or Ruby in a web browser, and js just isn’t a pleasant language for certain tasks.

                                                                                                              1. 38

                                                                                                                I can work with all that and more if you pay me. For stuff I’m running at home on my own time, fuck no. When I shut my laptop to leave the office, it stays shut until I’m back again in the morning, or I get paged.

                                                                                                                1. 2

                                                                                                                  So is Mastodon unusual for a Rails program? I wonder if it’s simply unreasonable to ask people to run their own Rails installation. I honestly don’t know.

                                                                                                                  Given the amount of Mastodon instances out there, though, it seems that most people manage. How?

                                                                                                                  1. 4

                                                                                                                    That looks like a bog-standard, very minimal rails stack with a JS frontend. I’m honestly not sure how one could simplify it below that without dropping the JS on the web frontend and any caching, both of which seem like a bad idea.

                                                                                                                    1. 7

                                                                                                                      There’s no need to require node. The compilation should happen at release time, and the release download tarball should contain all the JS you need.

                                                                                                                      1. -3

                                                                                                                        lol “download tarball”, you’re old, dude.

                                                                                                                        1. 7

                                                                                                                          Just you wait another twenty years, and you too will be screaming at the kids to get off your lawn.

                                                                                                                      2. 2

                                                                                                                        You could remove Rails and use something Node-based for the backend. I’m not claiming that’s a good idea (in fact it’s probably not very reasonable), but it’d remove that dependency?

                                                                                                                        1. 1

                                                                                                          It could just have been a Go or Rust binary, or something along those lines, with an embedded DB like Bolt or SQLite.

                                                                                                          Edit: though the reason I ignore Mastodon is the same as cullum’s: the culture doesn’t seem interesting, at least on mastodon.social.

                                                                                                                        2. 4

                                                                                                                          If security or privacy focused, I’d try a combo like this:

                                                                                                                          1. Safe language with minimal runtime that compiles to native code and Javascript. Web framework in that language for dynamic stuff.

                                                                                                                          2. Lwan web server for static content.

                                                                                                                          3. SQLite for database.

                                                                                                                          4. Whatever is needed to combine them.

                                                                                                                          Combo will be smaller, faster, more reliable, and more secure.

                                                                                                                          1. 2

                                                                                                                            I don’t think this is unusual for a Rails app. I just don’t want to set up or manage a Rails app in my free time. Other people may want to, but I don’t.

                                                                                                                        3. 7

                                                                                                                          I don’t think it’s reasonable to compare professional requirements and personal requirements.

                                                                                                                          1. 4

                                                                                                                            The thing is, Mastodon is meant to be used on-premise. If you’re building a service you host, knock yourself out! Use 40 programming languages and 40 DBs at the same time. But if you want me to install it, keep it simple :)

                                                                                                                            1. 4

                                                                                                                              Personally, setting up all that seems like too much work for a home server, but maybe I’m just lazy. I had a similar issue when setting up Matrix and ran into an error message that I just didn’t have the heart to debug, given the amount of moving parts which I had to install.

                                                                                                                              1. 3

                                                                                                                If you can use Debian, try installing Synapse via their repository; it has worked really nicely for me so far: https://matrix.org/packages/debian/

                                                                                                                                1. 1

                                                                                                                                  Reading other comments about the horror that is Docker, it is a wonder that you dare propose to install an entire OS only to run a Matrix server. ;)

                                                                                                                                  1. 3

                                                                                                                    I’m not completely sure which parts of your comment are sarcasm :)

                                                                                                                              2. 0

                                                                                                                Your list there has lots of tools with overlapping functionality, which seems like pointless redundancy. Just pick Flask or Django, Python 3 or Node, Docker or Vagrant; make a choice and remove the useless and redundant things.

                                                                                                                                1. 3

                                                                                                                                  We have some Django applications and we have some Flask applications. They have different lineages. One we forked and one we made ourselves.

                                                                                                                              3. 6

                                                                                                                Alternatively, you can install it using Docker, as described here.

                                                                                                                                1. 32

                                                                                                                                  I think it’s kinda sad that the solution to “control your own toots” is “give up control of your computer and install this giant blob of software”.

                                                                                                                                  1. 9

                                                                                                                                    Piling another forty years of hexadecimal Unix sludge on top of forty years of slightly different hexadecimal Unix sludge to improve our ability to ship software artifacts … it’s an aesthetic nightmare. But I don’t fully understand what our alternatives are.

                                                                                                                                    I’ve never been happier to be out of the business of having to think about this in anything but the most cursory detail.

                                                                                                                                    1. 11

                                                                                                                      I mean, how is that different from running any binary at the end of the day? Unless you’re compiling everything from scratch on the machine starting from the kernel, running Mastodon from Docker is really no different. And it’s not like anybody is stopping you from making your own Dockerfile or setting things up directly on your machine by hand. The original complaint was that it’s too much work, and if that’s the case you have a simple packaged solution. If you don’t like it, then roll up your sleeves and do it by hand. I really don’t see the problem here, I’m afraid.

                                                                                                                                      1. 11

                                                                                                                                        “It’s too much work” is a problem.

                                                                                                                                        1. 5

                                                                                                                                          Unless you’re compiling everything from scratch on the machine starting from the kernel

                                                                                                                                          I use NixOS. I have a set of keys that I set as trusted for signature verification of binaries. The binaries are a cache of the build derivation, so I could theoretically build the software from scratch, if I wanted to, or to verify that the binaries are the same as the cached versions.

                                                                                                                                          1. 2

                                                                                                                                            Right, but if you feel strongly about that then you can make your own Dockerfile from source. The discussion is regarding whether there’s a simple way to get an instance up and running, and there is.
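                                                                                                                            For illustration, a hand-rolled image for a Rails-style app doesn’t have to be much. A minimal sketch might look like this (the base image tag, port, and paths are assumptions, not Mastodon’s actual Dockerfile):

                                                                                                                            ```dockerfile
                                                                                                                            # Illustrative only; versions and paths are made up.
                                                                                                                            FROM ruby:3.2-slim
                                                                                                                            WORKDIR /app
                                                                                                                            # Copy the lockfiles first so the gem layer is cached across code changes.
                                                                                                                            COPY Gemfile Gemfile.lock ./
                                                                                                                            RUN bundle install
                                                                                                                            COPY . .
                                                                                                                            EXPOSE 3000
                                                                                                                            CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
                                                                                                                            ```

                                                                                                                            Building from a Dockerfile you wrote yourself at least lets you see exactly what goes into the image.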

                                                                                                                                            1. 3

                                                                                                                                              Docker containers raise a lot of questions though, even if you use a Dockerfile:

                                                                                                                                              • What am I running?
                                                                                                                                              • Which versions am I running?
                                                                                                                                              • Do the versions have security vulnerabilities?
                                                                                                                                              • Will I be able to build the exact same version in 24 months?

                                                                                                                              Nix answers these pretty well and fairly accurately.

                                                                                                                                          2. 2

                                                                                                                                            Unless you’re compiling everything from scratch on the machine starting from the kernel.

                                                                                                                                            You mean starting with writing a bootstrapping compiler in assembly, then writing your own full featured compiler and compiling it in the bootstrapping compiler. Then moving on to compiling the kernel.

                                                                                                                                            1. 1

                                                                                                                                              No no, your assembler could be compromised ;)

                                                                                                                                              Better write raw machine code directly onto the disk. Using, perhaps, a magnetized needle and a steady hand, or maybe a butterfly.

                                                                                                                                              1. 2

                                                                                                                                                My bootstrapping concept was having the device boot a program from ROM that takes in the user-supplied, initial program via I/O into RAM. Then passes execution to it. You enter the binary through one of those Morse code things with four buttons: 0, 1, backspace, and enter. Begins executing on enter.

                                                                                                                                                Gotta input the keyboard driver next in binary to use a keyboard. Then the display driver blind using the keyboard. Then storage driver to save things. Then, the OS and other components. ;)

                                                                                                                                              2. 1

                                                                                                                                                If I deploy three Go apps on top of a bare OS (picked Go since it has static binaries), and the Nginx server in front of all 3 of them uses OpenSSL, then I have one OpenSSL to patch whenever the inevitable CVE rolls around. If I deploy three Docker container apps on top of a bare OS, now I have four OpenSSLs to patch - three in the containers and one in my base OS. This complexity balloons very quickly which is terrible for user control. Hell, I have so little control over my one operating system that I had to carefully write a custom tool just to make sure I didn’t miss logfile lines in batch summaries created by cron. How am I supposed to manage four? And three with radically different tooling and methodology to boot.
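                                                                                                                                The single-OpenSSL setup described above is just nginx terminating TLS once and proxying plain HTTP to each binary; hostnames, ports, and paths below are made up:

                                                                                                                                ```nginx
                                                                                                                                # One TLS stack on the host; the Go apps never link OpenSSL themselves.
                                                                                                                                server {
                                                                                                                                    listen 443 ssl;
                                                                                                                                    server_name app1.example.com;
                                                                                                                                    ssl_certificate     /etc/ssl/app1.pem;
                                                                                                                                    ssl_certificate_key /etc/ssl/app1.key;
                                                                                                                                    location / {
                                                                                                                                        proxy_pass http://127.0.0.1:8081;  # plain HTTP to the first app
                                                                                                                                    }
                                                                                                                                }
                                                                                                                                # Repeat a server block per app, all sharing the one OpenSSL.
                                                                                                                                ```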

                                                                                                                                And Docker upstream, AFAIK, has provided nothing to help with the security problem, which is probably why known security vulnerabilities in Docker images are rampant. If they have, I would like to know, because if it’s decent I would switch to it immediately. See this blog post for more about this problem (especially the links) and how we “solved” it in pump.io (spoiler: it’s a giant hack).

                                                                                                                                                1. 3

                                                                                                                                  That’s not how any of this works. You package the bare minimum needed to run the app in the Docker container, then you front all your containers with a single Nginx server that handles SSL. Meanwhile, there are plenty of great tools, like Dokku, for managing Docker-based infrastructure. Here’s how you provision a server using Let’s Encrypt with Dokku:

                                                                                                                                                  sudo dokku plugin:install https://github.com/dokku/dokku-letsencrypt.git
                                                                                                                                  dokku letsencrypt:auto-renew
                                                                                                                                                  

                                                                                                                                  Viewing logs isn’t rocket science either:

                                                                                                                                                  dokku logs myapp
                                                                                                                                                  
                                                                                                                                                  1. 1

                                                                                                                                    OK, so OpenSSL was a bad example. Fair enough. But I think my point still stands - you’ll tend to have at least some duplicate libraries across Docker containers. There’s tooling around managing security vulnerabilities in language-level dependencies; see for example Snyk. But Docker imports the entire native package manager into the “static binary”, and I don’t know of any tooling that can track problems in Docker images like that. I guess I could use Clair through Quay, but… I don’t know. It doesn’t feel like as nice or as polished a solution somehow. As an image maintainer I take on a big manual burden of keeping up with native security updates in addition to those my application actually directly needs, when normally I could rely on admins to do that, probably with lots of automation.

                                                                                                                                                    1. 3

                                                                                                                                                      you’ll tend to have at least some duplicate libraries across Docker containers

                                                                                                                                                      That is literally the entire point. Application dependencies must be separate from one another, because even on a tight-knit team keeping n applications in perfect lockstep is impossible.

                                                                                                                                                      1. 1

                                                                                                                                                        OS dependencies are different than application dependencies. I can apply a libc patch on my Debian server with no worry because I know Debian works hard to create a stable base server environment. That’s different than application dependencies, where two applications are much more likely to require conflicting versions of libraries.

                                                                                                                                                        Now, I run most of my stuff on a single server so I’m very used to a heterogeneous environment. Maybe that’s biasing me against Docker. But isn’t that the usecase we’re discussing here anyway? How someone with just a hobbyist server can run Mastodon?

                                                                                                                                        Thinking about this more I feel like a big part of what bothers me about Docker, and therefore about Clair, is that there’s no package manifest. Dockerfile does not count, because that’s not actually a package manifest, it’s just a list of commands. I can’t e.g. build a lockfile format on top of that, which is what tools like Snyk analyze. Clair is the equivalent of having to run npm install and then go trawling through node_modules looking for known vulnerable code instead of just looking at the lockfile. More broadly, because Docker lacks any notion of a package manifest, it seems to me that while Docker images are immutable once built, the build process that leads you there cannot be made deterministic. This is what makes it hard to keep track of the stuff inside them. I will have to think about this more - as I write this comment I’m wondering whether my complaints about duplicated libraries and security tracking are an instance of the XY problem or really are separate things in my mind.

                                                                                                                                                        Maybe I am looking for something like Nix or Guix inside a Docker container. Guix at least can export Docker containers; I suppose I should look into that.

                                                                                                                                                        1. 2

                                                                                                                                                          OS dependencies are different than application dependencies.

                                                                                                                                                          Yes, agreed.

                                                                                                                                                          Thinking about this more I feel like a big part of what bothers me about Docker, and therefore about Clair, is that there’s no package manifest. Dockerfile does not count, because that’s not actually a package manifest, it’s just a list of commands. I can’t e.g. build a lockfile format on top of that, which is what tools like Snyk analyze.

                                                                                                                                                          You don’t need a container to tell you these things. Application dependencies can be checked for exploits straight from the code repo, i.e. brakeman. Both the Gemfile.lock and yarn.lock are available from the root of the repo.

                                                                                                                                          The container artifacts are most likely built automatically for every merge to master, and that entails doing a full system update from the apt repository. So in reality, while not as deterministic as the lockfiles, the system deps in a container are likely to be significantly fresher than those on a regular server environment.
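                                                                                                                                          Concretely, the repo-level checks mentioned here are one-liners; the tool names are the usual Ruby/JS options, not something any project in this thread prescribes, and they need the respective toolchains installed:

                                                                                                                                          ```shell
                                                                                                                                          gem install brakeman bundler-audit
                                                                                                                                          bundle-audit check --update   # known CVEs against Gemfile.lock
                                                                                                                                          brakeman                      # static analysis of the Rails code itself
                                                                                                                                          yarn audit                    # known CVEs against yarn.lock
                                                                                                                                          ```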

                                                                                                                                                      2. 1

                                                                                                                                                        You’d want to track security vulnerabilities outside your images though. You’d do it at dev time, and update your Dockerfile with updated dependencies when you publish the application. Think of Docker as just a packaging mechanism. It’s same as making an uberjar on the JVM. You package all your code into a container, and run the container. When you want to make updates, you blow the old one away and run a new one.
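
                                                                                                                                                         As a sketch of that “uberjar” analogy, here is a minimal Dockerfile for a hypothetical Ruby app (base image, versions and commands are illustrative, not from any project in this thread):

                                                                                                                                                         ```dockerfile
                                                                                                                                                         # Hypothetical app; pin dependencies via the lockfile, then copy the code in.
                                                                                                                                                         FROM ruby:2.6-slim
                                                                                                                                                         WORKDIR /app
                                                                                                                                                         COPY Gemfile Gemfile.lock ./
                                                                                                                                                         RUN bundle install --deployment
                                                                                                                                                         COPY . .
                                                                                                                                                         CMD ["bundle", "exec", "puma"]
                                                                                                                                                         ```

                                                                                                                                                         To update, you rebuild the image with fresh dependencies and replace the running container, rather than patching it in place.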

                                                                                                                                                2. 4

                                                                                                                                                  I have only rarely used Docker, and am certainly no booster, so keep that in mind as I ask this.

                                                                                                                                                  From the perspective of “install this giant blob of software”, do you see a docker deployment being that different from a single large binary? Particularly the notion of the control that you “give up”, how does that differ between Docker and $ALTERNATIVE?

                                                                                                                                                  1. 14

                                                                                                                                                    Ideally one would choose door number three, something not so large and inauditable. The complaint is not literally about Docker, but the circumstances which have resulted in docker being the most viable deployment option.

                                                                                                                                                  2. 2

                                                                                                                                                     You have the Dockerfile and can reconstruct the image. You haven’t given up control.

                                                                                                                                                    1. 5

                                                                                                                                                      Is there a youtube video I can watch of somebody building a mastodon docker image from scratch?

                                                                                                                                                      1. 1

                                                                                                                                                        I do not know of one.

                                                                                                                                                3. 3

                                                                                                                                                   I totally agree as well, and I wish authors would s/Mastodon/Fediverse/ in their articles. As others have noted, Pleroma is another good choice, and others are getting into the game - Nextcloud, for instance, added fediverse node support in its most recent release.

                                                                                                                                                  I tried running my own instance for several months, and it eventually blew up. In addition to the large set of dependencies, the system is overall quite complex. I had several devs from the project look at my instance, and the only thing they could say is it was a “back-end problem” (My instance had stopped getting new posts).

                                                                                                                                                  I gave up and am now using somebody else’s :) I love the fediverse though, it’s a fascinating place.

                                                                                                                                                  1. 4

                                                                                                                                                    I just use the official Docker containers. The tootsuite/mastodon container can be used to launch web, streaming, sidekiq and even database migrations. Then you just need an nginx container, a redis container, a postgres container and an optional elastic search container. I run it all on a 2GB/1vCPU Vultr node (with the NJ data center block store because you will need a lot of space) and it works fairly well (I only have ~10 users; small private server).
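
                                                                                                                                                     A trimmed docker-compose sketch of that layout (service commands and image tags are indicative only; consult the official tootsuite/mastodon repo for the real file):

                                                                                                                                                     ```yaml
                                                                                                                                                     version: "3"
                                                                                                                                                     services:
                                                                                                                                                       db:
                                                                                                                                                         image: postgres:9.6-alpine
                                                                                                                                                       redis:
                                                                                                                                                         image: redis:5-alpine
                                                                                                                                                       web:
                                                                                                                                                         image: tootsuite/mastodon
                                                                                                                                                         command: bundle exec rails s -p 3000
                                                                                                                                                         depends_on: [db, redis]
                                                                                                                                                       streaming:
                                                                                                                                                         image: tootsuite/mastodon
                                                                                                                                                         command: node ./streaming
                                                                                                                                                         depends_on: [db, redis]
                                                                                                                                                       sidekiq:
                                                                                                                                                         image: tootsuite/mastodon
                                                                                                                                                         command: bundle exec sidekiq
                                                                                                                                                         depends_on: [db, redis]
                                                                                                                                                       # elasticsearch is optional; an nginx container fronts web and streaming
                                                                                                                                                     ```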

                                                                                                                                                     In the past I would have agreed with you (and it’s the reason I didn’t try out Diaspora years ago when it came out), but containers have made it easier. I do realize they both solve and cause problems, and I by no means think they’re the be-all and end-all of tech, but they do make running stuff like this a lot easier.

                                                                                                                                                    If anyone wants to find me, I’m @djsumdog@hitchhiker.social

                                                                                                                                                    1. 2

                                                                                                                                                      Given that there’s a space for your Twitter handle, I wish Lobste.rs had a Mastodon slot as well :)

                                                                                                                                                    2. 2

                                                                                                                                                      Wait, you’re also forgetting systemd to keep all those processes humming… :)

                                                                                                                                                      You’re right that this is clearly too much: I have run such systems for work (Rails is pretty common), but would probably not do that for fun. I am amazed by, and thankful for, the people who volunteer the effort to run all this on their weekends.

                                                                                                                                                      Pleroma does look simpler… If I really wanted to run my own instance, I’d look in that direction. ¯\_(ツ)_/¯

                                                                                                                                                      1. 0

                                                                                                                                                        I’m waiting for urbit.org to reach usability, which, by my admittedly arbitrary standard, I expect to happen late this year. Then the issue is coming up to speed on a new language and an integrated network, OS, and build system.

                                                                                                                                                        1. 2

                                                                                                                                                          Urbit is apparently creating a feudal society. (Should note that I haven’t really dug into that thread for several years and am mostly taking @pushcx at his word.)

                                                                                                                                                          1. 1

                                                                                                                                                            The feudal society meme is just not true, and, BTW, Yarvin is no longer associated with Urbit. https://urbit.org/primer/

                                                                                                                                                        2. 1

                                                                                                                                                          I would love to have (or make) a solution that could be used locally with SQLite and in AWS with Lambda, API Gateway, and DynamoDB. That would allow scaling both cost and privacy/control.

                                                                                                                                                          1. 3

                                                                                                                                                            https://github.com/deoxxa/don is sort of in that direction (single binary, single file sqlite database).

                                                                                                                                                        1. 5

                                                                                                                                                          Sure do wish these posts had publication dates on them :/