1. 3

    This guide unconditionally recommends setting DMARC records, which is not a good idea. It should at least mention the difficult problems that DMARC causes for mailing lists. Depending on the list software used and its configuration, setting a DMARC policy of p=reject can get you kicked off the mailing list.
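
    A monitoring-only policy sidesteps that problem while still giving you reports; a rough sketch of such a record (example.com and the report address are placeholders):

        # publish DMARC in monitoring mode first, e.g. a TXT record like
        #   _dmarc.example.com.  IN TXT  "v=DMARC1; p=none; rua=mailto:dmarc@example.com"
        # then check what receivers see:
        dig +short TXT _dmarc.example.com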

    1. 2

      Yeah, I wrote this almost 2 years ago. I now have DMARC set to ignore. Almost missed a Google interview because their calendar invite got blocked by my DMARC policy - they do some spoofing of the From: header.

      1. 1

        I’ve been running a similar FreeBSD/postfix/dovecot/yada-yada server for years. I’m just in the middle of building up the “next generation” and this post was great reading.

        That said, if there’s something dangerous in it, it’d be friendly if you updated it.

    1. 5

      Wow, this is a nice walkthrough of a setup that’s very similar to mine.

      I just spent part of my weekend grokking and getting DKIM and DMARC working on my personal mail server. The motivation was that Gmail suddenly decided to start sending all mail from my domain into people’s spam folders, and a large number of the people that I email use Gmail. A little while after I got these working, mail from me started going to inboxes again. (I have a couple of unrelated Gmail accounts to test with.) I always thought that just SPF would be good enough for a small-time single-instance mail server like mine but apparently that’s not true anymore.
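
      For anyone debugging the same thing, this is roughly how to check what Gmail sees (example.com and the "default" selector are placeholders):

          # SPF, DKIM and DMARC are all plain TXT records that receivers look up:
          dig +short TXT example.com                      # SPF, e.g. "v=spf1 mx -all"
          dig +short TXT default._domainkey.example.com   # DKIM public key for selector "default"
          dig +short TXT _dmarc.example.com               # DMARC policy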

      Right now I use a third-party spam filtering service that does a terrific job. I almost never get false negatives and only get one or two false positives a month. Is rspamd comparable out of the box or do you have to spend a lot of time training it?

      1. 2

        What third party spam filtering service are you using?

        1. 1

          Rspamd needs a bit of training volume before it starts attempting to classify messages based on the probabilistic filtering, but it isn’t much. I was happy with it after about a month, and I don’t send or receive much mail.

          The other antispam measures it uses were enough to block most of the spam in the meantime.
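
          For what it’s worth, manual training is just a matter of feeding it saved messages; a sketch, assuming the rspamd controller is running locally and message.eml is a saved mail:

              rspamc learn_spam message.eml   # teach the Bayes classifier a spam sample
              rspamc learn_ham message.eml    # ...and a ham sample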

        1. 65

          In the Mastodon universe, technically-minded users are encouraged to run their own node. Sounds good. To install a Mastodon node, I am instructed to install recent versions of

          • Ruby
          • Node.JS
          • Redis
          • PostgreSQL
          • nginx

          This does not seem like a reasonable set of dependencies to me. In particular, using two interpreted languages, two databases, and a separate web server presumably acting as a frontend, all seems like overkill. I look forward to when the Mastodon devs are able to tame this complexity, and reduce the codebase to something like a single (ideally non-interpreted) language and a single database. Or, even better, a single binary that manages its own data on disk, using e.g. embedded SQLite. Until then, I’ll pass.

          1. 22

            Totally agree. I heard Pleroma has fewer dependencies, though it looks like it depends a bit on which OS you’re running.

            1. 11

              Compared to Mastodon, Pleroma is a piece of cake to install; I followed their tutorial and had an instance set up and running in about twenty minutes on a fresh server.

              From memory all I needed to install was Nginx, Elixir and Postgres, two of which were already set up and configured for other projects.

              My server is a quad core ARMv7 with 2GB RAM and averages maybe 0.5 load when I hit heavy usage. It does transit a lot of traffic, though: since the 1st of January my server has pushed out 530GB of traffic.
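
              From memory the tutorial boiled down to something like this (commands paraphrased, so check the current Pleroma docs - the task names may have changed):

                  git clone https://git.pleroma.social/pleroma/pleroma
                  cd pleroma
                  mix deps.get                    # fetch Elixir dependencies
                  mix pleroma.instance gen        # generate the config (task name from memory)
                  MIX_ENV=prod mix ecto.migrate   # set up the Postgres schema
                  MIX_ENV=prod mix phx.server     # run it (behind Nginx)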

              1. 2

                Doesn’t Elixir require Erlang to run?

                1. 2

                  It does. Some Linux distributions require adding the Erlang repo before installing Elixir, but most seem to have it included already: https://elixir-lang.org/install.html#unix-and-unix-like - meaning it’s a simple one-line command to install, e.g. pkg install elixir

              2. 7

                I’m not a huge social person, but I had only heard of Pleroma without investigating it. After looking a bit more, I don’t really understand why someone would choose Mastodon over Pleroma. They do basically the same thing, but Pleroma uses fewer resources. Did anyone who chose Mastodon over Pleroma have a reason why?

                1. 6

                  Mastodon has more features right now. That’s about it.

                  1. 4

                    Pleroma didn’t have releases for a looong time. They finally started down that route. They also don’t have official Docker containers, and config changes require recompiling (just due to the way they have Elixir and the builds set up). It was a pain to write my Docker container for it.

                    Pleroma also lacks moderation tools (you need to add blocked domains to the config), it doesn’t allow remote follow/interactions (if you see a status elsewhere on Mastodon, you can click remote-reply, it will ask your server name, redirect you to your server and then you can reply to someone you don’t follow), and it’s missing a couple of other features.

                    Misskey is another alternative that looks promising.

                    1. 2

                      it doesn’t allow remote follow/interactions (if you see a status elsewhere on Mastodon, you can click remote-reply, it will ask your server name, redirect you to your server and then you can reply to someone you don’t follow)

                      I think that might just be the Pleroma FE - if I’m using the Mastodon FE, I get the same interaction on my Pleroma instance replying to someone on a different instance as when I’m using octodon.social (unless I’m radically misunderstanding your sentence).

                      1. 1

                        Thanks, this is a really great response. I actually took a quick look at their docs and saw they didn’t have any FreeBSD guide set up, so I stopped looking. I use Vultr’s $2.50 FreeBSD vps and I didn’t feel like fiddling with anything that particular night. I wish they did have an official docker container for it.

                      2. 3

                        Pleroma has a bunch of fiddly issues - it doesn’t do streaming properly (bitlbee-mastodon won’t work), the UI doesn’t have any “compose DM” functionality that I can find, I had huge problems with a long password, etc. But they’re more minor annoyances than showstoppers for now.

                      3. 7

                        It doesn’t depend - they’ve just gone further to define what to do for each OS!

                        1. 4

                          I guess it’s mainly the ImageMagick dependency for OpenBSD that got me thinking otherwise.

                          OpenBSD

                          • elixir
                          • gmake
                          • ImageMagick
                          • git
                          • postgresql-server
                          • postgresql-contrib

                          Debian Based Distributions

                          • postgresql
                          • postgresql-contrib
                          • elixir
                          • erlang-dev
                          • erlang-tools
                          • erlang-parsetools
                          • erlang-xmerl
                          • git
                          • build-essential
                          1. 3

                            imagemagick is purely optional. The only hard dependencies are postgresql and elixir (and some reverse proxy like nginx)

                            1. 4

                              ImageMagick is strongly recommended though, so you can enable the Mogrify filter on uploads and actually strip EXIF data.
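
                              Under the hood that filter is basically ImageMagick’s mogrify doing something like this (filename is just an example):

                                  mogrify -strip upload.jpg   # removes EXIF and other embedded profiles in place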

                        2. 3

                          Specifically, quoting from their readme:

                          Pleroma is written in Elixir, high-performance and can run on small devices like a Raspberry Pi.

                          As to the DB, they seem to use Postgres.

                          The author of the app posted his list of differences, but I’m not sure if it’s complete and what it really means. I haven’t found a better comparison yet, however.

                        3. 16

                          Unfortunately I have to agree. I self-host 99% of my online services, and sysadmin for a living. I tried mastodon for a few months, but its installation and management process was far more complicated than anything I’m used to. (I run everything on OpenBSD, so the docker image isn’t an option for me.)

                          In addition to getting NodeJS, Ruby, and all the other dependencies installed, I had to write 3 separate rc files to run 3 separate daemons to keep the thing running. Compared to something like Gitea, which just requires running a single Go executable and a Postgres DB, it was a massive amount of toil.
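
                          For the curious, each of those rc files ended up looking roughly like this (paths, user and flags are from memory and will differ per setup):

                              #!/bin/ksh
                              # /etc/rc.d/mastodon_sidekiq -- one of the three daemons
                              daemon="/usr/local/bin/bundle"
                              daemon_flags="exec sidekiq -c 5"
                              daemon_user="_mastodon"
                              . /etc/rc.d/rc.subr
                              rc_bg=YES       # sidekiq doesn't daemonize itself
                              rc_cmd $1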

                          The mastodon culture really wasn’t a fit for me either. Even in technical spaces, there was a huge amount of politics/soapboxing. I realized I hadn’t even logged in for a few weeks so I just canned my instance.

                          Over the past year I’ve given up on the whole social network thing and stick to Matrix/IRC/XMPP/email. I’ve been much happier as a result and there’s a plethora of quality native clients (many are text-based). I’m especially happy on Matrix now that I’ve discovered weechat-matrix.

                          I don’t mean to discourage federated projects like Mastodon though - I’m always a fan of anything involving well-known URLs or SRV records!

                          1. 11

                            Fortunately the “fediverse” is glued together by a standard protocol (ActivityPub) that is quite simple, so if one implementation (e.g. Mastodon) doesn’t suit someone’s needs it’s not a big problem - just search for a better one and it still interconnects with the rest of the world.

                            (I’ve written small proof-of-concept ActivityPub clients and servers; they work and federate - see also this.)

                            For me the more important problems are not implementation issues with one server but rather design issues within the protocol. For example, established standards such as e-mail or XMPP have a way to delegate responsibility for running a server of a particular protocol while still using the bare domain for user identities. In e-mail that is MX records; in XMPP it’s DNS SRV records. ActivityPub doesn’t demand anything like it, and even though Mastodon tries to provide something that would fix that issue (WebFinger), other implementations are not interested in it (e.g. Pleroma). And then one is left with instances such as “social.company.com”.

                            For example - Pleroma’s developer’s id is lain@pleroma.soykaf.com.
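
                            Concretely, the delegation I mean looks like this (example.com and the account are placeholders):

                                dig +short MX example.com                      # e-mail: mail can live on a different host
                                dig +short SRV _xmpp-client._tcp.example.com   # XMPP: SRV records point clients elsewhere
                                # Mastodon-style discovery has to fall back to WebFinger over HTTPS:
                                curl 'https://example.com/.well-known/webfinger?resource=acct:user@example.com'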

                            1. 16

                              This is a completely reasonable and uncontroversial set of dependencies for a web app. Some of the largest web apps on the Internet run this stack. That is a good thing, because when Fediverse nodes need to scale there are well-understood ways of doing it.

                              Success in social networking is entirely about network effects and that means low barrier to entry is table stakes. Yeah, it’d be cool if someone built the type of node you’re talking about, but it would be a curiosity pursued only by the most technical users. If that were the barrier to entry for the network, there would be no network.

                              1. 39

                                This is a completely reasonable and uncontroversial set of dependencies for a web app. Some of the largest web apps on the Internet run this stack.

                                Yes, but not for a web app I’m expected to run on my own time, for fun.

                                1. 6

                                  I’m not sure that’s the exact expectation, that we all should run our own single-user Mastodon instances. I feel like the expectation is that a sysadmin with enough knowledge will maintain an instance for many users. This seems to be the norm.

                                  That, or you go to Mastohost and pay someone else for your own single-user instance.

                                  1. 2

                                    You’re not expected to do that is my point.

                                  2. 16

                                    completely reasonable and uncontroversial

                                    Not true. Many people are complaining about the unmanaged proliferation of dependencies and tools. Most projects of this size and complexity don’t need more than one language, nor bulky JavaScript frameworks and separate caching and database services.

                                    This is making it difficult to package Mastodon and Pleroma for Debian and Ubuntu, and making it more difficult for people to make the service really decentralized.

                                    1. 1

                                      I’m not going to defend the reality of what NPM packaging looks like right now because it sucks but that’s the ecosystem we’re stuck with for the time being until something better comes along. As with social networks, packaging systems are also about network effects.

                                      But you can’t deny that this is the norm today. Well, you can, but you would be wrong.

                                      This is making difficult to package Mastodon and Pleroma in Debian and Ubuntu

                                      I’m sure it is, because dpkg is a wholly unsuitable tool for this use-case. You shouldn’t even try. Anyone who doesn’t know how to set these things up themselves should use the Docker container.

                                      1. 1

                                        I think the most difficult part of the Debian packaging would be the js deps, correct?

                                        1. 3

                                          Yes and no. Unvendorizing dependencies is done mostly for security and requires a lot of work depending on the number of dependencies. Sometimes JS libraries don’t create serious security concerns because they are only run client-side and can be left in vendorized form.

                                          The Ruby libraries can be also difficult to unvendorize because many upstream developers introduce breaking changes often. They care little about backward compatibility, packaging and security.

                                          Yet server-side code is more security-critical and that becomes a problem. And it’s getting even worse with new languages that strongly encourage static linking and vendorization.

                                          1. 1

                                            I can’t believe even Debian adopted the Googlism of “vendor” instead of “bundle”.

                                            That aside, Rust? In Mastodon? I guess the Ruby gems it requires would be the bigger problem?

                                            1. 2

                                              The use of the word is mine: I just heard people using “vendor” often. It’s not “adopted by Debian”.

                                              I don’t understand the second part: maybe you misread Ruby for Rust in my text?

                                              1. 1

                                                No, I really just don’t know what Rust has to do with Mastodon. There’s Rust in there somewhere? I just didn’t notice.

                                                1. 2

                                                  AFAICT there is no Rust in the repo (at least at the moment).

                                                  1. 1

                                                    Wow, I’m so dumb, I keep seeing Rust where there is none and misunderstanding you, so sorry!

                                      2. 7

                                        Great. Then have two implementations, one for users with large footprints, and another for casual users with five friends.

                                        It is a reasonable stack if you will devote 1+ servers to the task. Not for something you might want to run on your RPI next to your irc server (a single piece of software in those stacks too)

                                        1. 4

                                          Having more than one implementation is healthy.

                                          1. 2

                                            Of course it is. Which is why it’s a reasonable solution to the large stack required by the current primary implementation.

                                      3. 6

                                        There’s really one database and one cache there. I mean, I guess technically Redis is a database, but it’s almost always used for caching and not as a DB layer like PSQL.

                                        You can always write your own server if you want in whatever language you choose if you feel like Ruby/Node is too much. Or, like that other guy said, you can just use Docker.

                                        1. 4

                                          There’s really one database and one cache there. I mean, I guess technically Redis is a database, but it’s almost always used for caching . . .

                                          A project that can run on a single instance of the application binary absolutely does not need a cache. Nor does it need a pub/sub or messaging system outside of its process space.

                                          1. 2

                                            It’s more likely that Redis is being used for pub/sub messaging and job queuing.

                                          2. 11

                                            This does not seem like a reasonable set of dependencies to me

                                            Huh. I must just be used to this, then. At work I need to use, or at least somewhat understand,

                                            • Postgres
                                            • Python 2
                                            • Python 3
                                            • Django
                                            • Ansible
                                            • AWS
                                            • Git (actually, Mercurial, but this is my choice to avoid using git)
                                            • Redis
                                            • Concourse
                                            • Docker
                                            • Emacs (My choice, but I could pick anything else)
                                            • Node
                                            • nginx
                                            • Flask
                                            • cron
                                            • Linux
                                            • RabbitMQ
                                            • Celery
                                            • Vagrant (well, optional, I actually do a little extra work to have everything native and avoid a VM)
                                            • The occasional bit of C code

                                            and so on and so forth.

                                            Do I just work at a terrible place or is this a reasonable amount of things to have to deal with in this business? I honestly don’t know.

                                            To me Mastodon’s requirements seem like a pretty standard Rails application. I’m not even sure why Redis is considered another db – it seems like an in-memory cache with optional disk persistence is a different thing than a persistent-only RDBMS. Nor do I even see much of a problem with two interpreted languages – the alternative would be to have js everywhere, since you can’t have Python or Ruby in a web browser, and js just isn’t a pleasant language for certain tasks.

                                            1. 38

                                              I can work with all that and more if you pay me. For stuff I’m running at home on my own time, fuck no. When I shut my laptop to leave the office, it stays shut until I’m back again in the morning, or I get paged.

                                              1. 2

                                                So is Mastodon unusual for a Rails program? I wonder if it’s simply unreasonable to ask people to run their own Rails installation. I honestly don’t know.

                                                Given the amount of Mastodon instances out there, though, it seems that most people manage. How?

                                                1. 4

                                                  That looks like a bog-standard, very minimal rails stack with a JS frontend. I’m honestly not sure how one could simplify it below that without dropping the JS on the web frontend and any caching, both of which seem like a bad idea.

                                                  1. 7

                                                    There’s no need to require node. The compilation should happen at release time, and the release download tarball should contain all the JS you need.
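
                                                    In other words, roughly this, done once by whoever cuts the release instead of on every server (standard Rails commands, assuming a stock webpack setup):

                                                        RAILS_ENV=production bundle exec rails assets:precompile   # needs node/yarn once, here
                                                        tar czf release.tar.gz --exclude=node_modules .            # ship the compiled assets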

                                                    1. -3

                                                      lol “download tarball”, you’re old, dude.

                                                      1. 7

                                                        Just you wait another twenty years, and you too will be screaming at the kids to get off your lawn.

                                                    2. 2

                                                      You could remove Rails and use something Node-based for the backend. I’m not claiming that’s a good idea (in fact it’s probably not very reasonable), but it’d remove that dependency?

                                                      1. 1

                                                        it could just have been a go or rust binary or something along those lines, with an embedded db like bolt or sqlite

                                                        edit: though the reason i ignore mastodon is the same as cullum, culture doesn’t seem interesting, at least on mastodon.social

                                                      2. 4

                                                        If security or privacy focused, I’d try a combo like this:

                                                        1. Safe language with minimal runtime that compiles to native code and Javascript. Web framework in that language for dynamic stuff.

                                                        2. Lwan web server for static content.

                                                        3. SQLite for database.

                                                        4. Whatever is needed to combine them.

                                                        Combo will be smaller, faster, more reliable, and more secure.

                                                        1. 2

                                                          I don’t think this is unusual for a Rails app. I just don’t want to set up or manage a Rails app in my free time. Other people may want to, but I don’t.

                                                      3. 7

                                                        I don’t think it’s reasonable to compare professional requirements and personal requirements.

                                                        1. 4

                                                          The thing is, Mastodon is meant to be used on-premise. If you’re building a service you host, knock yourself out! Use 40 programming languages and 40 DBs at the same time. But if you want me to install it, keep it simple :)

                                                          1. 4

                                                            Personally, setting up all that seems like too much work for a home server, but maybe I’m just lazy. I had a similar issue when setting up Matrix and ran into an error message that I just didn’t have the heart to debug, given the amount of moving parts which I had to install.

                                                            1. 3

                                                              If you can use debian, try installing synapse via their repository, it works really nice for me so far: https://matrix.org/packages/debian/
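
                                                              Once the repository from that page is in your sources.list, it’s basically just this (package and service names as used by the matrix.org repo):

                                                                  sudo apt update
                                                                  sudo apt install matrix-synapse-py3        # Synapse as packaged by matrix.org
                                                                  sudo systemctl enable --now matrix-synapse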

                                                              1. 1

                                                                Reading other comments about the horror that is Docker, it is a wonder that you dare propose to install an entire OS only to run a Matrix server. ;)

                                                                1. 3

                                                                  i’m not completely sure which parts of your comment are sarcasm :)

                                                            2. 0

                                                              Your list there has lots of tools with overlapping functionality, which seems like pointless redundancy. Just pick Flask OR Django, just pick Python 3 or Node, just pick Docker or Vagrant; make a choice and remove the useless, redundant things.

                                                              1. 3

                                                                We have some Django applications and we have some Flask applications. They have different lineages. One we forked and one we made ourselves.

                                                            3. 6

                                                              Alternatively, you can install it using Docker as described here.

                                                              1. 31

                                                                I think it’s kinda sad that the solution to “control your own toots” is “give up control of your computer and install this giant blob of software”.

                                                                1. 9

                                                                  Piling another forty years of hexadecimal Unix sludge on top of forty years of slightly different hexadecimal Unix sludge to improve our ability to ship software artifacts … it’s an aesthetic nightmare. But I don’t fully understand what our alternatives are.

                                                                  I’ve never been happier to be out of the business of having to think about this in anything but the most cursory detail.

                                                                  1. 11

                                                                    I mean, how is that different from running any binary at the end of the day? Unless you’re compiling everything from scratch on the machine starting from the kernel. Running Mastodon from Docker is really no different. And it’s not like anybody is stopping you from either making your own Dockerfile, or just setting up directly on your machine by hand. The original complaint was that it’s too much work, and if that’s the case you have a simple packaged solution. If you don’t like it then roll up your sleeves and do it by hand. I really don’t see the problem here, I’m afraid.

                                                                    1. 11

                                                                      “It’s too much work” is a problem.

                                                                      1. 5

                                                                        Unless you’re compiling everything from scratch on the machine starting from the kernel

                                                                        I use NixOS. I have a set of keys that I set as trusted for signature verification of binaries. The binaries are a cache of the build derivation, so I could theoretically build the software from scratch, if I wanted to, or to verify that the binaries are the same as the cached versions.
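
                                                                        For example, spot-checking that a cached binary matches what the sources produce (the package name is just an example):

                                                                            nix-build '<nixpkgs>' -A hello --check   # rebuild locally and compare against the existing output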

                                                                        1. 2

                                                                          Right, but if you feel strongly about that then you can make your own Dockerfile from source. The discussion is regarding whether there’s a simple way to get an instance up and running, and there is.

                                                                          1. 3

                                                                            Docker containers raise a lot of questions though, even if you use a Dockerfile:

                                                                            • What am I running?
                                                                            • Which versions am I running?
                                                                            • Do the versions have security vulnerabilities?
                                                                            • Will I be able to build the exact same version in 24 months?

                                                                            Nix answers these pretty well and fairly accurately.

                                                                        2. 2

                                                                          Unless you’re compiling everything from scratch on the machine starting from the kernel.

                                                                          You mean starting with writing a bootstrapping compiler in assembly, then writing your own full-featured compiler and compiling it with the bootstrapping compiler. Then moving on to compiling the kernel.

                                                                          1. 1

                                                                            No no, your assembler could be compromised ;)

                                                                            Better write raw machine code directly onto the disk. Using, perhaps, a magnetized needle and a steady hand, or maybe a butterfly.

                                                                            1. 2

                                                                              My bootstrapping concept was having the device boot a program from ROM that takes in the user-supplied, initial program via I/O into RAM. Then passes execution to it. You enter the binary through one of those Morse code things with four buttons: 0, 1, backspace, and enter. Begins executing on enter.

                                                                              Gotta input the keyboard driver next in binary to use a keyboard. Then the display driver blind using the keyboard. Then storage driver to save things. Then, the OS and other components. ;)

                                                                            2. 1

                                                                              If I deploy three Go apps on top of a bare OS (picked Go since it has static binaries), and the Nginx server in front of all 3 of them uses OpenSSL, then I have one OpenSSL to patch whenever the inevitable CVE rolls around. If I deploy three Docker container apps on top of a bare OS, now I have four OpenSSLs to patch - three in the containers and one in my base OS. This complexity balloons very quickly which is terrible for user control. Hell, I have so little control over my one operating system that I had to carefully write a custom tool just to make sure I didn’t miss logfile lines in batch summaries created by cron. How am I supposed to manage four? And three with radically different tooling and methodology to boot.

                                                                              And Docker upstream, AFAIK, has provided nothing to help with the security problem which is probably why known security vulnerabilities in Docker images are rampant. If they have I would like to know because if it’s decent I would switch to it immediately. See this blog post for more about this problem (especially including links) and how we “solved” it in pump.io (spoiler: it’s a giant hack).

                                                                              1. 3

                                                                                That’s not how any of this works. You package the bare minimum needed to run the app in the Docker container, then you front all your containers with a single Nginx server that handles SSL. Meanwhile, there are plenty of great tools, like Dokku for managing Docker based infrastructure. Here’s how you provision a server using Let’s Encrypt with Dokku:

                                                                                sudo dokku plugin:install https://github.com/dokku/dokku-letsencrypt.git   # install the Let's Encrypt plugin
                                                                                dokku letsencrypt:auto-renew                                                # renew certificates that are due
                                                                                

                                                                                viewing logs isn’t rocket science either:

                                                                                dokku logs myapp
                                                                                
                                                                                1. 1

                                                                                  OK, so OpenSSL was a bad example. Fair enough. But I think my point still stands - you’ll tend to have at least some duplicate libraries across Docker containers. There’s tooling around managing security vulnerabilities in language-level dependencies; see for example Snyk. But Docker imports the entire native package manager into the “static binary” and I don’t know of any tooling that can track problems in Docker images like that. I guess I could use Clair through Quay but… I don’t know. This doesn’t feel like as nice of a solution or as polished somehow. As an image maintainer I’ve added a big manual burden keeping up with native security updates in addition to those my application actually directly needs, when normally I could rely on admins to do that, probably with lots of automation.

                                                                                  1. 3

                                                                                    you’ll tend to have at least some duplicate libraries across Docker containers

                                                                                    That is literally the entire point. Application dependencies must be separate from one another, because even on a tight-knit team keeping n applications in perfect lockstep is impossible.

                                                                                    1. 1

                                                                                      OS dependencies are different than application dependencies. I can apply a libc patch on my Debian server with no worry because I know Debian works hard to create a stable base server environment. That’s different than application dependencies, where two applications are much more likely to require conflicting versions of libraries.

                                                                                      Now, I run most of my stuff on a single server so I’m very used to a heterogeneous environment. Maybe that’s biasing me against Docker. But isn’t that the usecase we’re discussing here anyway? How someone with just a hobbyist server can run Mastodon?

                                                                                      Thinking about this more I feel like a big part of what bothers me about Docker, and therefore about Clair, is that there’s no package manifest. Dockerfile does not count, because that’s not actually a package manifest, it’s just a list of commands. I can’t e.g. build a lockfile format on top of that, which is what tools like Snyk analyze. Clair is the equivalent of having to run npm install and then go trawling through node_modules looking for known vulnerable code instead of just looking at the lockfile. More broadly, because Docker lacks any notion of a package manifest, it seems to me that while Docker images are immutable once built, the build process that leads you there cannot be made deterministic. This is what makes it hard to keep track of the stuff inside them. I will have to think about this more - as I write this comment I’m wondering if my complaints about duplicated libraries and tracking security there is an instance of the XY problem or if they really are separate things in my mind.

                                                                                      Maybe I am looking for something like Nix or Guix inside a Docker container. Guix at least can export Docker containers; I suppose I should look into that.
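
                                                                                      Guix can indeed spit out Docker images straight from its own package graph, e.g.:

                                                                                          guix pack -f docker bash guile    # prints the path of a Docker image tarball
                                                                                          docker load < $(guix pack -f docker bash guile)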

                                                                                      1. 2

                                                                                        OS dependencies are different than application dependencies.

                                                                                        Yes, agreed.

                                                                                        Thinking about this more I feel like a big part of what bothers me about Docker, and therefore about Clair, is that there’s no package manifest. Dockerfile does not count, because that’s not actually a package manifest, it’s just a list of commands. I can’t e.g. build a lockfile format on top of that, which is what tools like Snyk analyze.

                                                                                        You don’t need a container to tell you these things. Application dependencies can be checked for exploits straight from the code repo, e.g. brakeman. Both the Gemfile.lock and yarn.lock are available from the root of the repo.
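
                                                                                        For example, all of this runs against the checkout with no container involved:

                                                                                            gem install brakeman bundler-audit
                                                                                            brakeman                       # static analysis of the Rails code
                                                                                            bundle-audit check --update    # known CVEs against Gemfile.lock
                                                                                            yarn audit                     # known CVEs against yarn.lock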

                                                                                        The container artifacts are most likely built automatically for every merge to master, and that entails doing a full system update from the apt repository. So in reality, while not as deterministic as the lockfiles, the system deps in a container are likely to be significantly fresher than in a regular server environment.

                                                                                    2. 1

                                                                                      You’d want to track security vulnerabilities outside your images though. You’d do it at dev time, and update your Dockerfile with updated dependencies when you publish the application. Think of Docker as just a packaging mechanism. It’s the same as making an uberjar on the JVM. You package all your code into a container, and run the container. When you want to make updates, you blow the old one away and run a new one.

                                                                              2. 4

                                                                                I have only rarely used Docker, and am certainly no booster, so keep that in mind as I ask this.

                                                                                From the perspective of “install this giant blob of software”, do you see a docker deployment being that different from a single large binary? Particularly the notion of the control that you “give up”, how does that differ between Docker and $ALTERNATIVE?

                                                                                1. 14

                                                                                  Ideally one would choose door number three, something not so large and inauditable. The complaint is not literally about Docker, but the circumstances which have resulted in docker being the most viable deployment option.

                                                                                2. 2

                                                                                  You have the dockerfile and can reconstruct. You haven’t given up control.
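
                                                                                  Concretely, if you don’t trust the published image, the same Dockerfile the project ships can be rebuilt locally (repo URL from when Mastodon lived under tootsuite):

                                                                                      git clone https://github.com/tootsuite/mastodon.git
                                                                                      cd mastodon
                                                                                      docker build -t local/mastodon .   # rebuilds the image from the repo's Dockerfile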

                                                                                  1. 5

                                                                                    Is there a youtube video I can watch of somebody building a mastodon docker image from scratch?

                                                                                    1. 1

                                                                                      I do not know of one.

                                                                              3. 3

                                                                                I totally agree as well, and I wish authors would s/Mastodon/Fediverse/ in their articles. As others have noted, Pleroma is another good choice and others are getting into the game - NextCloud, for instance, added fediverse node support in their most recent release.

                                                                                I tried running my own instance for several months, and it eventually blew up. In addition to the large set of dependencies, the system is overall quite complex. I had several devs from the project look at my instance, and the only thing they could say was that it was a “back-end problem” (my instance had stopped getting new posts).

                                                                                I gave up and am now using somebody else’s :) I love the fediverse though, it’s a fascinating place.

                                                                                1. 4

                                                                                  I just use the official Docker containers. The tootsuite/mastodon container can be used to launch web, streaming, sidekiq and even database migrations. Then you just need an nginx container, a redis container, a postgres container and an optional Elasticsearch container. I run it all on a 2GB/1vCPU Vultr node (with the NJ data center block store because you will need a lot of space) and it works fairly well (I only have ~10 users; small private server).

                                                                                  In the past I would agree with you (and it’s the reason I didn’t try out Diaspora years ago when it came out), but containers have made it easier. I do realize they both solve and cause problems and by no means think they’re the end-all of tech, but they do make running stuff like this a lot easier.

                                                                                  If anyone wants to find me, I’m @djsumdog@hitchhiker.social

                                                                                  1. 2

                                                                                    Given that there’s a space for your Twitter handle, I wish Lobste.rs had a Mastodon slot as well :)

                                                                                  2. 2

                                                                                    Wait, you’re also forgetting systemd to keep all those processes humming… :)

                                                                                    You’re right that this is clearly too much: I have run such systems for work (Rails is pretty common), but would probably not do that for fun. I am amazed by, and thankful for, the people who volunteer the effort to run all this on their weekends.

                                                                                    Pleroma does look simpler… If I really wanted to run my own instance, I’d look in that direction. ¯\_(ツ)_/¯

                                                                                    1. 0

                                                                                      I’m waiting for urbit.org to reach usability, which by my own arbitrary feel I expect to happen late this year. Then the issue is coming up to speed on a new language and an integrated network, OS, and build system.

                                                                                      1. 2

                                                                                        Urbit is apparently creating a feudal society. (Should note that I haven’t really dug into that thread for several years and am mostly taking @pushcx at his word.)

                                                                                        1. 1

                                                                                          The feudal society meme is just not true, and, BTW, Yarvin is no longer associated with Urbit. https://urbit.org/primer/

                                                                                      2. 1

                                                                                        I would love to have (or make) a solution that could be used locally with SQLite and in AWS with Lambda, API Gateway and DynamoDB. That would allow scaling cost and privacy/control.

                                                                                        1. 3

                                                                                          https://github.com/deoxxa/don is sort of in that direction (single binary, single file sqlite database).

                                                                                      1. 23

                                                                                        What does this have over XScreensaver, which has already been designed for security? And have they taken into consideration everything that XScreensaver has? Rewrites of XScreensaver have repeatedly managed to include flaws that jwz wrote extensively about guarding against in the included documentation. In one case he wrote about a strawman example, and it was later realized in GNOME’s screenlocker!

                                                                                        jwz has blogged about these problems for almost twenty years, now.

                                                                                        2003: https://www.jwz.org/blog/2003/02/the-cadt-model/

                                                                                        2004: https://www.jwz.org/xscreensaver/toolkits.html

                                                                                        2005: https://www.jwz.org/xscreensaver/man1.html#8

                                                                                        2011: https://www.jwz.org/blog/2011/10/has-gnome-3-decided-that-people-shouldnt-want-screen-savers/

                                                                                        2014: https://www.jwz.org/blog/2014/04/the-awful-thing-about-getting-it-right-the-first-time-is-that-nobody-realizes-how-hard-it-was/

                                                                                        2015: https://www.jwz.org/blog/2015/04/i-told-you-so-again/

                                                                                        If you don’t want to bother following those links:

                                                                                        I wrote this document in 2004, explaining the approach to privilege separation that xscreensaver has taken since 1991. Of course, the people doing needless rewrites of xscreensaver have ignored it for that whole time, and have then gone on to introduce exactly the bug that I described in this document as a hypothetical strawman! And – this would be hilarious if it weren’t so sad – have introduced it multiple times. As I said in 2015:

                                                                                        If you are not running xscreensaver on Linux, then it is safe to assume that your screen does not lock. Once is happenstance. Twice is coincidence. Three times is enemy action. Four times is Official GNOME Policy.

                                                                                        There’s little that I can do to make the screen locker secure so long as the kernel and X11 developers are actively working against security. The strength of the lock on your front door doesn’t matter much so long as someone else in the house insists on leaving a key under the welcome mat.

                                                                                        1. 4

                                                                                          Feature-wise, xsecurelock is surely less impressive than xscreensaver. However, in terms of it actually working, I’ve had far more success.

                                                                                          At $job, we have many RHEL-based workstations. Whenever users would choose any of the fancy animations with xscreensaver, they would occasionally return to their desk to find the screen totally unlocked. My hunch was that the fancy graphics had a bug causing xscreensaver to crash - we made a policy to only allow the “blank” screensaver and this issue went away.

                                                                                          The next issue was that, depending on their desktop environment and notification daemon used, occasionally people’s notifications would pop up OVER xscreensaver’s lock screen. Not exactly the best-case scenario to have a calendar/email notification for “Friday, 3pm: Fire John Doe” pop up while you’re in a meeting.

                                                                                          We rolled out xsecurelock and no longer have either of these issues. JWZ is extremely talented, but his hubris leads many to believe his software is bug-free. The X11 protocol is broken and does not give any real provisions for a screen locker. Most of the current implementations are basically hacks - sadly this is just the state of the Linux desktop today, it seems.

                                                                                          I will note that GNOME’s screen locker now integrates with GDM - rather than just showing a full screen window with a PAM login box in your existing session - so it’s probably better now. Of course the downside is you can’t really swap it out for a custom screen lock application.

                                                                                          For a truly secure screenlocker your best bet is probably physlock, though it’s not the best UX for non-technical users.

                                                                                        1. 5

                                                                                          $5/month VPS, running OpenBSD:

                                                                                          • Mail server (OpenSMTPD/Dovecot, spamd for spam filtering)
                                                                                          • XMPP server (prosody)
                                                                                          • Personal website (OpenBSD’s httpd)
                                                                                          • DNS hidden master for my domain, with DNSSEC (NSD)
                                                                                          • Tiny Tiny RSS
                                                                                          • Matrix Homeserver
                                                                                          • IRC Bouncer (ZNC)

                                                                                          I’m using relayd as a TLS reverse proxy for all my services, de-muxing via the HTTP “Host” header. I use acme-client for letsencrypt renewals via cron.
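
                                                                                          The renewal part is a one-line cron job, roughly like this (the domain is a placeholder; acme-client(1) reads the rest from /etc/acme-client.conf):

                                                                                              0 3 * * *  acme-client example.com && rcctl reload relayd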

                                                                                          I have ansible roles for each component here.

                                                                                          Todo: git, vpn