In the Mastodon universe, technically-minded users are encouraged to run their own node. Sounds good. To install a Mastodon node, I am instructed to install recent versions of
Ruby
Node.JS
Redis
PostgreSQL
nginx
This does not seem like a reasonable set of dependencies to me. In particular, using two interpreted languages, two databases, and a separate web server presumably acting as a frontend all seems like overkill. I look forward to when the Mastodon devs are able to tame this complexity and reduce the codebase to something like a single (ideally non-interpreted) language and a single database. Or, even better, a single binary that manages its own data on disk, using e.g. embedded SQLite. Until then, I’ll pass.
Totally agree. I heard Pleroma has fewer dependencies, though it looks like it depends a bit on which OS you’re running.
Compared to Mastodon, Pleroma is a piece of cake to install; I followed their tutorial and had an instance set up and running in about twenty minutes on a fresh server.
From memory, all I needed to install was Nginx, Elixir and Postgres, two of which were already set up and configured for other projects.
My server is a quad-core ARMv7 with 2GB RAM and averages maybe 0.5 load when I hit heavy usage… it does transit a lot of traffic though: since the 1st of January my server has pushed out 530GB of traffic.
doesn’t Elixir require Erlang to run?
It does. Some Linux distributions will require adding the Erlang repo before installing Elixir, but most seem to have it already included: https://elixir-lang.org/install.html#unix-and-unix-like meaning it’s a simple one-line command to install, e.g.
pkg install elixir
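For instance, a hedged sketch based on the install page linked above (package names come from each distribution’s own repos and may lag behind upstream):
# Debian/Ubuntu (Erlang is pulled in as a dependency)
sudo apt-get install elixir
# Fedora
sudo dnf install elixir
# OpenBSD
doas pkg_add elixir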
I’m not a huge social person, but I had only heard of Pleroma without investigating it. After looking a bit more, I don’t really understand why someone would choose Mastodon over Pleroma. They do basically the same thing, but Pleroma uses fewer resources. Did anyone who chose Mastodon over Pleroma have a reason why?
Mastodon has more features right now. That’s about it.
Pleroma didn’t have releases for a looong time. They finally started down that route. They also don’t have official Docker containers, and config changes require recompiling (just due to the way they have Elixir and builds set up). It was a pain to write my Docker container for it.
Pleroma also lacks moderation tools (you need to add blocked domains to the config), it doesn’t allow remote follow/interactions (if you see a status elsewhere on Mastodon, you can click remote-reply, it will ask your server name, redirect you to your server and then you can reply to someone you don’t follow) and a couple of other features.
Misskey is another alternative that looks promising.
it doesn’t allow remote follow/interactions (if you see a status elsewhere on Mastodon, you can click remote-reply, it will ask your server name, redirect you to your server and then you can reply to someone you don’t follow)
I think that might just be the Pleroma FE - if I’m using the Mastodon FE, I get the same interaction on my Pleroma instance replying to someone on a different instance as when I’m using octodon.social (unless I’m radically misunderstanding your sentence)
Thanks, this is a really great response. I actually took a quick look at their docs and saw they didn’t have any FreeBSD guide set up, so I stopped looking. I use Vultr’s $2.50 FreeBSD VPS and I didn’t feel like fiddling with anything that particular night. I wish they did have an official Docker container for it.
Pleroma has a bunch of fiddly issues - it doesn’t do streaming properly (bitlbee-mastodon won’t work), the UI doesn’t have any “compose DM” functionality that I can find, I had huge problems with a long password, etc. But they’re mostly minor annoyances rather than show-stoppers for now.
It doesn’t depend - they’ve just gone further to define what to do for each OS!
I guess it’s mainly the ImageMagick dependency for OpenBSD that got me thinking otherwise.
OpenBSD
Debian Based Distributions
imagemagick is purely optional. The only hard dependencies are postgresql and elixir (and some reverse proxy like nginx)
imagemagick is strongly recommended though, so you can enable the Mogrify filter on uploads and actually strip EXIF data
Specifically, quoting from their readme:
Pleroma is written in Elixir, high-performance and can run on small devices like a Raspberry Pi.
As to the DB, they seem to use Postgres.
The author of the app posted his list of differences, but I’m not sure if it’s complete and what it really means. I haven’t found a better comparison yet, however.
Unfortunately I have to agree. I self-host 99% of my online services, and sysadmin for a living. I tried mastodon for a few months, but its installation and management process was far more complicated than anything I’m used to. (I run everything on OpenBSD, so the docker image isn’t an option for me.)
In addition to getting NodeJS, Ruby, and all the other dependencies installed, I had to write 3 separate rc files to run 3 separate daemons to keep the thing running. Compared to something like Gitea, which just requires running a single Go executable and a Postgres DB, it was a massive amount of toil.
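For flavor, a hedged sketch of what one of those OpenBSD rc files can look like (the _mastodon user, paths and puma invocation are illustrative, not Mastodon’s documented setup; a real script also needs the app’s working directory and environment):
#!/bin/ksh
# /etc/rc.d/mastodon_web (hypothetical) - one of three such scripts
daemon="/usr/local/bin/bundle"
daemon_flags="exec puma -C config/puma.rb"
daemon_user="_mastodon"
. /etc/rc.d/rc.subr
rc_bg=YES
rc_cmd $1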
The mastodon culture really wasn’t a fit for me either. Even in technical spaces, there was a huge amount of politics/soapboxing. I realized I hadn’t even logged in for a few weeks so I just canned my instance.
Over the past year I’ve given up on the whole social network thing and stick to Matrix/IRC/XMPP/email. I’ve been much happier as a result and there’s a plethora of quality native clients (many are text-based). I’m especially happy on Matrix now that I’ve discovered weechat-matrix.
I don’t mean to discourage federated projects like Mastodon though - I’m always a fan of anything involving well-known URLs or SRV records!
Fortunately the “fediverse” is glued together by a standard protocol (ActivityPub) that is quite simple, so if one implementation (e.g. Mastodon) doesn’t suit someone’s needs it’s not a big problem - you just search for a better one, and it still interconnects with the rest of the world.
(I’ve written small proof-of-concept ActivityPub clients and servers; they work and federate, see also this).
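For a sense of how simple the protocol’s core is, here is a minimal, hypothetical actor document of the kind such a proof-of-concept server publishes (real deployments add more fields, e.g. public keys for signing):
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Person",
  "id": "https://example.com/users/alice",
  "preferredUsername": "alice",
  "inbox": "https://example.com/users/alice/inbox",
  "outbox": "https://example.com/users/alice/outbox"
}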
For me the more important problems are not implementation issues with one server but rather design issues within the protocol. For example, established standards such as e-mail or XMPP have a way to delegate responsibility for running a server of a particular protocol but still use the bare domain for user identities. In e-mail that is MX records; in XMPP it’s DNS SRV records. ActivityPub doesn’t demand anything like it, and even though Mastodon tries to provide something that would fix that issue - WebFinger - other implementations are not interested in it (e.g. Pleroma). And then one is left with instances such as “social.company.com”.
For example - Pleroma’s developer’s id is lain@pleroma.soykaf.com.
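To make the delegation point concrete, a sketch with placeholder hostnames: e-mail and XMPP let example.com point at a machine elsewhere while identities stay user@example.com, whereas the closest ActivityPub equivalent is a WebFinger lookup that Mastodon performs but the protocol itself does not require:
example.com.                   IN MX  10 mail.example.com.
_xmpp-client._tcp.example.com. IN SRV 0 5 5222 chat.example.com.
https://example.com/.well-known/webfinger?resource=acct:alice@example.com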
This is a completely reasonable and uncontroversial set of dependencies for a web app. Some of the largest web apps on the Internet run this stack. That is a good thing, because when Fediverse nodes need to scale there are well-understood ways of doing it.
Success in social networking is entirely about network effects and that means low barrier to entry is table stakes. Yeah, it’d be cool if someone built the type of node you’re talking about, but it would be a curiosity pursued only by the most technical users. If that were the barrier to entry for the network, there would be no network.
Yes, but not for a web app I’m expected to run on my own time, for fun.
I’m not sure that’s the exact expectation, that we all should run our single-user Mastodon instances. I feel like the expectation is that a sysadmin with enough knowledge will maintain an instance for many users. This seems to be the norm.
That, or you go to Mastohost and pay someone else for your own single-user instance.
You’re not expected to do that, is my point.
Not true. Many people are complaining about the unmanaged proliferation of dependencies and tools.
Most projects of this size and complexity don’t need more than one language, bulky javascript frameworks, or separate caching and database services.
This is making it difficult to package Mastodon and Pleroma in Debian and Ubuntu, and making it more difficult for people to make the service really decentralized.
I’m not going to defend the reality of what NPM packaging looks like right now because it sucks but that’s the ecosystem we’re stuck with for the time being until something better comes along. As with social networks, packaging systems are also about network effects.
But you can’t deny that this is the norm today. Well, you can, but you would be wrong.
This is making difficult to package Mastodon and Pleroma in Debian and Ubuntu
I’m sure it is, because dpkg is a wholly unsuitable tool for this use-case. You shouldn’t even try. Anyone who doesn’t know how to set these things up themselves should use the Docker container.
I think the most difficult part of the Debian packaging would be the js deps, correct?
Yes and no. Unvendorizing dependencies is done mostly for security and requires a lot of work depending on the number of dependencies. Sometimes js libraries don’t create serious security concerns because they are only run client-side and can be left in vendorized form.
The Ruby libraries can also be difficult to unvendorize because many upstream developers introduce breaking changes often. They care little about backward compatibility, packaging and security.
Yet server-side code is more security-critical, and that becomes a problem. And it’s getting even worse with new languages that strongly encourage static linking and vendorization.
I can’t believe even Debian adopted the Googlism of “vendor” instead of “bundle”.
That aside, Rust? In Mastodon? I guess the Ruby gems it requires would be the bigger problem?
The use of the word is mine: I just heard people using “vendor” often. It’s not “adopted by Debian”.
I don’t understand the second part: maybe you misread Ruby as Rust in my text?
No, I really just don’t know what Rust has to do with Mastodon. There’s Rust in there somewhere? I just didn’t notice.
AFAICT there is no Rust in the repo (at least at the moment).
Wow, I’m so dumb, I keep seeing Rust where there is none and misunderstanding you, so sorry!
Great. Then have two implementations, one for users with large footprints, and another for casual users with five friends.
It is a reasonable stack if you will devote 1+ servers to the task. Not for something you might want to run on your RPi next to your IRC server (a single piece of software in those stacks, too).
Having more than one implementation is healthy.
Of course it is. Which is why it’s a reasonable solution to the large stack required by the current primary implementation.
There’s really one database and one cache there. I mean, I guess technically Redis is a database, but it’s almost always used for caching and not as a DB layer like PSQL.
You can always write your own server if you want in whatever language you choose if you feel like Ruby/Node is too much. Or, like that other guy said, you can just use Docker.
There’s really one database and one cache there. I mean, I guess technically Redis is a database, but it’s almost always used for caching . . .
A project that can run on a single instance of the application binary absolutely does not need a cache. Nor does it need a pub/sub or messaging system outside of its process space.
It’s more likely that Redis is being used for pub/sub messaging and job queuing.
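For the pub/sub half of that claim, a quick illustration with redis-cli (the timeline:1 channel name is illustrative of the pattern streaming servers use, not taken from Mastodon’s source):
redis-cli SUBSCRIBE timeline:1
# …and in another shell, deliver an event to every subscriber:
redis-cli PUBLISH timeline:1 '{"event":"update"}'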
This does not seem like a reasonable set of dependencies to me
Huh. I must be just used to this, then. At work I need to use or at least somewhat understand,
Postgres
Python 2
Python 3
Django
Ansible
AWS
Git (actually, Mercurial, but this is my choice to avoid using git)
Redis
Concourse
Docker
Emacs (My choice, but I could pick anything else)
Node
nginx
Flask
cron
Linux
RabbitMQ
Celery
Vagrant (well, optional, I actually do a little extra work to have everything native and avoid a VM)
The occasional bit of C code
and so on and so forth.
Do I just work at a terrible place or is this a reasonable amount of things to have to deal with in this business? I honestly don’t know.
To me Mastodon’s requirements seem like a pretty standard Rails application. I’m not even sure why Redis is considered another db – it seems like an in-memory cache with optional disk persistence is a different thing than a persistent-only RDBMS. Nor do I even see much of a problem with two interpreted languages – the alternative would be to have js everywhere, since you can’t have Python or Ruby in a web browser, and js just isn’t a pleasant language for certain tasks.
I can work with all that and more if you pay me. For stuff I’m running at home on my own time, fuck no. When I shut my laptop to leave the office, it stays shut until I’m back again in the morning, or I get paged.
So is Mastodon unusual for a Rails program? I wonder if it’s simply unreasonable to ask people to run their own Rails installation. I honestly don’t know.
Given the number of Mastodon instances out there, though, it seems that most people manage. How?
That looks like a bog-standard, very minimal rails stack with a JS frontend. I’m honestly not sure how one could simplify it below that without dropping the JS on the web frontend and any caching, both of which seem like a bad idea.
There’s no need to require node. The compilation should happen at release time, and the release download tarball should contain all the JS you need.
lol “download tarball”, you’re old, dude.
Just you wait another twenty years, and you too will be screaming at the kids to get off your lawn.
You could remove Rails and use something Node-based for the backend. I’m not claiming that’s a good idea (in fact it’s probably not very reasonable), but it’d remove that dependency?
it could just have been a go or rust binary or something along those lines, with an embedded db like bolt or sqlite
edit: though the reason i ignore mastodon is the same as cullum, culture doesn’t seem interesting, at least on mastodon.social
If security or privacy focused, I’d try a combo like this:
Safe language with minimal runtime that compiles to native code and Javascript. Web framework in that language for dynamic stuff.
Lwan web server for static content.
SQLite for database.
Whatever is needed to combine them.
Combo will be smaller, faster, more reliable, and more secure.
I don’t think this is unusual for a Rails app. I just don’t want to set up or manage a Rails app in my free time. Other people may want to, but I don’t.
I don’t think it’s reasonable to compare professional requirements and personal requirements.
The thing is, Mastodon is meant to be used on-premise. If you’re building a service you host, knock yourself out! Use 40 programming languages and 40 DBs at the same time. But if you want me to install it, keep it simple :)
Personally, setting up all that seems like too much work for a home server, but maybe I’m just lazy. I had a similar issue when setting up Matrix and ran into an error message that I just didn’t have the heart to debug, given the number of moving parts I had to install.
If you can use Debian, try installing synapse via their repository, it works really nicely for me so far: https://matrix.org/packages/debian/
Reading other comments about the horror that is Docker, it is a wonder that you dare propose to install an entire OS only to run a Matrix server. ;)
i’m not completely sure which parts of your comment are sarcasm :)
Your list there has lots of tools with overlapping functionality, which seems like pointless redundancy. Just pick Flask OR Django. Just pick Python 3 or Node, just pick Docker or Vagrant; make a choice, remove useless and redundant things.
We have some Django applications and we have some Flask applications. They have different lineages. One we forked and one we made ourselves.
Alternatively you can install it using Docker as described here.
I think it’s kinda sad that the solution to “control your own toots” is “give up control of your computer and install this giant blob of software”.
Piling another forty years of hexadecimal Unix sludge on top of forty years of slightly different hexadecimal Unix sludge to improve our ability to ship software artifacts … it’s an aesthetic nightmare. But I don’t fully understand what our alternatives are.
I’ve never been happier to be out of the business of having to think about this in anything but the most cursory detail.
I mean, how is that different from running any binary at the end of the day? Unless you’re compiling everything from scratch on the machine starting from the kernel. Running Mastodon from Docker is really no different. And it’s not like anybody is stopping you from either making your own Dockerfile, or just setting up directly on your machine by hand. The original complaint was that it’s too much work, and if that’s the case you have a simple packaged solution. If you don’t like it then roll up your sleeves and do it by hand. I really don’t see the problem here, I’m afraid.
“It’s too much work” is a problem.
Unless you’re compiling everything from scratch on the machine starting from the kernel
I use NixOS. I have a set of keys that I set as trusted for signature verification of binaries. The binaries are a cache of the build derivation, so I could theoretically build the software from scratch, if I wanted to, or to verify that the binaries are the same as the cached versions.
Right, but if you feel strongly about that then you can make your own Dockerfile from source. The discussion is regarding whether there’s a simple way to get an instance up and running, and there is.
Docker containers raise a lot of questions though, even if you use a Dockerfile.
Nix answers these pretty well and fairly accurately.
Unless you’re compiling everything from scratch on the machine starting from the kernel.
You mean starting with writing a bootstrapping compiler in assembly, then writing your own full-featured compiler and compiling it with the bootstrapping compiler. Then moving on to compiling the kernel.
No no, your assembler could be compromised ;)
Better write raw machine code directly onto the disk. Using, perhaps, a magnetized needle and a steady hand, or maybe a butterfly.
My bootstrapping concept was having the device boot a program from ROM that takes in the user-supplied, initial program via I/O into RAM. Then passes execution to it. You enter the binary through one of those Morse code things with four buttons: 0, 1, backspace, and enter. Begins executing on enter.
Gotta input the keyboard driver next in binary to use a keyboard. Then the display driver, blind, using the keyboard. Then the storage driver to save things. Then the OS and other components. ;)
That sounds reasonable and admirable: http://bootstrappable.org/projects.html https://www.gnu.org/software/guix/manual/en/html_node/Bootstrapping.html
If I deploy three Go apps on top of a bare OS (picked Go since it has static binaries), and the Nginx server in front of all 3 of them uses OpenSSL, then I have one OpenSSL to patch whenever the inevitable CVE rolls around. If I deploy three Docker container apps on top of a bare OS, now I have four OpenSSLs to patch - three in the containers and one in my base OS. This complexity balloons very quickly which is terrible for user control. Hell, I have so little control over my one operating system that I had to carefully write a custom tool just to make sure I didn’t miss logfile lines in batch summaries created by cron. How am I supposed to manage four? And three with radically different tooling and methodology to boot.
And Docker upstream, AFAIK, has provided nothing to help with the security problem which is probably why known security vulnerabilities in Docker images are rampant. If they have I would like to know because if it’s decent I would switch to it immediately. See this blog post for more about this problem (especially including links) and how we “solved” it in pump.io (spoiler: it’s a giant hack).
That’s not how any of this works. You package the bare minimum needed to run the app in the Docker container, then you front all your containers with a single Nginx server that handles SSL. Meanwhile, there are plenty of great tools, like Dokku, for managing Docker-based infrastructure. Here’s how you provision a server using Let’s Encrypt with Dokku:
viewing logs isn’t rocket science either:
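Roughly, assuming the dokku-letsencrypt plugin (subcommand spellings have varied across plugin versions, and myapp is a placeholder app name):
sudo dokku plugin:install https://github.com/dokku/dokku-letsencrypt.git
dokku config:set --no-restart myapp DOKKU_LETSENCRYPT_EMAIL=you@example.com
dokku letsencrypt:enable myapp
dokku logs myapp -t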
OK, so OpenSSL was a bad example. Fair enough. But I think my point still stands - you’ll tend to have at least some duplicate libraries across Docker containers. There’s tooling around managing security vulnerabilities in language-level dependencies; see for example Snyk. But Docker imports the entire native package manager into the “static binary” and I don’t know of any tooling that can track problems in Docker images like that. I guess I could use Clair through Quay but… I don’t know. This doesn’t feel like as nice or as polished a solution somehow. As an image maintainer I’ve taken on a big manual burden of keeping up with native security updates in addition to those my application actually directly needs, when normally I could rely on admins to do that, probably with lots of automation.
you’ll tend to have at least some duplicate libraries across Docker containers
That is literally the entire point. Application dependencies must be separate from one another, because even on a tight-knit team keeping n applications in perfect lockstep is impossible.
OS dependencies are different than application dependencies. I can apply a libc patch on my Debian server with no worry because I know Debian works hard to create a stable base server environment. That’s different than application dependencies, where two applications are much more likely to require conflicting versions of libraries.
Now, I run most of my stuff on a single server so I’m very used to a heterogeneous environment. Maybe that’s biasing me against Docker. But isn’t that the usecase we’re discussing here anyway? How someone with just a hobbyist server can run Mastodon?
Thinking about this more, I feel like a big part of what bothers me about Docker, and therefore about Clair, is that there’s no package manifest. Dockerfile does not count, because that’s not actually a package manifest, it’s just a list of commands. I can’t e.g. build a lockfile format on top of that, which is what tools like Snyk analyze. Clair is the equivalent of having to run npm install and then go trawling through node_modules looking for known vulnerable code instead of just looking at the lockfile. More broadly, because Docker lacks any notion of a package manifest, it seems to me that while Docker images are immutable once built, the build process that leads you there cannot be made deterministic. This is what makes it hard to keep track of the stuff inside them. I will have to think about this more - as I write this comment I’m wondering if my complaints about duplicated libraries and tracking security there are an instance of the XY problem or if they really are separate things in my mind.
Maybe I am looking for something like Nix or Guix inside a Docker container. Guix at least can export Docker containers; I suppose I should look into that.
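Guix’s Docker export looks like this (guile is just an example package; guix pack prints the resulting store path, which the illustrative /gnu/store/… path below stands in for):
guix pack -f docker -S /bin=bin guile
docker load < /gnu/store/…-docker-pack.tar.gz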
OS dependencies are different than application dependencies.
Yes, agreed.
Thinking about this more I feel like a big part of what bothers me about Docker, and therefore about Clair, is that there’s no package manifest. Dockerfile does not count, because that’s not actually a package manifest, it’s just a list of commands. I can’t e.g. build a lockfile format on top of that, which is what tools like Snyk analyze.
You don’t need a container to tell you these things. Application dependencies can be checked for exploits straight from the code repo, e.g. with brakeman. Both the Gemfile.lock and yarn.lock are available from the root of the repo.
The container artifacts are most likely built automatically for every merge to master, and that entails doing a full system update from the apt repository. So in reality, while not as deterministic as the lockfiles, the system deps in a container are likely to be significantly fresher than in a regular server environment.
You’d want to track security vulnerabilities outside your images though. You’d do it at dev time, and update your Dockerfile with updated dependencies when you publish the application. Think of Docker as just a packaging mechanism. It’s the same as making an uberjar on the JVM. You package all your code into a container, and run the container. When you want to make updates, you blow the old one away and run a new one.
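In that spirit, a generic sketch of what such a Dockerfile looks like for a Rails-style app (the image tag and commands are illustrative, not Mastodon’s actual Dockerfile):
FROM ruby:2.6-slim
WORKDIR /app
# Copy lockfiles first so the dependency layer is rebuilt only when they change
COPY Gemfile Gemfile.lock ./
RUN bundle install --deployment
COPY . .
CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]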
I have only rarely used Docker, and am certainly no booster, so keep that in mind as I ask this.
From the perspective of “install this giant blob of software”, do you see a docker deployment being that different from a single large binary? Particularly the notion of the control that you “give up”, how does that differ between Docker and $ALTERNATIVE?
Ideally one would choose door number three, something not so large and inauditable. The complaint is not literally about Docker, but the circumstances which have resulted in Docker being the most viable deployment option.
You have the Dockerfile and can reconstruct. You haven’t given up control.
Is there a youtube video I can watch of somebody building a Mastodon Docker image from scratch?
I do not know of one.
I totally agree as well, and I wish authors would s/Mastodon/Fediverse/ in their articles. As others have noted, Pleroma is another good choice, and others are getting into the game - NextCloud added fediverse node support in their most recent release, as a for-instance.
I tried running my own instance for several months, and it eventually blew up. In addition to the large set of dependencies, the system is overall quite complex. I had several devs from the project look at my instance, and the only thing they could say is it was a “back-end problem” (My instance had stopped getting new posts).
I gave up and am now using somebody else’s :) I love the fediverse though, it’s a fascinating place.
I just use the official Docker containers. The tootsuite/mastodon container can be used to launch web, streaming, sidekiq and even database migrations. Then you just need an nginx container, a redis container, a postgres container and an optional Elasticsearch container. I run it all on a 2GB/1vCPU Vultr node (with the NJ data center block store, because you will need a lot of space) and it works fairly well (I only have ~10 users; small private server).
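For reference, an abbreviated sketch of that arrangement in docker-compose form, from memory of the tootsuite compose file (volumes, ports and env files omitted; exact image tags and commands may differ between releases):
version: '3'
services:
  db:
    image: postgres:9.6-alpine
  redis:
    image: redis:5-alpine
  web:
    image: tootsuite/mastodon
    command: bundle exec rails s -p 3000
    depends_on: [db, redis]
  streaming:
    image: tootsuite/mastodon
    command: node ./streaming
    depends_on: [db, redis]
  sidekiq:
    image: tootsuite/mastodon
    command: bundle exec sidekiq
    depends_on: [db, redis]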
In the past I would have agreed with you (and it’s the reason I didn’t try out Diaspora years ago when it came out), but containers have made it easier. I do realize they both solve and cause problems, and by no means think they’re the be-all and end-all of tech, but they do make running stuff like this a lot easier.
If anyone wants to find me, I’m @djsumdog@hitchhiker.social
Given that there’s a space for your Twitter handle, I wish Lobste.rs had a Mastodon slot as well :)
Wait, you’re also forgetting systemd to keep all those processes humming… :)
You’re right that this is clearly too much: I have run such systems for work (Rails is pretty common), but would probably not do that for fun. I am amazed by, and thankful for, the people who volunteer the effort to run all this on their weekends.
Pleroma does look simpler… If I really wanted to run my own instance, I’d look in that direction. ¯\_(ツ)_/¯
I’m waiting for urbit.org to reach usability, which by my own arbitrary feeling of usability I expect to happen late this year. Then the issue is coming up to speed on a new language and an integrated network, OS and build system.
Urbit is apparently creating a feudal society. (Should note that I haven’t really dug into that thread for several years and am mostly taking @pushcx at his word.)
The feudal society meme is just not true, and, BTW, Yarvin is no longer associated with Urbit. https://urbit.org/primer/
I would love to have (or make) a solution that could be used locally with SQLite, and in AWS with Lambda, API Gateway and DynamoDB. That would allow scaling cost and privacy/control.
https://github.com/deoxxa/don is sort of in that direction (single binary, single-file sqlite database).
The branding of the Fediverse under the banner of just one application is a little annoying.
I agree. Mastodon is software, the Fediverse is a place.
Exactly, it’s not the “Mastodon network”
🤦♂️
Much as I like the notion of re-decentralizing social networking, I feel that Mastodon’s emphasis on Twitter-like brevity means it is still part of the problem and not part of the solution.
Mastodon does not solve 1) the addiction problem of an app with variable rewards 2) short posts leading to stunted discourse 3) virtual dogpiles 4) people talking to similar people 5) the lack of soft disapproval you see in real spaces (with body language, etc) and many more problems. I don’t think it was meant to solve all these problems, however.
All of these have been addressed to some degree, but generally not in a deep way & generally not until relatively late in the process. Gargron’s focus seemed to be on making a twitter clone, not on engineering good incentives, so unless a problem has been brought up directly his decisions tend to be along the lines of “let’s just do what twitter does”.
The way #1 and #3 have been addressed has been mentioned below. I should note, with regard to #1, that mastodon’s web interface also has substantially more detailed visibility settings for notifications than twitter’s (and notifications can be disabled entirely), and some instances have patched the interface so that metrics are completely invisible.
With regard to #2, mastodon started off with 500 character posts, & a number of instances have much larger post size limits. Threading is not handled particularly well, though (and mastodon & pleroma have different threading behaviors).
With regard to #4, mastodon (and the fediverse in general) is not optimized for creating the kind of inter-community collisions that twitter has. Rando-in-my-mentions problems are a little less frequent. I think, because there’s a tiered community structure rather than a flat one, this makes inter-community communication a bit easier simply on the grounds that it’s generally a decision made by both parties, rather than an imposition by an impersonal algorithm. Mastodon also does not implement algorithmic post ordering – only reverse-chronological feeds – so to the extent that trendism would normally accelerate an echo chamber effect through algorithmic ranking, it doesn’t here (and it’s possible to hide all boosts, as well). (My client, fern, treats boosts as clones of the original post & applies the same read status to it. Since fern operates primarily by jumping to last/next unread post, boosts are functionally only visible the first time you read them or their original.)
With regard to #5, I think social networks very quickly develop their own forms of soft disapproval. (As an autistic, I gotta say: body language for soft disapproval isn’t unambiguous or reliable enough for me, so I much prefer language-based soft disapproval, which no text-oriented social network elides.)
The (3) problem is addressed, somewhat. For example, this is why boosts cannot include a reply in them. The rationale being that you either give someone else a voice or you don’t – you don’t get to also imma-let-ya-finish.
3 is also addressed by making search opt-in instead of opt-out. On twitter it’s common for people to search for topics they disagree strongly with just to get in arguments with random users they would otherwise have no contact with. In Mastodon the search function only works on hashtags, so it’s your choice whether you want your post to be visible in the results or not.
Similarly 1 is addressed by not showing boost/fav counts in the main view.
They’re definitely thinking critically about the impact of certain features and how they can have unintended negative consequences.
In my experience of running fediverse software for an already established userbase of a website/product (i.e., not tech people), the users are stymied about why there is no quote reply. They genuinely want it.
All the GUI clients I’m aware of are just web clients, though some are substantially smaller and lighter than the default web interface (ex. https://brutaldon.online/). This is probably because mastodon messages (and probably pleroma & gnu social ones too) contain a constrained set of arbitrary html tags, & processing html fragments in a non-webtech context is a pain. (Luckily, the only ones that actually matter are <a> and <br>, and everything else can be completely stripped. There is no formatting or anything.)
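If it helps, a self-contained sketch of that stripping approach in Python (the tag handling reflects my reading of the comment above, not any particular client’s code):
from html.parser import HTMLParser

class StatusText(HTMLParser):
    # Flatten a status' HTML to plain text, keeping only line breaks.
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag == "br":          # keep line breaks
            self.chunks.append("\n")
    def handle_endtag(self, tag):
        if tag == "p":           # paragraph boundaries become blank lines
            self.chunks.append("\n")
    def handle_data(self, data):
        self.chunks.append(data) # keep all text, drop every other tag
    def text(self):
        return "".join(self.chunks).strip()

p = StatusText()
p.feed('<p>hello <a href="https://example.com">world</a><br>again</p>')
print(p.text())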
There’s a nice command-line client called https://github.com/magicalraccoon/tootstream, & I wrote a console/curses client called https://github.com/enkiv2/misc/blob/master/fern. Both of those are python & use the mastodon.py library. I haven’t tested them heavily on different systems but I figure they should work on any modern-ish unix with a recent-ish python, including OSX.
The “masto-” prefix is quite unfortunate. “Mastonaut” sounds like a designation given to some NASA test subject doing trials of the effects of airborne semen in zero-g.
First attested 1813, from the New Latin genus name Mastodon (1806), coined by French naturalist Georges Cuvier, from Ancient Greek μαστός (mastós, “breast”) + ὀδούς (odoús, “tooth”), from the similarity of the mammilloid projections on the crowns of the extinct mammal’s molars.
Lock-in is what keeps a single supplier or protocol going for a long time, most of the time. Most people are on Facebook because the ecosystem forces them to use it and/or they don’t want to lose the important stuff they put into it. Twitter is similar on the first point. Likewise operating systems, container techs, legacy systems, human traditions, and so on.
A proven strategy is to create something that takes on a life of its own, self-perpetuating, with a cost to not being involved and/or leaving. Mastodon and the Fediverse are 100% unnecessary and easy to leave for close to 100% of people on social media. So, they can’t be anything like what I described above. I’m not sure how to overcome that obstacle for social media.
It’s a good definition of “made it” for a product. Lock in basically means that you have a near-monopoly over your existing customer base, and can squeeze them for profits a lot harder than you could if you had healthy competition.
Luckily, the fediverse is not a product. There are many implementations (though not as many as I’d like because of the complexity of the spec), and it’s relatively easy for people to hop between identities or leave the network entirely.
I worry about articles like OP’s because the fediverse remains a pleasant place to be primarily because the folks who care about marketing reach – the folks who think lock-in is desirable even when they’re not personally making money off it – have largely avoided it. Even if it becomes quite big, we ought to carefully avoid giving anybody the idea that it can be used as part of a money-making scheme. Right now, people who like it are there, and they make the community better when they can, and if they stop liking it they leave; that’s a recipe for a good community.
How best can I put a button on my websites that allows users to share the URL like they would for Twitter, Facebook, etc.? That viral feature is essential for me sharing anything. I and millions of other internet users are simply too lazy to copy the URL and go load up Mastodon to paste it.
The Mastodon web UI registers a protocol handler for URLs that begin with web+mastodon://, e.g. specially crafted “follow me” buttons can automatically open your correct instance from any webpage.
web+mastodon://follow?uri=alice@example.com opens follow dialog for alice
web+mastodon://share?text=Lorem+ipsum opens new toot dialog with preset text “Lorem ipsum”
You can use that to make a sharing button or link on your website. I have it at the bottom of each page in my statically-generated blog, for example.
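A minimal version of such a link, reusing the share URL format shown above (the preset text and link label are placeholders):
<a href="web+mastodon://share?text=Lorem+ipsum">Share on Mastodon</a>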
If your response to this article was “But what about slightly less well known social networking software”, then you probably were not the intended audience.
It’s appropriate to call out the article for missing the entire point of the fediverse. Treating it like just another Twitter alternative is just asking for folks to judge it by the same criteria as they’d judge Twitter, and it doesn’t hold up that well that way. It’s only when you look at the broader network and interoperability possibilities that you can see the advantages over Twitter.
I agree with that. I was just saying that probably this article wasn’t written for you, and the people it was written for likely will not care about what you care about.
In the Mastodon universe, technically-minded users are encouraged to run their own node. Sounds good. To install a Mastodon node, I am instructed to install recent versions of
This does not seem like a reasonable set of dependencies to me. In particular, using two interpreted languages, two databases, and a separate web server presumably acting as a frontend, all seems like overkill. I look forward to when the Mastodon devs are able to tame this complexity, and reduce the codebase to a something like single (ideally non-interpreted) language and a single database. Or, even better, a single binary that manages its own data on disk, using e.g. embedded SQLite. Until then, I’ll pass.
Totally agree. I heard Pleroma has less dependencies though it looks like it depends a bit on which OS you’re running.
Compared to Mastodon, Pleroma is a piece of cake to install; I followed their tutorial and had an instance set up and running in about twenty minutes on a fresh server.
From memory all I needed install was Nginx, Elixir and Postgres, two of which were already set up and configured for other projects.
My server is a quad core ARMv7 with 2GB RAM and averages maybe 0.5 load when I hit heavy usage… it does transit a lot of traffic though, since the 1st January my server has pushed out 530GB of traffic.
doesnt Elixir require Erlang to run?
It does. Some linux distributions will require adding the Erlang repo before installing elixir but most seem to have it already included: https://elixir-lang.org/install.html#unix-and-unix-like meaning its a simple one line command to install e.g
pkg install elixir
I’m not a huge social person, but I had only heard of Pleroma without investigating it. After looking a bit more, I don’t really understand why someone would choose Mastodon over Pleroma. They do basically the same thing, but Pleroma takes less resources. Anyone who chose Mastodon over Pleroma have a reason why?
Mastodon has more features right now. That’s about it.
Pleroma didn’t have releases for a looong time. They finally started down that route. They also don’t have official Docker containers and config changes require recompiling (just due to the way they have Elixir and builds setup). It was a pain to write my Docker container for it.
Pleroma also lacks moderation tools (you need to add blocked domains to the config), it doesn’t allow remote follow/interactions (if you see a status elsewhere on Mastodon, you can click remote-reply, it will ask your server name, redirect you to your server and then you can reply to someone you don’t follow) and a couple of other features.
Misskey is another alternative that looks promising.
I think that might just be the Pleroma FA - if I’m using the Mastodon FE, I get the same interaction on my Pleroma instance replying to someone on a different instance as when I’m using octodon.social (unless I’m radically misunderstanding your sentence)
Thanks, this is a really great response. I actually took a quick look at their docs and saw they didn’t have any FreeBSD guide set up, so I stopped looking. I use Vultr’s $2.50 FreeBSD vps and I didn’t feel like fiddling with anything that particular night. I wish they did have an official docker container for it.
Pleroma has a bunch of fiddly issues - it doesn’t do streaming properly (
bitlbee-mastodon
won’t work), the UI doesn’t have any “compose DM” functionality that I can find, I had huge problems with a long password, etc. But they’re mostly minor annoyances than show stoppers for now.It doesn’t depend - they’ve just gone further to define what to do for each OS!
I guess it’s mainly the ImageMagick dependency for OpenBSD that got me thinking otherwise.
OpenBSD
Debian Based Distributions
imagemagick is purely optional. The only hard dependencies are postgresql and elixir (and some reverse proxy like nginx)
imagemagick is strongly recommended though so you can enable the Mogrify filter on uploads and actually strip exif data
Specifically, quoting from their readme:
As to the DB, they seem to use Postgres.
The author of the app posted his list of differences, but I’m not sure if it’s complete and what it really means. I haven’t found a better comparison yet, however.
Unfortunately I have to agree. I self-host 99% of my online services, and sysadmin for a living. I tried mastodon for a few months, but its installation and management process was far more complicated than anything I’m used to. (I run everything on OpenBSD, so the docker image isn’t an option for me.)
In addition to getting NodeJS, Ruby, and all the other dependencies installed, I had to write 3 separate rc files to run 3 separate daemons to keep the thing running. Compared to something like Gitea, which just requires running a single Go executable and a Postgres DB, it was a massive amount of toil.
The mastodon culture really wasn’t a fit for me either. Even in technical spaces, there was a huge amount of politics/soapboxing. I realized I hadn’t even logged in for a few weeks so I just canned my instance.
Over the past year I’ve given up on the whole social network thing and stick to Matrix/IRC/XMPP/email. I’ve been much happier as a result and there’s a plethora of quality native clients (many are text-based). I’m especially happy on Matrix now that I’ve discovered weechat-matrix.
I don’t mean to discourage federated projects like Mastodon though - I’m always a fan of anything involving well-known URLs or SRV records!
Fortunately the “fediverse” is glued by a standard protocol (ActivityPub) that is quite simple so if one implementation (e.g. Mastodon) doesn’t suit someone’s needs it’s not a big problem - just searching for a better one and it still interconnects with the rest of the world.
(I’ve written a small proof-of-concept ActivityPub clients and servers, it works and federates, see also this).
For me the more important problems are not implementation issues with one server but rather design issues within the protocol. For example established standards such as e-mail or XMPP have a way to delegate responsibility of running a server of a particular protocol but still use bare domain for user identifies. In e-mail that is MX records in XMPP it’s DNS SRV records. ActivityPub doesn’t demand anything like it and even though Mastodon tries to provide something that would fix that issue - WebFinger, other implementations are not interested in that (e.g. Pleroma). And then one is left with instances such as “social.company.com”.
For example - Pleroma’s developer’s id is
lain@pleroma.soykaf.com
.This is a completely reasonable and uncontroversial set of dependencies for a web app. Some of the largest web apps on the Internet run this stack. That is a good thing, because when Fediverse nodes need to scale there are well-understood ways of doing it.
Success in social networking is entirely about network effects and that means low barrier to entry is table stakes. Yeah, it’d be cool if someone built the type of node you’re talking about, but it would be a curiosity pursued only by the most technical users. If that were the barrier to entry for the network, there would be no network.
Yes, but not for a web app I’m expected to run on my own time, for fun.
I’m not sure that’s the exact expectation, that we all should run our single-user Mastodon instances. I feel like the expectation is that sysadmin with enough knowledge will maintain an instance for many users. This seems to be the norm.
That, or you go to Mastohost and pay someone else for your own single-user instance.
You’re not expected to do that is my point.
Not true. Many people are complaining about the unmanaged proliferation of dependencies and tools. Most projects of this size and complexity don’t need more than one language, bulky javascript frameworks, caching and database services.
This is making difficult to package Mastodon and Pleroma in Debian and Ubuntu and making it more difficult for people to make the service really decentralized.
I’m not going to defend the reality of what NPM packaging looks like right now because it sucks but that’s the ecosystem we’re stuck with for the time being until something better comes along. As with social networks, packaging systems are also about network effects.
But you can’t deny that this is the norm today. Well, you can, but you would be wrong.
I’m sure it is, because dpkg is a wholly unsuitable tool for this use-case. You shouldn’t even try. Anyone who doesn’t know how to set these things up themselves should use the Docker container.
I think the most difficult part of the Debian packaging would be the js deps, correct?
Yes and no. Unvendorizing dependencies is done mostly for security and requires a lot of work depending on the amount of dependencies. Sometimes js libraries don’t create serious security concerns because they are only run client-side and can be left in vendorized form.
The Ruby libraries can be also difficult to unvendorize because many upstream developers introduce breaking changes often. They care little about backward compatibility, packaging and security.
Yet server-side code is more security-critical and that becomes a problem. And it’s getting even worse with new languages that strongly encourage static linking and vendorization.
I can’t believe even Debian adopted the Googlism of “vendor” instead of “bundle”.
That aside, Rust? In Mastodon? I guess the Ruby gems it requires would be the bigger problem?
The use of the word is mine: I just heard people using “vendor” often. It’s not “adopted by Debian”.
I don’t understand the second part: maybe you misread Ruby for Rust in my text?
No, I really just don’t know what Rust has to do with Mastodon. There’s Rust in there somewhere? I just didn’t notice.
AFAICT there is no Rust in the repo (at least at the moment).
Wow, I’m so dumb, I keep seeing Rust where there is none and misunderstanding you, so sorry!
Great. Then have two implementations, one for users with large footprints, and another for casual users with five friends.
It is a reasonable stack if you will devote 1+ servers to the task. Not for something you might want to run on your RPI next to your irc server (a single piece of software in those stacks too)
Having more than one implementation is healthy.
Of course it is. Which is why it’s a reasonable solution to the large stack required by the current primary implementation.
There’s really one database and one cache there. I mean, I guess technically Redis is a database, but it’s almost always used for caching and not as a DB layer like PSQL.
You can always write your own server if you want in whatever language you choose if you feel like Ruby/Node is too much. Or, like that other guy said, you can just use Docker.
A project that can run on a single instance of the application binary absolutely does not need a cache. Nor does it need a pub/sub or messaging system outside of its process space.
It’s more likely that Redis is being used for pub/sub messaging and job queuing.
Huh. I must be just used to this, then. At work I need to use or at least somewhat understand,
and so on and so forth.
Do I just work at a terrible place or is this a reasonable amount of things to have to deal with in this business? I honestly don’t know.
To me Mastodon’s requirements seem like a pretty standard Rails application. I’m not even sure why Redis is considered another db – it seems like an in-memory cache with optional disk persistence is a different thing than a persistent-only RDBMS. Nor do I even see much of a problem with two interpreted languages – the alternative would be to have js everywhere, since you can’t have Python or Ruby in a web browser, and js just isn’t a pleasant language for certain tasks.
I can work with all that and more if you pay me. For stuff I’m running at home on my own time, fuck no. When I shut my laptop to leave the office, it stays shut until I’m back again in the morning, or I get paged.
So is Mastodon unusual for a Rails program? I wonder if it’s simply unreasonable to ask people to run their own Rails installation. I honestly don’t know.
Given the amount of Mastodon instances out there, though, it seems that most people manage. How?
That looks like a bog-standard, very minimal rails stack with a JS frontend. I’m honestly not sure how one could simplify it below that without dropping the JS on the web frontend and any caching, both of which seem like a bad idea.
There’s no need to require node. The compilation should happen at release time, and the release download tarball should contain all the JS you need.
lol “download tarball”, you’re old, dude.
Just you wait another twenty years, and you too will be screaming at the kids to get off your lawn.
You could remove Rails and use something Node-based for the backend. I’m not claiming that’s a good idea (in fact it’s probably not very reasonable), but it’d remove that dependency?
it could just have been a go or rust binary or something along those lines, with an embedded db like bolt or sqlite
edit: though the reason i ignore mastodon is the same as cullum, culture doesn’t seem interesting, at least on mastodon.social
If security or privacy focused, I’d try a combo like this:
Safe language with minimal runtime that compiles to native code and Javascript. Web framework in that language for dynamic stuff.
Lwan web server for static content.
SQLite for database.
Whatever is needed to combine them.
Combo will be smaller, faster, more reliable, and more secure.
I don’t think this is unusual for a Rails app. I just don’t want to set up or manage a Rails app in my free time. Other people may want to, but I don’t.
I don’t think it’s reasonable to compare professional requirements and personal requirements.
The thing is, Mastodon is meant to be used on-premise. If you’re building a service you host, knock yourself out! Use 40 programming languages and 40 DBs at the same time. But if you want me to install it, keep it simple :)
Personally, setting up all that seems like too much work for a home server, but maybe I’m just lazy. I had a similar issue when setting up Matrix and ran into an error message that I just didn’t have the heart to debug, given the amount of moving parts which I had to install.
If you can use debian, try installing synapse via their repository, it works really nice for me so far: https://matrix.org/packages/debian/
Reading other comments about the horror that is Docker, it is a wonder that you dare propose to install an entire OS only to run a Matrix server. ;)
i’m not completely sure which parts of you comment are sarcasm :)
Your list there has lots of tools with overlapping functionality, seems like pointless redundancy. Just pick flask OR django. Just pick python3 or node, just pick docker or vagrant, make a choice, remove useless and redundant things.
We have some Django applications and we have some Flask applications. They have different lineages. One we forked and one we made ourselves.
Alternatively you install it using the Docker as described here.
I think it’s kinda sad that the solution to “control your own toots” is “give up control of your computer and install this giant blob of software”.
Piling another forty years of hexadecimal Unix sludge on top of forty years of slightly different hexadecimal Unix sludge to improve our ability to ship software artifacts … it’s an aesthetic nightmare. But I don’t fully understand what our alternatives are.
I’ve never been happier to be out of the business of having to think about this in anything but the most cursory detail.
I mean how is that different from running any binary at the end of the day. Unless you’re compiling everything from scratch on the machine starting from the kernel. Running Mastodon from Docker is really no different. And it’s not like anybody is stopping you from either making your own Dockerfile, or just setting up directly on your machine by hand. The original complaint was that it’s too much work, and if that’s a case you have a simple packaged solution. If you don’t like it then roll up the sleeves and do it by hand. I really don’t see the problem here I’m afraid.
“It’s too much work” is a problem.
I use NixOS. I have a set of keys that I set as trusted for signature verification of binaries. The binaries are a cache of the build derivation, so I could theoretically build the software from scratch, if I wanted to, or to verify that the binaries are the same as the cached versions.
Right, but if you feel strongly about that then you can make your own Dockerfile from source. The discussion is regarding whether there’s a simple way to get an instance up and running, and there is.
Docker containers raise a lot of questions though, even if you use a Dockerfile:
Nix answers these pretty will and fairly accurately.
You mean starting with writing a bootstrapping compiler in assembly, then writing your own full featured compiler and compiling it in the bootstrapping compiler. Then moving on to compiling the kernel.
No no, your assembler could be compromised ;)
Better write raw machine code directly onto the disk. Using, perhaps, a magnetized needle and a steady hand, or maybe a butterfly.
My bootstrapping concept was having the device boot a program from ROM that takes in the user-supplied, initial program via I/O into RAM. Then passes execution to it. You enter the binary through one of those Morse code things with four buttons: 0, 1, backspace, and enter. Begins executing on enter.
Gotta input the keyboard driver next in binary to use a keyboard. Then the display driver blind using the keyboard. Then storage driver to save things. Then, the OS and other components. ;)
That sounds reasonable and admirable http://bootstrappable.org/projects.html https://www.gnu.org/software/guix/manual/en/html_node/Bootstrapping.html
[Comment removed by author]
If I deploy three Go apps on top of a bare OS (picked Go since it has static binaries), and the Nginx server in front of all 3 of them uses OpenSSL, then I have one OpenSSL to patch whenever the inevitable CVE rolls around. If I deploy three Docker container apps on top of a bare OS, now I have four OpenSSLs to patch - three in the containers and one in my base OS. This complexity balloons very quickly which is terrible for user control. Hell, I have so little control over my one operating system that I had to carefully write a custom tool just to make sure I didn’t miss logfile lines in batch summaries created by cron. How am I supposed to manage four? And three with radically different tooling and methodology to boot.
And Docker upstream, AFAIK, has provided nothing to help with the security problem which is probably why known security vulnerabilities in Docker images are rampant. If they have I would like to know because if it’s decent I would switch to it immediately. See this blog post for more about this problem (especially including links) and how we “solved” it in pump.io (spoiler: it’s a giant hack).
That’s not how any of this works. You package the bare minimum needed to run the app in the Docker container, then you front all your containers with a single Nginx server that handles SSL. Meanwhile, there are plenty of great tools, like Dokku for managing Docker based infrastructure. Here’s how you provision a server using Let’s Encrypt with Dokku:
viewing logs isn’t rocker science either:
OK, so OpenSSL was a bad example. Fair enough. But I think my point still stands - you’ll tend to have at least some duplicate libraries across Docker containers. There’s tooling around managing security vulnerabilities in language-level dependencies; see for example Snyk. But Docker imports the entire native package manager into the “static binary” and I don’t know of any tooling that can track problems in Docker images like that. I guess I could use Clair through Quay but… I don’t know. This doesn’t feel like as nice of a solution or as polished somehow. As an image maintainer I’ve added a big manual burden keeping up with native security updates in addition to those my application actually directly needs, when normally I could rely on admins to do that, probably with lots of automation.
That is literally the entire point. Application dependencies must be separate from one another, because even on a tight-knit team keeping n applications in perfect lockstep is impossible.
OS dependencies are different than application dependencies. I can apply a libc patch on my Debian server with no worry because I know Debian works hard to create a stable base server environment. That’s different than application dependencies, where two applications are much more likely to require conflicting versions of libraries.
Now, I run most of my stuff on a single server so I’m very used to a heterogeneous environment. Maybe that’s biasing me against Docker. But isn’t that the usecase we’re discussing here anyway? How someone with just a hobbyist server can run Mastodon?
Thinking about this more I feel like a big part of what bothers me about Docker, and therefore about Clair, is that there’s no package manifest.
Dockerfile
does not count, because that’s not actually a package manifest, it’s just a list of commands. I can’t e.g. build a lockfile format on top of that, which is what tools like Snyk analyze. Clair is the equivalent of having to runnpm install
and then go trawling throughnode_modules
looking for known vulnerable code instead of just looking at the lockfile. More broadly, because Docker lacks any notion of a package manifest, it seems to me that while Docker images are immutable once built, the build process that leads you there cannot be made deterministic. This is what makes it hard to keep track of the stuff inside them. I will have to think about this more - as I write this comment I’m wondering if my complaints about duplicated libraries and tracking security there is an instance of the XY problem or if they really are separate things in my mind.Maybe I am looking for something like Nix or Guix inside a Docker container. Guix at least can export Docker containers; I suppose I should look into that.
Yes, agreed.
You don’t need a container to tell you these things. Application dependencies can be checked for exploits straight from the code repo, i.e. brakeman. Both the
Gemfile.lock
andyarn.lock
are available from the root of the repo.The container artifacts are most like built automatically for every merge to master, and that entails doing a full system update from the apt repository. So in reality, while not as deterministic as the lockfiles, the system deps in a container are likely to be significantly fresher than a regular server environment.
You’d want to track security vulnerabilities outside your images though. You’d do it at dev time, and update your Dockerfile with updated dependencies when you publish the application. Think of Docker as just a packaging mechanism. It’s same as making an uberjar on the JVM. You package all your code into a container, and run the container. When you want to make updates, you blow the old one away and run a new one.
I have only rarely used Docker, and am certainly no booster, so keep that in mind as I ask this.
From the perspective of “install this giant blob of software”, do you see a docker deployment being that different from a single large binary? Particularly the notion of the control that you “give up”, how does that differ between Docker and
$ALTERNATIVE
?Ideally one would choose door number three, something not so large and inauditable. The complaint is not literally about Docker, but the circumstances which have resulted in docker being the most viable deployment option.
You have the dockerfile and can reconstruct. You haven’t given up control.
Is there a youtube video I can watch of somebody building a mastodon docker image from scratch?
I do not know of one.
I totally agree as well, and I wish authors would s/Mastodon/Fediverse/ in their articles. As others have noted, Pieroma is another good choice and others are getting into the game - NextCloud added fediverse node support in their most recent release as a for-instance.
I tried running my own instance for several months, and it eventually blew up. In addition to the large set of dependencies, the system is overall quite complex. I had several devs from the project look at my instance, and the only thing they could say is it was a “back-end problem” (My instance had stopped getting new posts).
I gave up and am now using somebody else’s :) I love the fediverse though, it’s a fascinating place.
I just use the official Docker containers. The tootsuite/mastodon container can be used to launch web, streaming, sidekiq and even database migrations. Then you just need an nginx container, a redis container, a postgres container and an optional elastic search container. I run it all on a 2GB/1vCPU Vultr node (with the NJ data center block store because you will need a lot of space) and it works fairly well (I only have ~10 users; small private server).
In the past I would have agreed with you (and it’s the reason I didn’t try out Diaspora years ago when it came out), but containers have made it easier. I do realize they both solve and cause problems, and by no means think they’re the be-all and end-all of tech, but they do make running stuff like this a lot easier.
If anyone wants to find me, I’m @djsumdog@hitchhiker.social
Given that there’s a space for your Twitter handle, I wish Lobste.rs had a Mastodon slot as well :)
Wait, you’re also forgetting systemd to keep all those processes humming… :)
You’re right that this is clearly too much: I have run such systems for work (Rails is pretty common), but would probably not do that for fun. I am amazed by, and thankful for, the people who volunteer the effort to run all this on their weekends.
Pleroma does look simpler… If I really wanted to run my own instance, I’d look in that direction. ¯\_(ツ)_/¯
I’m waiting for urbit.org to reach usability, which by my own arbitrary standard I expect to happen late this year. Then the issue is coming up to speed on a new language and an integrated network, OS, and build system.
Urbit is apparently creating a feudal society. (Should note that I haven’t really dug into that thread for several years and am mostly taking @pushcx at his word.)
The feudal society meme is just not true, and, BTW, Yarvin is no longer associated with Urbit. https://urbit.org/primer/
I would love to have (or make) a solution that could be used locally with SQLite and in AWS with Lambda, API Gateway, and DynamoDB. That would allow scaling cost and privacy/control.
https://github.com/deoxxa/don is sort of in that direction (single binary, single file sqlite database).
The branding of the Fediverse under one banner with the branding of just one application is a little annoying.
I agree. Mastodon is software, the Fediverse is a place.
Exactly, it’s not the “Mastodon network”
🤦‍♂️
Much as I like the notion of re-decentralizing social networking, I feel that Mastodon’s emphasis on Twitter-like brevity means it is still part of the problem and not part of the solution.
Mastodon does not solve 1) the addiction problem of an app with variable rewards 2) short posts leading to stunted discourse 3) virtual dogpiles 4) people talking to similar people 5) the lack of soft disapproval you see in real spaces (with body language, etc) and many more problems. I don’t think it was meant to solve all these problems, however.
All of these have been addressed to some degree, but generally not in a deep way & generally not until relatively late in the process. Gargron’s focus seemed to be on making a twitter clone, not on engineering good incentives, so unless a problem has been brought up directly, his decisions tend to be along the lines of “let’s just do what twitter does”.
The way #1 and #3 have been addressed has been mentioned below. I should note, with regard to #1, that mastodon’s web interface also has substantially more detailed visibility settings for notifications than twitter’s (and notifications can be disabled entirely), and some instances have patched the interface so that metrics are completely invisible.
With regard to #2, mastodon started off with 500 character posts, & a number of instances have much larger post size limits. Threading is not handled particularly well, though (and mastodon & pleroma have different threading behaviors).
With regard to #4, mastodon (and the fediverse in general) is not optimized for creating the kind of inter-community collisions that twitter has. Rando-in-my-mentions problems are a little less frequent, I think: because there’s a tiered community structure rather than a flat one, inter-community communication is a bit easier, simply because it’s generally a decision made by both parties rather than an imposition by an impersonal algorithm. Mastodon also does not implement algorithmic post ordering – only reverse-chronological feeds – so to the extent that trendism would normally accelerate an echo-chamber effect through algorithmic ranking, it doesn’t here (and it’s possible to hide all boosts, as well). (My client, fern, treats boosts as clones of the original post & applies the same read status to them. Since fern operates primarily by jumping to the last/next unread post, boosts are functionally only visible the first time you read them or their original.)
With regard to #5, I think social networks very quickly develop their own forms of soft disapproval. (As an autistic, I gotta say: body language for soft disapproval isn’t unambiguous or reliable enough for me, so I much prefer language-based soft disapproval, which no text-oriented social network elides.)
Thanks for the info on these points. Ideally, social factors would be considered from day one in the design of any social network.
Problem #3 is addressed, somewhat. For example, this is why boosts cannot include a reply: the rationale is that you either give someone else a voice or you don’t – you don’t get to also imma-let-ya-finish.
#3 is also addressed by making search opt-in instead of opt-out. On twitter it’s common for people to search for topics they disagree strongly with just to get into arguments with random users they would otherwise have no contact with. In Mastodon the search function only works on hashtags, so it’s your choice whether you want your post to be visible in the results or not.
Similarly, #1 is addressed by not showing boost/fav counts in the main view.
They’re definitely thinking critically about the impact of certain features and how they can have unintended negative consequences.
In my experience of running fediverse software for an already established userbase of a website/product (i.e., not tech people), the users are baffled as to why there is no quote reply. They genuinely want it.
Yeah, I see it a lot.
But I also think it’s a good reason to not have it. I have seen it as the basis for abuse on Twitter.
Is there a native macOS client for Mastodon that isn’t a steaming pile of Electron?
All the GUI clients I’m aware of are just web clients, though some are substantially smaller and lighter than the default web interface (e.g. https://brutaldon.online/). This is probably because mastodon messages (and probably pleroma & gnu social ones too) contain a constrained set of arbitrary html tags, & processing html fragments in a non-webtech context is a pain. (Luckily, the only ones that actually matter are <a> and <br>, and everything else can be completely stripped. There is no formatting or anything.)
There’s a nice command-line client called https://github.com/magicalraccoon/tootstream, & I wrote a console/curses client called https://github.com/enkiv2/misc/blob/master/fern. Both of those are python & use the mastodon.py library. I haven’t tested them heavily on different systems but I figure they should work on any modern-ish unix with a recent-ish python, including OSX.
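If it helps, tootstream installs from PyPI - a hedged sketch, and exact steps may vary by version:

$ pip3 install --user tootstream
$ tootstream   # the first run walks you through connecting to your instance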
https://mastodon.technology/@brunoph/101650095611618146
This guy is making one. It’s not released yet, but you can follow to be updated.
The “masto-” prefix is quite unfortunate. “Mastonaut” sounds like a designation given to some NASA test subject doing trials of the effects of airborne semen in zero-g.
The effects of breast milk would be more appropriate (or not, depending on how tasteless you find the analogy):
https://en.wiktionary.org/wiki/mastodon
“From Ancient Greek μαστός (mastós, ‘*breast*’) + ὀδών (odṓn, ‘tooth’)” (my emphasis)
As a native Greek speaker, when I first heard about mastodon, I thought that it was some kind of gadget for breastfeeding.
The other day I was trying to say I love Mastodon by saying I have mastophilia, but that just means love of boobies…
Man, I love the Mastodon network.
When I saw the title I said “nah, still not there yet.”
I’ve now read the piece.
So, one million users in a year. How many stayed? How many of them were grandmas? How many were outside the hype train?
A social network has made it when people who don’t want accounts have them anyway. Mastodon has not made it. I shall go back to ignoring it.
That’s a weird definition of “made it”.
Lock-in is, most of the time, what keeps a single supplier or protocol going for a long time. Most people are on Facebook because the ecosystem forces them to use it and/or they don’t want to lose the important stuff they’ve put into it. Twitter is similar on the first point. The same goes for operating systems, container techs, legacy systems, human traditions, and so on.
A proven strategy is to create something that takes on a life of its own, self-perpetuating, with a cost to not being involved and/or to leaving. Mastodon and the Fediverse are 100% unnecessary and easy to leave for close to 100% of people on social media. So, they can’t be anything like what I described above. I’m not sure how to overcome that obstacle for social media.
It’s a good definition of “made it” for a product. Lock in basically means that you have a near-monopoly over your existing customer base, and can squeeze them for profits a lot harder than you could if you had healthy competition.
Luckily, the fediverse is not a product. There are many implementations (though not as many as I’d like because of the complexity of the spec), and it’s relatively easy for people to hop between identities or leave the network entirely.
I worry about articles like OP’s because the fediverse remains a pleasant place to be primarily because the folks who care about marketing reach – the folks who think lock-in is desirable even when they’re not personally making money off it – have largely avoided it. Even if it becomes quite big, we ought to carefully avoid giving anybody the idea that it can be used as part of a money-making scheme. Right now, people who like it are there, and they make the community better when they can, and if they stop liking it they leave; that’s a recipe for a good community.
And outside social networks, it probably doesn’t make too much sense.
Even in the context of social networks, it’s essentially saying “you’ve made it when your user base will take any opportunity to leave”.
Not necessarily, just that part of your userbase is only there because of the social effects.
I’m in the same boat. I have an account, but no-one I wish to follow has.
How best can I put a button on my websites that allows users to share the URL, like they would for Twitter, Facebook, etc.? That viral feature is essential for me to share anything. I and millions of other internet users are simply too lazy to copy the URL and go load up Mastodon to paste it.
The Mastodon webui registers a protocol handler for URLs that begin with web+mastodon://, e.g. specially crafted “follow me” buttons would automatically open your correct instance from any webpage.
web+mastodon://follow?uri=alice@example.com opens the follow dialog for alice.
web+mastodon://share?text=Lorem+ipsum opens a new toot dialog with the preset text “Lorem ipsum”.
You can use that to make a sharing button or link on your website. I have it at the bottom of each page in my statically-generated blog, for example.
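A hypothetical snippet for a static-site template, using jq’s @uri filter to percent-encode the page URL (the URL here is illustrative):

$ url="https://example.com/my-post"
$ echo "web+mastodon://share?text=$(jq -rn --arg v "$url" '$v|@uri')"
web+mastodon://share?text=https%3A%2F%2Fexample.com%2Fmy-post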
Cool! Are you aware of any marketing efforts to let content producers know of it?
It also seems… shortsighted to use “mastodon” in the URI instead of something like “activitypub”.
No; it was implemented in Mastodon a long time ago, and never announced anywhere except the release notes.
We ain’t.
If your response to this article was “But what about slightly less well known social networking software”, then you probably were not the intended audience.
It’s appropriate to call out the article for missing the entire point of the fediverse. Treating it like just another Twitter alternative is just asking for folks to judge it by the same criteria as they’d judge Twitter, and it doesn’t hold up that well that way. It’s only when you look at the broader network and interoperability possibilities that you can see the advantages over Twitter.
I agree with that. I was just saying that probably this article wasn’t written for you, and the people it was written for likely will not care about what you care about.