1. 1

    Company: Sendwave

    Company site: https://www.sendwave.com

    Position(s): Backend Engineer, iOS Engineer, Android Engineer

    Location: REMOTE only

    Description: In 2017, one billion immigrants worldwide sent over $600 billion home to family and friends, dwarfing foreign governmental aid. In the age of cheap, quick transfers through services like PayPal and Venmo, these people are trekking to stores to pay fees averaging over 7% for transfers that typically take 24 hours or more.

    Sendwave’s mission is to change that by making sending money anywhere in the world easy and affordable. Since 2014, our app has allowed Africans in the US, the UK, and Canada to send money instantly to mobile money wallets in Kenya, Uganda, Tanzania, and Ghana, saving our users over 70% relative to Western Union and MoneyGram.

    Tech stack: Python (Flask, SQLalchemy, Celery, Postgres, Heroku), iOS, Android

    Contact: Apply at https://boards.greenhouse.io/waveapp

    1. 9

      Almost all of these are awful for accessibility - the position-off-the-page with CSS, followed by display: none or visibility: hidden with CSS, are probably the most a11y-friendly, but are also probably the most easily detectable by bots.

      1. 1

        Memory management in Beef is manual, and includes first-class support for custom allocators. Care has been taken to reduce the burden of manual memory management with language ergonomics and runtime safeties – Beef can detect memory leaks in real-time, and offers guaranteed protection against use-after-free and double-deletion errors. As with most safety features in Beef, these memory safeties can be turned off in release builds for maximum performance.

        This is a neat compromise between manual memory management and garbage collection.

        1. 3

          This post doesn’t address another frequent problem I’ve stumbled into: forgetting places where you reference a deleted relation, e.g. in a view. If you render a page that displays customer.customer_type.description, it’s super easy to forget there’s no referential integrity to protect you when you soft-delete a customer_type (to prevent its future use). ActiveRecord won’t “see” the soft-deleted customer_type unless you unscope the query first, so the fetch returns nil, and the page blows up when you dereference nil to get the description.

          It’s a programming error that should be caught in advance, but I’ve seen this happen way too many times.
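
          The same failure mode can be sketched outside of Rails; here is a rough Python/sqlite3 analog (the schema and names are hypothetical, chosen only to mirror the customer/customer_type example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer_types (id INTEGER PRIMARY KEY, description TEXT, deleted_at TEXT);
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, customer_type_id INTEGER);
    INSERT INTO customer_types VALUES (1, 'Retail', '2019-01-01');  -- soft-deleted
    INSERT INTO customers VALUES (1, 'Acme', 1);
""")

# The "default scope" hides soft-deleted rows, so the join silently drops the
# row the customer still references -- no referential integrity saves us here.
row = conn.execute("""
    SELECT ct.description FROM customers c
    LEFT JOIN customer_types ct
      ON ct.id = c.customer_type_id AND ct.deleted_at IS NULL
    WHERE c.id = 1
""").fetchone()

print(row)  # (None,) -- dereferencing this "description" is the nil blow-up
```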

          1. 1

            Wait, whoa… default_scope applies to the joins of all queries too? TIL! I’ll have to look into this and make an edit

            1. 1

              This was the major reason that default_scope bit me in a rails project.

              1. 1

                Overworked, underpaid (and proud of it!), and stacked almost exclusively with deeply-PC/‘woke’ folk. I’ll, uh, pass.

                1. 2

                  stacked almost exclusively with deeply-PC/‘woke’ folk. I’ll, uh, pass.

                  I’m curious; how do you know this? Is it just from their “Diversity & Inclusion” mission statement?

                  1. 2

                    That, casual conversation with some of their older Ops folk, and a chat with Syd himself from ‘back in the day’.

                    1. 9

                      Thanks. It’s definitely a red flag, which is unfortunate because, at least superficially, “social justice” sounds like a good thing. Unfortunately, there’s a large overlap between that and hateful tribalism. For example, from this job ad:

                      with the goal to change the IT industry from a white, bearded clump to something that’s a little less monochrome and have a few more x-chromosomes

                      Being genuinely inclusive is good and important. Casting aspersions on an entire group of people (their own employees, no less!) for their genitalia and/or skin colour is never ok. For some reason this is given a pass when it comes from proponents of the correct political ideology.

                      1. 1

                        Wouldn’t “with the goal to make the IT industry more diverse” amount to the same? That’s what I understand from this quote; the only difference being that the quote clearly states the current state of affairs and what would make it more diverse.

                        1. 5

                          I find it totally offensive for myself or any of my peers to be described as a “white, bearded clump”.

                        2. 1

                          I am curious to understand why you immediately red-flagged this after law’s statement and rejected the massive evidence (https://www.glassdoor.co.uk/Overview/Working-at-GitLab-EI_IE1296544.11,17.htm) - at least compared to a one-line statement - that Gitlab is, at the very least, a nice place to work.

                          1. 2

                            Good question. I think it’s because it’s far riskier for one’s own political capital or reputation to say something critical, and I think this is especially true of criticising political correctness. Nobody ever got fired for saying “oh yeah, it’s great. I am happy, everyone is happy.”

                            Or perhaps looking at it another way: a “woke” culture in a company is a good thing to some people. There are many people who are that flavour of political extremist, and would feel welcome among their own. The original observation was indeed “this is a woke company”, and not “this is a bad company.”

                            Glassdoor are not letting me read reviews without an account, but if the company were an echo chamber (likely, since I don’t believe the diversity movement is interested in diversity of opinion), then what would correct for all the positive reviews coming from people who 1. want to save their own skin, and/or 2. are quite comfortable with political correctness?

                            1. 2

                              How is law risking anything by saying what he said – or anything for that matter – under a nickname?

                              1. 2

                                I don’t know about this person specifically, but it’s not uncommon to be able to deduce who a person is by combing through their post history, and possibly cross-referencing it against content they’ve authored in other online communities.

                                1. 3

                                  I don’t want to be impolite by insisting (sorry if I am), but you actually trusted this person’s single-line statement rather than publicly available, verified, anonymous feedback.

                                  1. 3

                                    Don’t worry, I don’t think you’ve been impolite. It’s totally fair to ask.

                                    You are right, I drew a likely (in my mind) conclusion from a single source over an entire repository of reviews. I’ve presented my justification for this; perhaps it’s not entirely legitimate and it will be based on some of my own experiences and biases.

                                    I wouldn’t say I “trust” the above anecdote comprehensively, but it’s certainly a signal. I could see a motive for someone to say some company is “bad”, but I don’t understand why someone would describe a company’s culture as “woke” if it isn’t.

                    2. 1

                      Dodged a bullet, thanks.

                      1. 1

                        I was shocked to see how much less I’d make at Gitlab - my pay would be literally half what it is right now. They index their remote pay to the cost of living where you live, and in the United States it’s indexed for an entire state. In my home state, cost of living varies WIDELY based on what part of the state you are in, and this acted much to my detriment.

                        I understand and appreciate the difficulty of figuring out what to pay remote workers in a global workforce, but I definitely think Gitlab hasn’t solved it yet. I’m also grateful that their salary transparency after the introductory interview meant that we weren’t wasting each other’s time - I wish more companies did this.

                    1. 2

                      I use Arq to sync to AWS Glacier. Really easy to set up and pretty cheap.

                      1. 3

                        My first computer was a Tandy 1000EX, which had this bizarro layout of its own: https://deskthority.net/viewtopic.php?f=2&t=17994&start=

                        1. 7

                          Well, some of us are in this category (as the article points out):

                          If you’re building API services that need to support server-to-server or client-to-server (like a mobile app or single page app (SPA)) communication, using JWTs as your API tokens is a very smart idea. In this scenario:

                          • You will have an authentication API which clients authenticate against, and get back a JWT
                          • Clients then use this JWT to send authenticated requests to other API services
                          • These other API services use the client’s JWT to validate that the client is trusted and can perform some action without needing to perform a network validation

                          so JWT is not that bad. Plus, it is refreshing to visit a website that says ‘there are no cookies here’… in their privacy policy.
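
                          That flow can be sketched in plain Python. This is a minimal HS256 sketch for illustration only - in practice you would use a vetted library such as PyJWT; the secret and claim names here are assumptions:

```python
import base64, hashlib, hmac, json, time

SECRET = b"change-me"  # shared by the auth API and the other API services (assumption)

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def issue_token(user_id: str, ttl: int = 900) -> str:
    """Authentication API: the client authenticates and gets back a signed JWT."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    claims = b64url(json.dumps({"sub": user_id, "exp": int(time.time()) + ttl}).encode())
    signing_input = header + b"." + claims
    sig = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def validate_token(token: str) -> dict:
    """Other API services: check signature and expiry locally, no network validation."""
    header, claims, sig = token.encode().split(b".")
    expected = b64url(hmac.new(SECRET, header + b"." + claims, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    payload = json.loads(base64.urlsafe_b64decode(claims + b"=" * (-len(claims) % 4)))
    if payload["exp"] < time.time():
        raise ValueError("token expired")
    return payload

print(validate_token(issue_token("alice"))["sub"])  # alice
```

                          The point of the design shows up in validate_token: validation needs only the shared secret, not a database lookup.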

                          1. 17

                            Plus, it is refreshing to visit a website that says ‘there are no cookies here’… in their privacy policy.

                            The EU “Cookie Law” applies to all methods of identification — cookies, local storage, JWT, parameters in the URL, even canvas fingerprinting. So it shouldn’t have any effect on the privacy policy whatsoever.

                            1. 9

                              You can still use sessions with cookies, especially with an SPA. Unless the JWT is stateless and short-lived, you should not use it. JWT isn’t the best design either, as it gives too much flexibility and too many possibilities for misuse. PASETO tries to resolve these problems by versioning the protocol and reducing the number of possible hashing/encryption methods.

                              1. 1

                                Why shouldn’t you use long lived JWTs with a single page application?

                                1. 4

                                  Because you cannot invalidate that token.

                                  1. 6

                                    Putting my pedant hat on: technically you can, using blacklists or rotating signing keys. But that then negates the benefit of encapsulating a user “auth key” into a token, because the server will have to do a database lookup anyway, and by that point it might as well be a traditional cookie-backed session.

                                    JWTs are useful when short-lived for “server-less”/Lambda APIs, so they can authenticate the request and move along quickly, but for more traditional things they can present more challenges than solutions.

                                    1. 7

                                      Putting my pedant hat on: technically you can, using blacklists or rotating signing keys. But that then negates the benefit of encapsulating a user “auth key” into a token, because the server will have to do a database lookup anyway, and by that point it might as well be a traditional cookie-backed session.

                                      Yes, that was my point. It was just a mental shortcut: if you do that, then there is no difference between “good ol’” sessions and using JWT.

                                      Simple flow chart.

                                      1. 1

                                        Except it is not exactly the same, since losing a blacklist database is not the same as losing a token database, for instance. The former will not invalidate all sessions but will re-enable old tokens. Which may not be that bad if the tokens are sufficiently short-lived.

                                        1. 1

                                          Except “reissuing” old tokens has much less impact (at most your clients will be a little annoyed) than allowing leaked tokens to become valid again. If I were a client, I would much prefer the former to the latter.

                              2. 5

                                One of my major concerns with JWTs is that retraction is a problem.

                                Suppose I have the requirement that old authenticated sessions have to be remotely retractable; how on earth would I make a certain JWT invalid without having to consult the database for “expired sessions”?

                                The JWT to be invalidated could still reside on the devices of certain users after it has been invalidated remotely.

                                The only way I could think of is making them so short-lived that they expire almost instantaneously - like in a few minutes at most - which means that user sessions will be terminated annoyingly fast as well.

                                If I can get nearly infinite sessions and instant retractions, I will gladly pay the price of hitting the database on each request.

                                1. 8

                                  JWT retraction can be handled in the same way a traditional API token’s would be: you add it to a blacklist, or, in the case of a JWT, the “secret” it’s signed against can be changed. However, both solutions negate the advertised benefit of JWTs - or rather, they negate the benefits I have seen JWTs advertised for: namely, that they remove the need for a session lookup in the database.

                                  I have used short-lived JWTs for communicating with various stateless (server-less/Lambda) APIs, and for that purpose they work quite well; each endpoint has a certificate it can check the JWT’s validity with, and having the user’s profile and permissions encapsulated means not needing a database connection to know what the user is allowed to do. A 60s validity period gives the request enough time to authenticate before the token expires while removing the need for retraction.

                                  I think the problem with JWTs is that many people have attempted to use them as a solution for a problem already better solved by other things that have been around and battle tested for much longer.

                                  1. 7

                                    However both solutions negate the advertised benefit of JWTs or rather they negate the benefits I have seen JWTs advertised for: namely that it removes the need for session lookup on database.

                                    I think the problem with JWTs is that many people have attempted to use them as a solution for a problem already better solved by other things that have been around and battle tested for much longer.

                                    This is exactly my main concern, and also the single reason I haven’t used JWTs anywhere yet. I can imagine services where JWTs would be useful, but I have yet to see or build one where some form of retraction wasn’t a requirement.

                                    My usual go-to solution is to generate a 50-100 character string of gibberish and store it both in a cookie on the user’s machine and in a database table consisting of <user_uuid, token_string, expiration_timestamp> triples, which is then joined with the table that contains the user data. Such queries are usually blazing fast, and retraction is a simple DELETE query. Scaling usually isn’t that big of a concern either, as most DBMSs have the required features built in already.

                                    Usually, I also set up a scheduled event in the DBMS which periodically deletes all expired tokens from that table - typically once per day at night, when the number of active users is low. It makes for a nice fallback just in case some programming bug inadvertently creeps in.
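
                                    A minimal sketch of that scheme in Python with sqlite3 (the table and column names follow the triple described above; the TTL is illustrative):

```python
import secrets
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE sessions (
    user_uuid TEXT,
    token_string TEXT PRIMARY KEY,
    expiration_timestamp REAL)""")

def create_session(user_uuid, ttl=30 * 86400):
    token = secrets.token_urlsafe(64)  # ~86 characters of gibberish
    db.execute("INSERT INTO sessions VALUES (?, ?, ?)",
               (user_uuid, token, time.time() + ttl))
    return token

def lookup(token):
    """Blazing fast: an indexed lookup, joinable with the user-data table."""
    row = db.execute("""SELECT user_uuid FROM sessions
                        WHERE token_string = ? AND expiration_timestamp > ?""",
                     (token, time.time())).fetchone()
    return row[0] if row else None

def retract(token):
    """Instant retraction is a simple DELETE query."""
    db.execute("DELETE FROM sessions WHERE token_string = ?", (token,))

def purge_expired():
    """The scheduled nightly event: remove anything a bug failed to clean up."""
    db.execute("DELETE FROM sessions WHERE expiration_timestamp <= ?", (time.time(),))
```

                                    The trade-off is exactly the one described: one DB hit per request buys instant, global revocation with a single DELETE.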

                                    But I guess this was the original author’s point as well.

                                  2. 1

                                    I’ve never done any work with JWTs so this might be a dumb question - but can’t you just put an expiration time into the JWT data itself, along with the session and/or user information? The user can’t alter the expiration time because presumably that would invalidate the signature, so as long as the timestamp is less than $(current_time) you’d be good to go? I’m sure I’m missing something obvious.

                                    1. 5

                                      If someone steals the JWT they have free rein until it expires. With a session, you can remotely revoke it.

                                      1. 1

                                        That’s not true. You just put a black mark next to it and every request after that will be denied - and it won’t be refreshed. Then you delete it once it expires.

                                        1. 7

                                          That’s not true. You just put a black mark next to it and every request after that will be denied - and it won’t be refreshed. Then you delete it once it expires.

                                          The problem with the black mark is that you have to hit some sort of database to check for it. By doing so, you invalidate the usefulness of JWTs. That is one of the OP’s main points.

                                          1. 2

                                            Well, not necessarily. If you’re making requests often (e.g., every couple of seconds) and you can live with a short delay between logging out and the session being invalidated, you can set the timeout on the JWT to be ~30 seconds or so and only check the blacklist if the JWT is expired (and, if the session isn’t blacklisted, issue a new JWT). This can save a significant number of database requests for a chatty API (like you might find in a chat protocol).
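
                                            A sketch of that hybrid (the in-memory blacklist, the claim names, and the TTL are hypothetical; in a real deployment the blacklist would live in a shared store, and signature verification is omitted):

```python
import time

JWT_TTL = 30           # seconds of "trust without a lookup" (per the comment above)
blacklist = set()      # session ids revoked at logout (hypothetical shared store)

def check_request(claims):
    """claims: an already signature-verified JWT payload. Returns valid claims or None."""
    if time.time() < claims["exp"]:
        return claims                      # fresh token: no blacklist/database hit
    if claims["sid"] in blacklist:
        return None                        # expired and revoked: reject
    return {**claims, "exp": time.time() + JWT_TTL}   # expired but fine: reissue

fresh = check_request({"sid": "s1", "sub": "alice", "exp": time.time() + 10})
stale = check_request({"sid": "s1", "sub": "alice", "exp": 0})   # reissued
blacklist.add("s1")
revoked = check_request({"sid": "s1", "sub": "alice", "exp": 0})
print(fresh is not None, stale is not None, revoked is None)  # True True True
```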

                                            1. 1

                                              Or refresh a local cache of the blacklist periodically on each server, so it’s a purely in-memory lookup.

                                              1. 4

                                                But in that case, you’d be defeating their use as session tokens, because you are limited to very short sessions. You are just one hiccup of the network away from failure, which also defeats their purpose (which was another point of the OP).

                                                I see how they can be useful in situations where you are making a lot of requests, but the point is that 99.9% of websites don’t do that.

                                    2. 1

                                      For mobile apps, which have safe storage for passwords, the retraction problem is solved by issuing refresh tokens (which live longer, like passwords in the password store of a mobile phone). The refresh token is then used to issue new authorization tokens periodically, transparently to the user. You can reissue the authorization token using the refresh token every 15 minutes, for example.

                                      For web browsers, using refresh tokens may or may not be a good idea. Refresh tokens are, from a security perspective, the same as passwords (although temporary). So their storage within a web browser should follow the same policy one would have for passwords.

                                      So if using refresh tokens in your single-page app is not an option, then invalidation has to happen during access-control validation on the backend. (The backend is still responsible for access control anyway, because it cannot be done securely on web clients.)

                                      It is more expensive, and requires a form of distributed cache if you have a distributed backend that allows stateless, non-IP-bound distribution of requests…
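
                                      A minimal sketch of the two-token flow described above (the names are hypothetical; token signing, transport, and password verification are all omitted):

```python
import secrets
import time

ACCESS_TTL = 15 * 60   # short-lived authorization token, reissued transparently
refresh_store = {}     # server side: refresh token -> user; deleting one revokes a device

def login(user, password):
    """The password is sent only once, to mint a long-lived refresh token."""
    # (password verification omitted in this sketch)
    rt = secrets.token_urlsafe(32)
    refresh_store[rt] = user
    return rt

def refresh(rt):
    """Exchange the refresh token for a fresh short-lived authorization token."""
    user = refresh_store.get(rt)
    if user is None:
        return None    # retracted: the device must log in with the password again
    return {"sub": user, "exp": time.time() + ACCESS_TTL}

rt = login("alice", "hunter2")
print(refresh(rt)["sub"])      # alice
del refresh_store[rt]          # remote retraction
print(refresh(rt))             # None
```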

                                      1. 1

                                        For mobile apps, that have safe storage for passwords, the retraction problem is solved via issuing refresh tokens (that live longer, like passwords in password store of a mobile phone).

                                        But then why use two tokens instead of a single one? It makes everything more complicated for the sake of a perceived simplification of not doing one DB request on each connection. Meh. And you can even use a cookie as in your web UI, so in the end it will make everything simpler, as you do not need two separate auth systems in your app.

                                        1. 1

                                          It makes everything more complicated for the sake of a perceived simplification of not doing one DB request on each connection.

                                          This is not really why two tokens are used (authentication token and refresh token). Two tokens are used to a) allow fast expiration of an authentication request, and b) prevent passing the actual user password through to the backend (it only needs to be passed when creating a refresh token).

                                          This is a fairly standard practice, though, not something I invented (it requires an API-accessible, secure password store on the user’s device, which is why it is prevalent in mobile apps).

                                          I also cannot see how a) and b) can be achieved with a single token.

                                  1. 65

                                    In the Mastodon universe, technically-minded users are encouraged to run their own node. Sounds good. To install a Mastodon node, I am instructed to install recent versions of

                                    • Ruby
                                    • Node.JS
                                    • Redis
                                    • PostgreSQL
                                    • nginx

                                    This does not seem like a reasonable set of dependencies to me. In particular, using two interpreted languages, two databases, and a separate web server presumably acting as a frontend all seems like overkill. I look forward to when the Mastodon devs are able to tame this complexity and reduce the codebase to something like a single (ideally non-interpreted) language and a single database. Or, even better, a single binary that manages its own data on disk, using e.g. embedded SQLite. Until then, I’ll pass.

                                    1. 22

                                      Totally agree. I heard Pleroma has fewer dependencies, though it looks like it depends a bit on which OS you’re running.

                                      1. 11

                                        Compared to Mastodon, Pleroma is a piece of cake to install; I followed their tutorial and had an instance set up and running in about twenty minutes on a fresh server.

                                        From memory, all I needed to install was Nginx, Elixir and Postgres, two of which were already set up and configured for other projects.

                                        My server is a quad-core ARMv7 with 2GB RAM and averages maybe 0.5 load under heavy usage… it does transit a lot of traffic though: since the 1st of January my server has pushed out 530GB.

                                        1. 2

                                          Doesn’t Elixir require Erlang to run?

                                          1. 2

                                            It does. Some Linux distributions will require adding the Erlang repo before installing Elixir, but most seem to have it already included: https://elixir-lang.org/install.html#unix-and-unix-like - meaning it’s a simple one-line command to install, e.g. pkg install elixir.

                                        2. 7

                                          I’m not a huge social person, but I had only heard of Pleroma without investigating it. After looking a bit more, I don’t really understand why someone would choose Mastodon over Pleroma. They do basically the same thing, but Pleroma takes fewer resources. Did anyone who chose Mastodon over Pleroma have a reason why?

                                          1. 6

                                            Mastodon has more features right now. That’s about it.

                                            1. 4

                                              Pleroma didn’t have releases for a looong time. They finally started down that route. They also don’t have official Docker containers, and config changes require recompiling (just due to the way they have Elixir and builds set up). It was a pain to write my Docker container for it.

                                              Pleroma also lacks moderation tools (you need to add blocked domains to the config), doesn’t allow remote follows/interactions (if you see a status elsewhere on Mastodon, you can click remote-reply, it will ask your server name, redirect you to your server, and then you can reply to someone you don’t follow), and misses a couple of other features.

                                              Misskey is another alternative that looks promising.

                                              1. 2

                                                it doesn’t allow remote follow/interactions (if you see a status elsewhere on Mastodon, you can click remote-reply, it will ask your server name, redirect you to your server and then you can reply to someone you don’t follow)

                                                I think that might just be the Pleroma FE - if I’m using the Mastodon FE, I get the same interaction on my Pleroma instance replying to someone on a different instance as when I’m using octodon.social (unless I’m radically misunderstanding your sentence)

                                                1. 1

                                                  Thanks, this is a really great response. I actually took a quick look at their docs and saw they didn’t have any FreeBSD guide set up, so I stopped looking. I use Vultr’s $2.50 FreeBSD VPS, and I didn’t feel like fiddling with anything that particular night. I wish they did have an official Docker container for it.

                                                2. 3

                                                  Pleroma has a bunch of fiddly issues - it doesn’t do streaming properly (bitlbee-mastodon won’t work), the UI doesn’t have any “compose DM” functionality that I can find, I had huge problems with a long password, etc. But they’re mostly minor annoyances rather than show-stoppers for now.

                                                3. 7

                                                  It doesn’t depend - they’ve just gone further and defined what to do for each OS!

                                                  1. 4

                                                    I guess it’s mainly the ImageMagick dependency for OpenBSD that got me thinking otherwise.

                                                    OpenBSD

                                                    • elixir
                                                    • gmake
                                                    • ImageMagick
                                                    • git
                                                    • postgresql-server
                                                    • postgresql-contrib

                                                    Debian Based Distributions

                                                    • postgresql
                                                    • postgresql-contrib
                                                    • elixir
                                                    • erlang-dev
                                                    • erlang-tools
                                                    • erlang-parsetools
                                                    • erlang-xmerl
                                                    • git
                                                    • build-essential
                                                    1. 3

                                                      imagemagick is purely optional. The only hard dependencies are postgresql and elixir (and some reverse proxy like nginx)

                                                      1. 4

                                                        imagemagick is strongly recommended, though, so you can enable the Mogrify filter on uploads and actually strip EXIF data

                                                  2. 3

                                                    Specifically, quoting from their readme:

                                                    Pleroma is written in Elixir, high-performance and can run on small devices like a Raspberry Pi.

                                                    As to the DB, they seem to use Postgres.

                                                    The author of the app posted his list of differences, but I’m not sure if it’s complete and what it really means. I haven’t found a better comparison yet, however.

                                                  3. 16

                                                    Unfortunately I have to agree. I self-host 99% of my online services, and sysadmin for a living. I tried mastodon for a few months, but its installation and management process was far more complicated than anything I’m used to. (I run everything on OpenBSD, so the docker image isn’t an option for me.)

                                                    In addition to getting NodeJS, Ruby, and all the other dependencies installed, I had to write 3 separate rc files to run 3 separate daemons to keep the thing running. Compared to something like Gitea, which just requires running a single Go executable and a Postgres DB, it was a massive amount of toil.

                                                    The mastodon culture really wasn’t a fit for me either. Even in technical spaces, there was a huge amount of politics/soapboxing. I realized I hadn’t even logged in for a few weeks so I just canned my instance.

                                                    Over the past year I’ve given up on the whole social network thing and stick to Matrix/IRC/XMPP/email. I’ve been much happier as a result and there’s a plethora of quality native clients (many are text-based). I’m especially happy on Matrix now that I’ve discovered weechat-matrix.

                                                    I don’t mean to discourage federated projects like Mastodon though - I’m always a fan of anything involving well-known URLs or SRV records!

                                                    1. 11

                                                      Fortunately the “fediverse” is glued together by a standard protocol (ActivityPub) that is quite simple, so if one implementation (e.g. Mastodon) doesn’t suit someone’s needs, it’s not a big problem - just search for a better one, and it still interconnects with the rest of the world.

                                                      (I’ve written small proof-of-concept ActivityPub clients and servers; they work and federate - see also this).

                                                      For me the more important problems are not implementation issues with one server but rather design issues within the protocol. For example, established standards such as e-mail or XMPP have a way to delegate the responsibility of running a server for a particular protocol while still using the bare domain for user identities. In e-mail that is MX records; in XMPP it’s DNS SRV records. ActivityPub doesn’t demand anything like it, and even though Mastodon tries to provide something that would fix the issue - WebFinger - other implementations are not interested in it (e.g. Pleroma). And then one is left with instances such as “social.company.com”.

                                                      For example - Pleroma’s developer’s id is lain@pleroma.soykaf.com.
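                                                      The contrast is easy to sketch from a shell. The domain and handle below are placeholders, and the DNS lookups are shown commented out since they need network access and a real deployment:

                                                      ```shell
                                                      # E-mail and XMPP delegate via DNS while the user id keeps the bare domain
                                                      # (these lookups need network access, so they are shown commented out):
                                                      #   dig +short MX example.com                       # mail host for user@example.com
                                                      #   dig +short SRV _xmpp-client._tcp.example.com    # chat host for user@example.com

                                                      # WebFinger, which Mastodon layers on top of ActivityPub, instead resolves
                                                      # the id over HTTPS at a well-known path on the bare domain:
                                                      handle="user@example.com"
                                                      domain="${handle#*@}"
                                                      url="https://${domain}/.well-known/webfinger?resource=acct:${handle}"
                                                      echo "$url"
                                                      # fetch it with: curl -s "$url"
                                                      ```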

                                                      1. 16

                                                        This is a completely reasonable and uncontroversial set of dependencies for a web app. Some of the largest web apps on the Internet run this stack. That is a good thing, because when Fediverse nodes need to scale there are well-understood ways of doing it.

                                                        Success in social networking is entirely about network effects and that means low barrier to entry is table stakes. Yeah, it’d be cool if someone built the type of node you’re talking about, but it would be a curiosity pursued only by the most technical users. If that were the barrier to entry for the network, there would be no network.

                                                        1. 39

                                                          This is a completely reasonable and uncontroversial set of dependencies for a web app. Some of the largest web apps on the Internet run this stack.

                                                          Yes, but not for a web app I’m expected to run on my own time, for fun.

                                                          1. 6

                                                            I’m not sure that’s the exact expectation, that we all should run our own single-user Mastodon instances. I feel like the expectation is that a sysadmin with enough knowledge will maintain an instance for many users. This seems to be the norm.

                                                            That, or you go to Mastohost and pay someone else for your own single-user instance.

                                                            1. 2

                                                              My point is that you’re not expected to do that.

                                                            2. 16

                                                              completely reasonable and uncontroversial

                                                              Not true. Many people are complaining about the unmanaged proliferation of dependencies and tools. Most projects of this size and complexity don’t need more than one language, let alone bulky JavaScript frameworks plus separate caching and database services.

                                                              This makes it difficult to package Mastodon and Pleroma for Debian and Ubuntu, and makes it harder for people to make the service truly decentralized.

                                                              1. 1

                                                                I’m not going to defend the reality of what NPM packaging looks like right now because it sucks but that’s the ecosystem we’re stuck with for the time being until something better comes along. As with social networks, packaging systems are also about network effects.

                                                                But you can’t deny that this is the norm today. Well, you can, but you would be wrong.

                                                                This is making difficult to package Mastodon and Pleroma in Debian and Ubuntu

                                                                I’m sure it is, because dpkg is a wholly unsuitable tool for this use-case. You shouldn’t even try. Anyone who doesn’t know how to set these things up themselves should use the Docker container.

                                                                1. 1

                                                                  I think the most difficult part of the Debian packaging would be the js deps, correct?

                                                                  1. 3

                                                                    Yes and no. Unvendorizing dependencies is done mostly for security and requires a lot of work, depending on the number of dependencies. Sometimes js libraries don’t create serious security concerns because they only run client-side and can be left in vendorized form.

                                                                    The Ruby libraries can also be difficult to unvendorize because many upstream developers introduce breaking changes often. They care little about backward compatibility, packaging and security.

                                                                    Yet server-side code is more security-critical and that becomes a problem. And it’s getting even worse with new languages that strongly encourage static linking and vendorization.

                                                                    1. 1

                                                                      I can’t believe even Debian adopted the Googlism of “vendor” instead of “bundle”.

                                                                      That aside, Rust? In Mastodon? I guess the Ruby gems it requires would be the bigger problem?

                                                                      1. 2

                                                                        The use of the word is mine: I just heard people using “vendor” often. It’s not “adopted by Debian”.

                                                                        I don’t understand the second part: maybe you misread Ruby for Rust in my text?

                                                                        1. 1

                                                                          No, I really just don’t know what Rust has to do with Mastodon. There’s Rust in there somewhere? I just didn’t notice.

                                                                          1. 2

                                                                            AFAICT there is no Rust in the repo (at least at the moment).

                                                                            1. 1

                                                                              Wow, I’m so dumb, I keep seeing Rust where there is none and misunderstanding you, so sorry!

                                                                2. 7

                                                                  Great. Then have two implementations, one for users with large footprints, and another for casual users with five friends.

                                                                  It is a reasonable stack if you will devote one or more servers to the task. Not for something you might want to run on your RPi next to your IRC server (which is a single piece of software, too).

                                                                  1. 4

                                                                    Having more than one implementation is healthy.

                                                                    1. 2

                                                                      Of course it is. Which is why it’s a reasonable solution to the large stack required by the current primary implementation.

                                                                3. 6

                                                                  There’s really one database and one cache there. I mean, I guess technically Redis is a database, but it’s almost always used for caching and not as a DB layer like PSQL.

                                                                  You can always write your own server if you want in whatever language you choose if you feel like Ruby/Node is too much. Or, like that other guy said, you can just use Docker.

                                                                  1. 4

                                                                    There’s really one database and one cache there. I mean, I guess technically Redis is a database, but it’s almost always used for caching . . .

                                                                    A project that can run on a single instance of the application binary absolutely does not need a cache. Nor does it need a pub/sub or messaging system outside of its process space.

                                                                    1. 2

                                                                      It’s more likely that Redis is being used for pub/sub messaging and job queuing.

                                                                    2. 11

                                                                      This does not seem like a reasonable set of dependencies to me

                                                                      Huh. I must be just used to this, then. At work I need to use or at least somewhat understand,

                                                                      • Postgres
                                                                      • Python 2
                                                                      • Python 3
                                                                      • Django
                                                                      • Ansible
                                                                      • AWS
                                                                      • Git (actually, Mercurial, but this is my choice to avoid using git)
                                                                      • Redis
                                                                      • Concourse
                                                                      • Docker
                                                                      • Emacs (My choice, but I could pick anything else)
                                                                      • Node
                                                                      • nginx
                                                                      • Flask
                                                                      • cron
                                                                      • Linux
                                                                      • RabbitMQ
                                                                      • Celery
                                                                      • Vagrant (well, optional, I actually do a little extra work to have everything native and avoid a VM)
                                                                      • The occasional bit of C code

                                                                      and so on and so forth.

                                                                      Do I just work at a terrible place or is this a reasonable amount of things to have to deal with in this business? I honestly don’t know.

                                                                      To me Mastodon’s requirements seem like a pretty standard Rails application. I’m not even sure why Redis is considered another db – it seems like an in-memory cache with optional disk persistence is a different thing than a persistent-only RDBMS. Nor do I even see much of a problem with two interpreted languages – the alternative would be to have js everywhere, since you can’t have Python or Ruby in a web browser, and js just isn’t a pleasant language for certain tasks.

                                                                      1. 38

                                                                        I can work with all that and more if you pay me. For stuff I’m running at home on my own time, fuck no. When I shut my laptop to leave the office, it stays shut until I’m back again in the morning, or I get paged.

                                                                        1. 2

                                                                          So is Mastodon unusual for a Rails program? I wonder if it’s simply unreasonable to ask people to run their own Rails installation. I honestly don’t know.

                                                                          Given the amount of Mastodon instances out there, though, it seems that most people manage. How?

                                                                          1. 4

                                                                            That looks like a bog-standard, very minimal rails stack with a JS frontend. I’m honestly not sure how one could simplify it below that without dropping the JS on the web frontend and any caching, both of which seem like a bad idea.

                                                                            1. 7

                                                                              There’s no need to require node. The compilation should happen at release time, and the release download tarball should contain all the JS you need.
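                                                                              A sketch of what that release-time step could look like for a Rails app (this assumes the standard yarn/Rails asset pipeline; the tarball layout is illustrative, not Mastodon’s actual release process):

                                                                              ```shell
                                                                              # Build the JS once on the release machine, so the deployment target
                                                                              # never needs node/yarn installed:
                                                                              yarn install --frozen-lockfile
                                                                              RAILS_ENV=production bundle exec rails assets:precompile
                                                                              # Ship the app together with the compiled assets in public/:
                                                                              tar czf mastodon-release.tar.gz --exclude=node_modules .
                                                                              ```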

                                                                              1. -3

                                                                                lol “download tarball”, you’re old, dude.

                                                                                1. 7

                                                                                  Just you wait another twenty years, and you too will be screaming at the kids to get off your lawn.

                                                                              2. 2

                                                                                You could remove Rails and use something Node-based for the backend. I’m not claiming that’s a good idea (in fact it’s probably not very reasonable), but it’d remove that dependency?

                                                                                1. 1

                                                                                  it could just have been a go or rust binary or something along those lines, with an embedded db like bolt or sqlite

                                                                                  edit: though the reason i ignore mastodon is the same as cullum, culture doesn’t seem interesting, at least on mastodon.social

                                                                                2. 4

                                                                                  If security or privacy focused, I’d try a combo like this:

                                                                                  1. Safe language with minimal runtime that compiles to native code and Javascript. Web framework in that language for dynamic stuff.

                                                                                  2. Lwan web server for static content.

                                                                                  3. SQLite for database.

                                                                                  4. Whatever is needed to combine them.

                                                                                  Combo will be smaller, faster, more reliable, and more secure.

                                                                                  1. 2

                                                                                    I don’t think this is unusual for a Rails app. I just don’t want to set up or manage a Rails app in my free time. Other people may want to, but I don’t.

                                                                                3. 7

                                                                                  I don’t think it’s reasonable to compare professional requirements and personal requirements.

                                                                                  1. 4

                                                                                    The thing is, Mastodon is meant to be used on-premise. If you’re building a service you host, knock yourself out! Use 40 programming languages and 40 DBs at the same time. But if you want me to install it, keep it simple :)

                                                                                    1. 4

                                                                                      Personally, setting up all that seems like too much work for a home server, but maybe I’m just lazy. I had a similar issue when setting up Matrix and ran into an error message that I just didn’t have the heart to debug, given the amount of moving parts which I had to install.

                                                                                      1. 3

                                                                                        If you can use debian, try installing synapse via their repository, it works really nice for me so far: https://matrix.org/packages/debian/

                                                                                        1. 1

                                                                                          Reading other comments about the horror that is Docker, it is a wonder that you dare propose to install an entire OS only to run a Matrix server. ;)

                                                                                          1. 3

                                                                                            i’m not completely sure which parts of your comment are sarcasm :)

                                                                                      2. 0

                                                                                        Your list there has lots of tools with overlapping functionality, seems like pointless redundancy. Just pick flask OR django. Just pick python3 or node, just pick docker or vagrant, make a choice, remove useless and redundant things.

                                                                                        1. 3

                                                                                          We have some Django applications and we have some Flask applications. They have different lineages. One we forked and one we made ourselves.

                                                                                      3. 6

                                                                                        Alternatively you can install it using Docker as described here.
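                                                                                        Roughly, the Docker route looks like this (a sketch based on the docker-compose.yml shipped in the Mastodon repository; file names and steps may have drifted since):

                                                                                        ```shell
                                                                                        git clone https://github.com/tootsuite/mastodon.git
                                                                                        cd mastodon
                                                                                        cp .env.production.sample .env.production    # then fill in domain, secrets, SMTP
                                                                                        docker-compose build
                                                                                        docker-compose run --rm web rails db:migrate # one-off setup container
                                                                                        docker-compose up -d                         # postgres, redis, web, sidekiq, streaming
                                                                                        ```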

                                                                                        1. 32

                                                                                          I think it’s kinda sad that the solution to “control your own toots” is “give up control of your computer and install this giant blob of software”.

                                                                                          1. 9

                                                                                            Piling another forty years of hexadecimal Unix sludge on top of forty years of slightly different hexadecimal Unix sludge to improve our ability to ship software artifacts … it’s an aesthetic nightmare. But I don’t fully understand what our alternatives are.

                                                                                            I’ve never been happier to be out of the business of having to think about this in anything but the most cursory detail.

                                                                                            1. 11

                                                                                              I mean, how is that different from running any binary at the end of the day, unless you’re compiling everything from scratch on the machine starting from the kernel? Running Mastodon from Docker is really no different. And it’s not like anybody is stopping you from either making your own Dockerfile or just setting things up directly on your machine by hand. The original complaint was that it’s too much work, and if that’s the case you have a simple packaged solution. If you don’t like it, then roll up your sleeves and do it by hand. I really don’t see the problem here, I’m afraid.

                                                                                              1. 11

                                                                                                “It’s too much work” is a problem.

                                                                                                1. 5

                                                                                                  Unless you’re compiling everything from scratch on the machine starting from the kernel

                                                                                                  I use NixOS. I have a set of keys that I set as trusted for signature verification of binaries. The binaries are a cache of the build derivation, so I could theoretically build the software from scratch, if I wanted to, or to verify that the binaries are the same as the cached versions.

                                                                                                  1. 2

                                                                                                    Right, but if you feel strongly about that then you can make your own Dockerfile from source. The discussion is regarding whether there’s a simple way to get an instance up and running, and there is.

                                                                                                    1. 3

                                                                                                      Docker containers raise a lot of questions though, even if you use a Dockerfile:

                                                                                                      • What am I running?
                                                                                                      • Which versions am I running?
                                                                                                      • Do the versions have security vulnerabilities?
                                                                                                      • Will I be able to build the exact same version in 24 months?

                                                                                                      Nix answers these pretty well and fairly accurately.

                                                                                                  2. 2

                                                                                                    Unless you’re compiling everything from scratch on the machine starting from the kernel.

                                                                                                    You mean starting with writing a bootstrapping compiler in assembly, then writing your own full featured compiler and compiling it in the bootstrapping compiler. Then moving on to compiling the kernel.

                                                                                                    1. 1

                                                                                                      No no, your assembler could be compromised ;)

                                                                                                      Better write raw machine code directly onto the disk. Using, perhaps, a magnetized needle and a steady hand, or maybe a butterfly.

                                                                                                      1. 2

                                                                                                        My bootstrapping concept was having the device boot a program from ROM that takes in the user-supplied, initial program via I/O into RAM. Then passes execution to it. You enter the binary through one of those Morse code things with four buttons: 0, 1, backspace, and enter. Begins executing on enter.

                                                                                                        Gotta input the keyboard driver next in binary to use a keyboard. Then the display driver blind using the keyboard. Then storage driver to save things. Then, the OS and other components. ;)

                                                                                                      2. 1

                                                                                                        If I deploy three Go apps on top of a bare OS (picked Go since it has static binaries), and the Nginx server in front of all 3 of them uses OpenSSL, then I have one OpenSSL to patch whenever the inevitable CVE rolls around. If I deploy three Docker container apps on top of a bare OS, now I have four OpenSSLs to patch - three in the containers and one in my base OS. This complexity balloons very quickly which is terrible for user control. Hell, I have so little control over my one operating system that I had to carefully write a custom tool just to make sure I didn’t miss logfile lines in batch summaries created by cron. How am I supposed to manage four? And three with radically different tooling and methodology to boot.

                                                                                                        And Docker upstream, AFAIK, has provided nothing to help with the security problem which is probably why known security vulnerabilities in Docker images are rampant. If they have I would like to know because if it’s decent I would switch to it immediately. See this blog post for more about this problem (especially including links) and how we “solved” it in pump.io (spoiler: it’s a giant hack).

                                                                                                        1. 3

                                                                                                          That’s not how any of this works. You package the bare minimum needed to run the app in the Docker container, then you front all your containers with a single Nginx server that handles SSL. Meanwhile, there are plenty of great tools, like Dokku for managing Docker based infrastructure. Here’s how you provision a server using Let’s Encrypt with Dokku:

                                                                                                          sudo dokku plugin:install https://github.com/dokku/dokku-letsencrypt.git
                                                                                                          dokku letsencrypt:auto-renew
                                                                                                          

                                                                                                          Viewing logs isn’t rocket science either:

                                                                                                          dokku logs myapp
                                                                                                          
                                                                                                          1. 1

                                                                                                            OK, so OpenSSL was a bad example. Fair enough. But I think my point still stands - you’ll tend to have at least some duplicate libraries across Docker containers. There’s tooling around managing security vulnerabilities in language-level dependencies; see for example Snyk. But Docker imports the entire native package manager into the “static binary”, and I don’t know of any tooling that can track problems in Docker images like that. I guess I could use Clair through Quay but… I don’t know. This doesn’t feel like as nice or as polished a solution somehow. As an image maintainer I’ve taken on a big manual burden: keeping up with native security updates in addition to the ones my application directly needs, when normally I could rely on admins (probably with lots of automation) to do that.

                                                                                                            1. 3

                                                                                                              you’ll tend to have at least some duplicate libraries across Docker containers

                                                                                                              That is literally the entire point. Application dependencies must be separate from one another, because even on a tight-knit team keeping n applications in perfect lockstep is impossible.

                                                                                                              1. 1

                                                                                                                OS dependencies are different than application dependencies. I can apply a libc patch on my Debian server with no worry because I know Debian works hard to create a stable base server environment. That’s different than application dependencies, where two applications are much more likely to require conflicting versions of libraries.

                                                                                                                Now, I run most of my stuff on a single server so I’m very used to a heterogeneous environment. Maybe that’s biasing me against Docker. But isn’t that the usecase we’re discussing here anyway? How someone with just a hobbyist server can run Mastodon?

                                                                                                                Thinking about this more, I feel like a big part of what bothers me about Docker, and therefore about Clair, is that there’s no package manifest. A Dockerfile does not count, because that’s not actually a package manifest, it’s just a list of commands. I can’t e.g. build a lockfile format on top of that, which is what tools like Snyk analyze. Clair is the equivalent of having to run npm install and then go trawling through node_modules looking for known vulnerable code, instead of just looking at the lockfile.

                                                                                                                More broadly, because Docker lacks any notion of a package manifest, it seems to me that while Docker images are immutable once built, the build process that leads you there cannot be made deterministic. This is what makes it hard to keep track of the stuff inside them. I will have to think about this more - as I write this comment I’m wondering if my complaints about duplicated libraries and tracking security are an instance of the XY problem, or if they really are separate things in my mind.

                                                                                                                Maybe I am looking for something like Nix or Guix inside a Docker container. Guix at least can export Docker containers; I suppose I should look into that.

                                                                                                                1. 2

                                                                                                                  OS dependencies are different than application dependencies.

                                                                                                                  Yes, agreed.

                                                                                                                  Thinking about this more I feel like a big part of what bothers me about Docker, and therefore about Clair, is that there’s no package manifest. Dockerfile does not count, because that’s not actually a package manifest, it’s just a list of commands. I can’t e.g. build a lockfile format on top of that, which is what tools like Snyk analyze.

                                                                                                                  You don’t need a container to tell you these things. Application dependencies can be checked for exploits straight from the code repo, i.e. brakeman. Both the Gemfile.lock and yarn.lock are available from the root of the repo.

                                                                                                                  The container artifacts are most likely built automatically for every merge to master, and that entails doing a full system update from the apt repository. So in reality, while not as deterministic as the lockfiles, the system deps in a container are likely to be significantly fresher than in a regular server environment.
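                                                                                                                  As a concrete illustration, both lockfiles can be audited straight from the repo root with off-the-shelf tools (tool names are real; exact flags may vary by version):

                                                                                                                  ```shell
                                                                                                                  # Check Gemfile.lock against the ruby-advisory-db (bundler-audit gem).
                                                                                                                  bundle-audit check --update

                                                                                                                  # Check yarn.lock against the npm advisory database.
                                                                                                                  yarn audit
                                                                                                                  ```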

                                                                                                              2. 1

                                                                                                                You’d want to track security vulnerabilities outside your images though. You’d do it at dev time, and update your Dockerfile with updated dependencies when you publish the application. Think of Docker as just a packaging mechanism. It’s same as making an uberjar on the JVM. You package all your code into a container, and run the container. When you want to make updates, you blow the old one away and run a new one.
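                                                                                                                That blow-it-away-and-rerun cycle might look like this in practice (image and container names are placeholders):

                                                                                                                ```shell
                                                                                                                # Rebuild the image after updating dependencies in the Dockerfile...
                                                                                                                docker build -t myapp:2.0 .

                                                                                                                # ...then remove the old container and start a fresh one from the new image.
                                                                                                                docker rm -f myapp
                                                                                                                docker run -d --name myapp myapp:2.0
                                                                                                                ```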

                                                                                                        2. 4

                                                                                                          I have only rarely used Docker, and am certainly no booster, so keep that in mind as I ask this.

                                                                                                          From the perspective of “install this giant blob of software”, do you see a docker deployment being that different from a single large binary? Particularly the notion of the control that you “give up”, how does that differ between Docker and $ALTERNATIVE?

                                                                                                          1. 14

                                                                                                            Ideally one would choose door number three, something not so large and unauditable. The complaint is not literally about Docker, but about the circumstances that have made Docker the most viable deployment option.

                                                                                                          2. 2

                                                                                                            You have the Dockerfile and can reconstruct the image. You haven’t given up control.

                                                                                                            1. 5

                                                                                                              Is there a youtube video I can watch of somebody building a mastodon docker image from scratch?

                                                                                                              1. 1

                                                                                                                I do not know of one.

                                                                                                        3. 3

                                                                                                          I totally agree as well, and I wish authors would s/Mastodon/Fediverse/ in their articles. As others have noted, Pleroma is another good choice and others are getting into the game - NextCloud added fediverse node support in its most recent release, as a for-instance.

                                                                                                          I tried running my own instance for several months, and it eventually blew up. In addition to the large set of dependencies, the system is overall quite complex. I had several devs from the project look at my instance, and the only thing they could say was that it was a “back-end problem” (my instance had stopped getting new posts).

                                                                                                          I gave up and am now using somebody else’s :) I love the fediverse though, it’s a fascinating place.

                                                                                                          1. 4

                                                                                                            I just use the official Docker containers. The tootsuite/mastodon container can be used to launch web, streaming, sidekiq, and even database migrations. Then you just need an nginx container, a redis container, a postgres container, and an optional Elasticsearch container. I run it all on a 2GB/1vCPU Vultr node (with the NJ data center block store, because you will need a lot of space) and it works fairly well (I only have ~10 users; small private server).
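                                                                                                            A rough sketch of that layout with plain `docker run` (the docker-compose.yml in the tootsuite/mastodon repo is the authoritative version; the commands, names, and networking details here are illustrative only):

                                                                                                            ```shell
                                                                                                            # One image, three roles: web (Puma), streaming API, and Sidekiq workers.
                                                                                                            docker run -d --name mastodon-web       tootsuite/mastodon bundle exec puma -C config/puma.rb
                                                                                                            docker run -d --name mastodon-streaming tootsuite/mastodon node ./streaming
                                                                                                            docker run -d --name mastodon-sidekiq   tootsuite/mastodon bundle exec sidekiq

                                                                                                            # Supporting services: reverse proxy, cache, database.
                                                                                                            docker run -d --name nginx    nginx:alpine
                                                                                                            docker run -d --name redis    redis:alpine
                                                                                                            docker run -d --name postgres postgres:alpine
                                                                                                            ```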

                                                                                                            In the past I would agree with you (and it’s the reason I didn’t try out Diaspora years ago when it came out), but containers have made it easier. I do realize they both solve and cause problems, and by no means think they’re the end-all of tech, but they do make running stuff like this a lot easier.

                                                                                                            If anyone wants to find me, I’m @djsumdog@hitchhiker.social

                                                                                                            1. 2

                                                                                                              Given that there’s a space for your Twitter handle, i wish Lobste.rs had a Mastodon slot as well :)

                                                                                                            2. 2

                                                                                                              Wait, you’re also forgetting systemd to keep all those processes humming… :)

                                                                                                              You’re right that this is clearly too much: I have run such systems for work (Rails is pretty common), but would probably not do that for fun. I am amazed by, and thankful for, the people who volunteer the effort to run all this on their weekends.

                                                                                                              Pleroma does look simpler… If I really wanted to run my own instance, I’d look in that direction. ¯\_(ツ)_/¯

                                                                                                              1. 0

                                                                                                                I’m waiting for urbit.org to reach usability, which, by my own arbitrary standard of usability, I expect to happen late this year. Then the issue is coming up to speed on a new language and an integrated network, OS, and build system.

                                                                                                                1. 2

                                                                                                                  Urbit is apparently creating a feudal society. (Should note that I haven’t really dug into that thread for several years and am mostly taking @pushcx at his word.)

                                                                                                                  1. 1

                                                                                                                    The feudal society meme is just not true, and, BTW, Yarvin is no longer associated with Urbit. https://urbit.org/primer/

                                                                                                                2. 1

                                                                                                                  I would love to have (or make) a solution that could be used locally with SQLite and in AWS with Lambda, API Gateway, and DynamoDB. That would allow scaling cost and privacy/control.

                                                                                                                  1. 3

                                                                                                                    https://github.com/deoxxa/don is sort of in that direction (single binary, single file sqlite database).

                                                                                                                1. 3

                                                                                                                  Aw, I remember lists like this from back around 2008 or so on Twitter, when Twitter was organic and fun. Things like this make me like Mastodon even more.

                                                                                                                  1. 4

                                                                                                                    thoughtbot is hiring. We are a software consultancy, still small in the grand scheme of things (~90 folks) with offices in Boston, New York, London, San Francisco, Austin, and Raleigh. Lots of web based projects in Rails, Elm, React, etc. You can view our jobs here or reach out to me directly: edward (a) thoughtbot.com

                                                                                                                    1. 3

                                                                                                                      I know that Thoughtbot is typically not open to remote workers - I live in Portland, ME, which is about 2 hours from Boston. I could come in to the office a couple of days per week if I could work remotely the remaining days. Do you know if the culture at Thoughtbot would support that sort of setup?

                                                                                                                      I realize you probably can’t speak for the entire company. :)

                                                                                                                      1. 2

                                                                                                                        Hey @mosburger 🙂 I believe we’re not looking for remote workers at the moment, sorry. But if you are willing to make the commute I’d absolutely encourage you to apply. Sorry we can’t be more flexible.

                                                                                                                        1. 2

                                                                                                                          Greetings from the 207!

                                                                                                                        2. 2

                                                                                                                          I’ll vouch for Thoughtbot’s incredible friendliness. Everyone I’ve ever met from there has been a Gem.

                                                                                                          I used to bump into a group of them in SF at a bar near their office after a training, and they always said hi. Pleasant folks, and they really care about software.

                                                                                                                        1. 3

                                                                                                                          Just reading this description makes me feel sick and anxious. Terrifying that such a small object can have such a fast & terrible effect.

                                                                                                                          1. 1

                                                                                                                            Yeah, the fact that a 6 year old girl was killed was heartbreaking. :(