1. 6

    I’m in the emailing business, too. We host our own MTAs with our own IP address space, all for under 500 EUR a month, on bare metal in a data center in Frankfurt, Germany.

    But you can have it even cheaper: you can get a /24 for around 100 EUR a month, and you can announce that address space through cloud hosts such as Vultr, which are super fast, reliable, and not really expensive. You can use all of those IPs on one VM or split them up however you want.

    1. 2

      Sounds like a good option, thanks! Could you tell me where I could buy a /24 range?

      1. 3

        There are a couple of options:

        • Become a LIR (RIPE NCC member) yourself and apply for the /24 waiting list.
        • There are a couple of LIRs offering IP space to lease. They usually sponsor an ASN for you as well.
        • Become a LIR and buy IP space on the secondary markets. There are a few of them; current IPv4 prices run up to 30 USD per IP. The regular LIR fees still apply, though.

        We also have spare IP space available. PM me if you’re interested in leasing.

      2. 2

        AWS supports hosting your own IP range on EC2 now, as well: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-byoip.html
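
        If it helps, the BYOIP flow is roughly: provision the prefix (with a signed authorization), advertise it, then allocate addresses from the resulting pool. A rough sketch with boto3; the region, prefix, and authorization values below are placeholders:

          import boto3

          ec2 = boto3.client("ec2", region_name="eu-central-1")  # region is an assumption

          # 1. Provision the prefix you control (requires the signed authorization / ROA set up beforehand).
          ec2.provision_byoip_cidr(
              Cidr="203.0.113.0/24",  # placeholder prefix
              CidrAuthorizationContext={
                  "Message": "<authorization message>",
                  "Signature": "<signature>",
              },
          )

          # 2. Once the pool is provisioned, start announcing the prefix from AWS.
          ec2.advertise_byoip_cidr(Cidr="203.0.113.0/24")

          # 3. Check the status of the pool.
          print(ec2.describe_byoip_cidrs(MaxResults=10)["ByoipCidrs"])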

        1. 1

          Good point! However, it’s not yet available in all regions, nor for all netblock statuses.

        2. 1

          Out of curiosity, how many watts of power are you getting at 500 EUR/mo? My vague recollection from a few years ago was that colo providers would hand out rack units for negligible money but charge something like 100 EUR/mo per ~200 W, since every joule the servers consume has to be paid for twice: once at the point of use, then again for the HVAC required to dump the heat. (But all the colo providers I worked with had racks and racks going spare, because servers are so power-dense now that their AC and PSUs couldn’t keep up if all the space were filled with hot servers.)
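
          Back-of-the-envelope, that figure seems roughly plausible, assuming something like 0.30 EUR/kWh all-in and that every watt really is paid for twice via cooling (both numbers are assumptions):

            # Rough sanity check of the "100 EUR/mo per ~200 W" recollection.
            # Electricity price and the 2x cooling overhead are assumptions.
            watts = 200
            hours_per_month = 730           # ~24 * 365 / 12
            eur_per_kwh = 0.30              # assumed all-in price
            cooling_factor = 2.0            # pay once at the plug, once again for HVAC

            kwh = watts / 1000 * hours_per_month        # ~146 kWh
            cost = kwh * eur_per_kwh * cooling_factor   # ~88 EUR/month
            print(f"{kwh:.0f} kWh -> ~{cost:.0f} EUR/month")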

          1. 2

            My numbers are not really representative, but we pay around 30 EUR per 150 W or so. Luckily for us we are housed in a shared rack, and heat was only an issue once, when the data center provider had an AC outage.

            I agree. Some DCs house super old hardware that dissipates a lot of power as heat, so not only did they max out the available power, they were also forced to keep units spare.

            1. 2

              Thanks! I have a feeling my numbers might be really off; it’s been a while.

          2. 1

            Question from someone who doesn’t know anything about MTAs: Why do you need so many IP addresses? Is it just that you need many servers to handle the amount of emails and thus 1 IP address per server, or is it something else that I am missing?

            1. 1

              It’s mostly per-IP reputation, which can go south depending on the volume you are sending out.
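
              The simplest mechanical piece of spreading the volume is just binding each outbound SMTP connection to a different local source address from the pool (assuming the extra IPs are already configured on the host). A minimal sketch, with placeholder addresses:

                import itertools
                import smtplib

                # Placeholder pool of local addresses already configured on this machine.
                SOURCE_IPS = ["198.51.100.10", "198.51.100.11", "198.51.100.12"]
                _pool = itertools.cycle(SOURCE_IPS)

                def send(msg, rcpt, mx_host="mx.example.net"):
                    # msg is an email.message.EmailMessage; round-robin over the pool
                    # and bind the outgoing socket to the chosen source IP.
                    local_ip = next(_pool)
                    with smtplib.SMTP(mx_host, 25, source_address=(local_ip, 0)) as smtp:
                        smtp.send_message(msg, to_addrs=[rcpt])

              In practice the choice is usually stickier than plain round-robin (per-IP warm-up schedules, per-customer pools and so on), but the binding mechanism is the same.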

              1. 2

                So you basically spread out the traffic over the IP addresses to keep your reputation stable. I’d love to know more about how that works. Do you rotate through the addresses? Have a container for every address?

          1. 18

            I get where he’s coming from, but I think the really valuable thing about microservices is the people-level organizational scaling that they allow. The benefits from having small teams own well-defined (in terms of their API) pieces of functionality are pretty high. It reduces the communication required between teams, allowing each team to move at something more closely approaching “full speed”. You could, of course, do this with a monolith as well, but that requires much more of the discipline to which the author refers.

            1. 15

              Why do you need network boundaries to introduce this separation? Why not have different teams work on different libraries, and then have a single application that ties those libraries together?

              1. 4

                Network boundaries easily allow teams to release independently, which is key to scaling the number of people working on the application.

                1. 1

                  Network boundaries easily allow teams to release independently

                  …Are you sure?

                  Independent systems are surely easier to release independently, but that’s only because they’re independent.

                  I think the whole point of a “microservice architecture” is that it’s one system with its multiple components spread across multiple smaller interdependent systems.

                  which is key to scaling the number of people working on the application

                  What kind of scale are we talking?

                  1. 1

                    Scaling by adding more people and teams to create and maintain the application.

                    1. 2

                      Sorry, I worded that question ambiguously (although, the passage I quoted already had “number of people” in it). Let me try again.

                      At what number of people writing code should an organisation switch to a microservices architecture?

                      1. 1

                        That’s a great question. There are anecdotes of teams with 100s of people making a monolith work (Etsy for a long time IIRC), so probably more than you’d think.

                        I’ve experienced a few painful symptoms of when monoliths were getting too big: individuals or teams locking large areas of code for some time because they were afraid to make changes in parallel, “big” releases taking days and requiring code freezes on the whole code base, difficulty testing and debugging problems on release.

                    2. 1

                      I think the whole point of a “microservice architecture” is that it’s one system with its multiple components spread across multiple smaller interdependent systems.

                      While this is often the reality, it misses the aspirational goal of microservices.

                      The ideal of software design is “small pieces, loosely joined”. This ideal is hard to attain. The value of microservices is that they provide guardrails to help keep sub-systems loosely joined. They aren’t sufficient by themselves, but they seem to nudge us in the right, more independent direction a lot of the time.

                  2. 2

                    Thinking about this more was interesting.

                    A [micro]service is really just an interface with a mutually agreed upon protocol. The advantage is the code is siloed off, which is significant in a political context: All changes have to occur on the table, so to speak. To me, this is the most compelling explanation for their popularity: they support the larger political context that they operate in.

                    There may be technical advantages, but I regard those as secondary. Never disregard the context under which most industrial programming is done. It is also weird that orgs have to erect barriers to prevent different teams from messing with each other’s code.

                  3. 12

                    The most common pattern of failure I’ve seen occurs in two steps:

                    a) A team owns two interacting microservices, and the interface is poor. Neither side works well with anyone but the other.

                    b) A reorg happens, and the two interacting microservices are distributed to different teams.

                    Repeat this enough times, and all microservices will eventually have crappy interfaces designed purely for the needs of their customers at one point in time.

                    To avoid this it seems to me microservices have to start out designed for multiple clients. But multiple clients usually won’t exist at the start. How do y’all avoid this? Is it by eliminating one of the pieces above, or some sort of compensatory maneuver?

                    1. 4

                      crappy interfaces designed purely for the needs of their customers…

                      I’m not sure by which calculus you would consider these “crappy”. If they are hard to extend towards new customer needs, then I would agree with you. This problem is inherent in all API design, though a microservice architecture forces you to do more of it, and thus you can get it wrong more often.

                      Our clients tend to be auto-generated from API definition files. We can generate various language-specific clients based on these definitions and these are published out to consumers for them to pick up as part of regular updates. This makes API changes somewhat less problematic than at other organizations, though they are by no means all entirely automated.

                      …at one point in time

                      This indicates to me that management has not recognized the need to keep these things up to date as time goes by. Monolith vs. microservices doesn’t really matter if management will not invest in keeping the lights on with respect to operational excellence and speed of delivery. tl;dr if you’re seeing this, you’ve got bigger problems.

                      1. 1

                        Thanks! I’d just like clarification on one point:

                        …management has not recognized the need to keep these things up to date as time goes by…

                        By “keeping up to date” you mean changing APIs in incompatible ways when necessary, and requiring clients to upgrade?


                        crappy interfaces designed purely for the needs of their customers…

                        I’m not sure by which calculus you would consider these “crappy”. If they are hard to extend towards new customer needs, then I would agree with you.

                        Yeah. Most often this is because the first version of an interface is tightly coupled to an implementation. It accidentally leaks implementation details and so on.

                        1. 1

                          By “keeping up to date” you mean changing APIs in incompatible ways when necessary, and requiring clients to upgrade?

                          Both, though the latter is much more frequent in my experience. More on the former below.

                          Yeah. Most often this is because the first version of an interface is tightly coupled to an implementation. It accidentally leaks implementation details and so on.

                          I would tend to agree with this, but I’ve found that this problem mostly solves itself if the APIs get sufficient traction with customers. As you scale the system, you find these kinds of incongruities and you either fix the underlying implementation in an API-compatible way or you introduce new APIs and sunset the old ones. All I was trying to say earlier is, if that’s not happening, then either a) the API hasn’t received sufficient customer interest and probably doesn’t require further investment, or b) management isn’t prioritizing this kind of work. The latter may be reasonable for periods of time, e.g. prioritizing delivery of major new functionality that will generate new customer interest, but can’t be sustained forever if your service is really experiencing customer-driven growth.

                          1. 1

                            Isn’t the problem now just moved to the ops team, which has to grow in size in order to support the deployment of all these services, since they need to ensure that compatible versions talk to compatible versions, if that is even possible? What I’ve found most problematic with any microservices deployment is that ops teams suffer more: new roles are needed just to coordinate all these “independent” small teams of developers, all for the sake of reducing the burden on the programmers. One can implement pretty neat monoliths.

                            1. 2

                              We don’t have dedicated “ops” teams. The developers that write the code also run the service. Thus, the incentives for keeping this stuff working in the field are aligned.

                    2. 6

                      It reduces the communication required between teams, allowing each team to move at something more closely approaching “full speed”.

                      Unfortunately, this has not been my experience. Instead, I’ve experienced random parts of the system failing because someone changed something and didn’t tell our team. CI would usually give a false sense of “success” because everyone’s microservice would pass their own CI pipeline.

                      I don’t have a ton of experience with monoliths, but in a past project I do remember it was nice just being able to call a function and not have to worry about network instability. Deploying just one thing, instead of N things, and not having to worry about service discovery was also nicer. Granted, I’m not sure how this works at super massive scale, but at small to medium scale it seems nice.

                      1. 2

                        Can you give an example where this really worked out that way for you? These are all the benefits one is supposed to get, but the reality often looks different, in my experience.

                        1. 10

                          I work at AWS, where this has worked quite well.

                          1. 5

                            Y’all’s entire business model is effectively shipping microservices, though, right? So that kinda makes sense.

                            1. 20

                              We ship services, not microservices. Microservices are the individual components that make up a full service. The service is the thing that satisfies a customer need, whereas microservices do not do so on their own. A single service can comprise anywhere from a handful of microservices up to several hundred, but they all serve to power a coherent unit of customer value.

                              1. 4

                                Thank you for your explanation!

                            2. 4

                              AWS might be the only place I’ve heard of which has really, truly nailed this approach. I have always wondered - do most teams bill each other for use of their services?

                              1. 5

                                They do, yes. I would highly recommend that approach to others, as well. Without that financial pressure, it’s way too easy to build profligate waste into your systems.

                                1. 1

                                  I think that might be the biggest difference.

                                  I’d almost say ‘we have many products, often small, and we often use our own products’ rather than ‘we use microservices’. The latter, to me, implies that the separation stops at code - but from what I’ve read it runs through the entire business at AMZ.

                                2. 1

                                  It’s worked well at Google too for many years but we also have a monorepo that makes it possible to update all client code when we make API changes.

                          1. 2

                            What’s a COLA in this context?

                            1. 6

                              Cache-oblivious lookahead array, a type of cache-oblivious data structure. Cache-obliviousness is a property whereby an algorithm performs well even without knowing the size or depth of its cache hierarchy a priori. A fractal tree is a type of cache-oblivious data structure. Here’s a pretty good overview paper for COLA to get you started if you want to learn more.
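
                              If it helps to see the shape of it, here’s a toy sketch of the cascading-merge idea behind a lookahead array; a real COLA adds the “lookahead” pointers between levels (and deamortizes the merges), which this sketch omits:

                                import heapq
                                from bisect import bisect_left

                                class ToyCOLA:
                                    """Levels of sorted arrays; level k holds 0 or 2**k keys."""

                                    def __init__(self):
                                        self.levels = []

                                    def insert(self, key):
                                        carry = [key]
                                        k = 0
                                        # Merge downward until an empty level absorbs the carry,
                                        # like carrying bits in binary addition.
                                        while True:
                                            if k == len(self.levels):
                                                self.levels.append([])
                                            if not self.levels[k]:
                                                self.levels[k] = carry
                                                return
                                            carry = list(heapq.merge(self.levels[k], carry))
                                            self.levels[k] = []
                                            k += 1

                                    def __contains__(self, key):
                                        # Without lookahead pointers, each level costs its own binary search.
                                        for level in self.levels:
                                            i = bisect_left(level, key)
                                            if i < len(level) and level[i] == key:
                                                return True
                                        return False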

                              1. 2

                                Thanks for the explanation! Although I didn’t know COLA, I did recently submit Fractal Tree Indexes since someone told me Tokutek put them to good use. Thanks to your comment, I just found some stuff that was previously submitted or soon to be submitted that people will like on this topic. :)

                              2. 1

                                I looked it up last night and just now with quite a few search terms. I’m getting nothing. Hopefully not that important…

                              1. 5
                                Free software-ness of various popular crypto messengers
                                platform android iOS desktop cli server
                                signal   yes     yes yes     no  yes
                                wire     yes     yes yes     no  yes
                                whatsapp no      no  no      no  no
                                telegram yes     yes yes     no  no
                                
                                1. 2

                                  What is the Wire CLI?

                                  1. 2

                                    whoops… I thought that https://github.com/wireapp/coax was CLI. It is not! I have edited the table…

                                  2. 1

                                    Where is the Signal server code to be found?

                                  1. 5

                                      Great guide! One small note: you should use AWS credentials associated with an IAM role that has the least privileges required to get the job done. In this case, using creds from an IAM role that can only write to the particular S3 bucket and invalidate the specific CloudFront distribution would be most secure.
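
                                      For example, something along these lines as the role’s policy document (the bucket name, account ID, and distribution ID are placeholders, and as the reply below notes, CloudFront’s invalidation permission may not actually be this granular today):

                                        import json

                                        # Least-privilege sketch for a static-site deploy role; all ARNs are placeholders.
                                        policy = {
                                            "Version": "2012-10-17",
                                            "Statement": [
                                                {   # upload objects to (and list) the one deployment bucket
                                                    "Effect": "Allow",
                                                    "Action": ["s3:PutObject", "s3:ListBucket"],
                                                    "Resource": [
                                                        "arn:aws:s3:::example-bucket",
                                                        "arn:aws:s3:::example-bucket/*",
                                                    ],
                                                },
                                                {   # invalidate only the one CloudFront distribution
                                                    "Effect": "Allow",
                                                    "Action": ["cloudfront:CreateInvalidation"],
                                                    "Resource": "arn:aws:cloudfront::123456789012:distribution/EXAMPLE",
                                                },
                                            ],
                                        }
                                        print(json.dumps(policy, indent=2))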

                                    1. 4

                                        This is correct – although right now there is a bug with IAM CloudFront rules, which means you must have full CloudFront access to do even a simple invalidation :(

                                    1. 3

                                      I’ve not read this yet. I’m cautious about new kinds of trees. I didn’t see much in a scan of the readme about balancing. My initial question is how this compares to a Scapegoat Tree.

                                      1. 4

                                          The author says at the end that it’s identical to a fractal tree.

                                      1. 3

                                          This post is describing CQRS, which has existed and been pretty well fleshed out for some time now. The post also seems to misunderstand the term “domain-driven design”, since CQRS is a key technique in DDD.
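
                                          For anyone who hasn’t run into the term: the core of CQRS is just routing writes and reads through separate models. A bare-bones sketch, with an in-memory list standing in for the event store:

                                            from collections import defaultdict
                                            from dataclasses import dataclass

                                            @dataclass
                                            class DepositMade:          # an event produced by the write side
                                                account_id: str
                                                amount: int

                                            class CommandHandler:       # write model: validates commands, emits events
                                                def __init__(self, event_log):
                                                    self.event_log = event_log

                                                def deposit(self, account_id, amount):
                                                    if amount <= 0:
                                                        raise ValueError("deposit must be positive")
                                                    self.event_log.append(DepositMade(account_id, amount))

                                            class BalanceView:          # read model: built by projecting the events
                                                def __init__(self):
                                                    self.balances = defaultdict(int)

                                                def apply(self, event):
                                                    if isinstance(event, DepositMade):
                                                        self.balances[event.account_id] += event.amount

                                                def balance(self, account_id):
                                                    return self.balances[account_id]

                                            # Commands go to the write model; queries only ever hit the read model.
                                            events, view = [], BalanceView()
                                            CommandHandler(events).deposit("acct-1", 100)
                                            for e in events:
                                                view.apply(e)
                                            print(view.balance("acct-1"))   # 100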

                                        1. 1

                                          I didn’t even recognize this as CQRS, but it makes sense now that you mention it. I first learned of CQRS via the Hoplon framework, which linked to Martin Fowler’s article.

                                        1. 1

                                          I see future customer-base-impacting issues with this move for JetBrains. E.g., a JetBrains customer who uses and likes their software can, for whatever reason, no longer afford the subscription. They will of course find something else to use, because they have to get their work done. They may or may not like it as much, but it’s quite unlikely they will ever come back to JetBrains. Perhaps this is offset by the rate of new customer acquisition, but perhaps it’s not. Adobe found that it was not, and they had much “stickier” products than JetBrains sells, in that there were sometimes no good alternatives to the software Adobe sold.