1.

I'm working on a project where I want end users to be able to create their own web API instances through a point-and-click web panel. Each API will have its own URL (possibly a subdomain) and will be backed by its own worker: maybe a Docker container, a process, an LXD container, etc.

I am not sure what stack I should use for this, so I am trying to gather information.

Here are a few options I've considered:

  • Erlang/Elixir: this should be up to the task, but there is number crunching involved, and I am not keen on giving up the possibility of implementing critical parts in a language that compiles to native binaries.
  • J2EE: wrong decade
  • AWS ECS: I like this concept, and I can build working images as small as 9 MB. But I haven't found an easy way to estimate the price or viability of this. Could I easily deploy, say, 50,000 containers? Would that not break the bank?
  • FaaS: I haven't tried any FaaS solution; same questions as for ECS.
  • Kubernetes: I have worked with k8s for the last few years and hate it. Too bloated, with an absurdly huge API surface. Sluggish, resource-hungry, complicated, lots of overhead. No simple way of doing simple things.
  • Custom made: how would you build this?

Please share your thoughts and experiences.

  2.

    ECS has two options: local (EC2-backed instances you manage) and Fargate.

    If you do local, all you need to do is add enough compute capacity, e.g. with reserved instances. You could also try a Spot Fleet, which consumes cheaper unused capacity but can die with a five-minute warning. If you do Fargate, it's more expensive and slower to launch, but you can have really variable node counts; if you go local, you have to provision and manage the raw capacity yourself.

    If you can get your workers into Lambda, you'll have capacity for 1,000 concurrent invocations shared across all workers (though they can be different Lambda functions). You can use a Lambda layer to distribute the shared userland.
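
    On that 1,000-concurrency figure: a quick way to sanity-check whether a workload fits under it is Little's law (concurrent executions ≈ request rate × average duration). The traffic numbers below are made up for illustration, and note the 1,000 figure is a default regional quota that AWS can raise on request:

```python
import math

def required_concurrency(requests_per_second, avg_duration_seconds):
    """Little's law: concurrent executions ~= arrival rate x mean duration."""
    return math.ceil(requests_per_second * avg_duration_seconds)

# e.g. 2,000 req/s across all APIs at 300 ms average duration:
print(required_concurrency(2000, 0.3))  # → 600, comfortably under 1,000
```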

    1.

      If I go local, how do I manage capacity? How do I decide how many containers run on each instance?

      1.

        You add/remove compute nodes in the ECS cluster using the AWS API. Alternatively, you can attach a Spot Fleet to it, if you can architect your nodes so their work finishes within the five-minute warning when they go down.

        You decide capacity by looking at the memory/CPU usage of the host nodes. There's a reason I used Fargate: so I don't have to do any of that.
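
        For the "how many containers per instance" question, a minimal sketch of the arithmetic: take whichever of CPU or memory binds first, with some headroom reserved for the OS and the ECS agent. The instance and task sizes below are assumptions for illustration:

```python
import math

def containers_per_instance(inst_vcpu, inst_mem_gb, task_vcpu, task_mem_gb,
                            headroom=0.9):
    """Tasks that fit on one node; `headroom` reserves a slice for OS/agent."""
    by_cpu = (inst_vcpu * headroom) // task_vcpu
    by_mem = (inst_mem_gb * headroom) // task_mem_gb
    return int(min(by_cpu, by_mem))

def instances_needed(total_tasks, per_instance):
    return math.ceil(total_tasks / per_instance)

# An m5.large-sized node (2 vCPU, 8 GB) running 0.25 vCPU / 0.5 GB tasks:
fit = containers_per_instance(2, 8, 0.25, 0.5)
print(fit, instances_needed(50_000, fit))  # CPU binds first here
```

        The same calculation, fed with real per-task usage numbers from the host metrics, is essentially what an autoscaling policy does for you.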

    2.

      The first question is: Are you building this project to learn a new language or system, or are you building this project to achieve the end result?

      I’ve found it helpful to distinguish between learning projects and goal projects.

      If you’re just trying to achieve a goal: build your project in whatever language and platform are most familiar to you. Try to get something working as soon as possible and keep iterating. If you get a ton of users and need to scale, then the exercise of fixing what you have or migrating to a different system is probably more useful than picking the perfect thing from the start. It’s likely you won’t have to deal with scale for a long time, and if you do, it’s a good problem to have.

      If this is a learning project: Pick whatever seems most interesting or most unfamiliar and go from there.