What does one spend $10000/month on? 100 x $100 large EC2 instances? No way to trim that down?
The HN comments on the same article indicated many ways to do better with both virtual and bare-metal. The AWS employee pointed out that you have to homebrew a lot of stuff, especially around scaling and reliability, if you go that route. Yet many people apparently did it well enough, with almost all spending less on bare metal and some getting better performance. One person wisely countered to the AWS person that switching to their Lambda stuff doesn't get rid of operations staff: you just need to do a different type of operations, managing Lambda and AWS, which might or might not be cheaper. I'll add "might not" if it's in Silicon Valley, given labor costs there.
There are probably several ways to accomplish the same thing, but that's not quite what I'm asking. Where did the previous design go so terribly wrong?
I see a lot of these articles. Everything was terrible, so we threw it all away and rewrote it from scratch, and now it works as expected. That's great, but the expected result is, well, expected. You could replace Lambda in the title with Node, Go, Google, React, Elm, HTML5, Heroku, native, OCaml, Mongo, bare metal, Postgres, etc., and there's some article just like it. But I don't know how much there is to learn from that, compared to studying the mistakes of the system that didn't work. Now an article about how something was fixed, that's worth reading.
“grayrest 16 hours ago:
Rewrite the Readability Parser API
That’s at least the third rewrite of the parser. The initial version was a js bookmarklet, I did the rewrite into Python in 2008 which kind of sat around and then got turned into the app by a different group about a year later. Fun to see it go back to JS.”
Maybe that will answer your question for at least this one. In general, these articles are about stuff that is usually not engineered in the first place. You’d have to read and analyze a lot of them from an empirical standpoint to get any real lessons. That’s tricky, too, due to all the variables you’d have to identify and/or eliminate.
This was my curiosity as well. It sounds like a database was used as some sort of misguided cache?
Great article! I recently built a small thing to learn Lambda with, and found the tradeoffs interesting. In return for all the cost and infrastructure management benefits, switching to Lambda requires some mild programming and deployment contortions, and places a much heavier burden on tests. I don’t think development/staging environments are well-covered by AWS Lambda, API Gateway, or friends.
If you're using CloudFormation, I've heard reports from friends that dev/staging environments with Lambda are both simple and essentially free to run.
As they mentioned, all but 18.5M requests were cached. That's about 7 req/s on average, though admittedly diurnal/workweek load patterns will make it vary a lot.
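The ~7 req/s figure checks out as a back-of-the-envelope calculation (a sketch, assuming a 30-day month and taking the 18.5M uncached requests from the article):

```python
# Sanity-check the ~7 req/s average. Assumptions: 18.5M uncached
# requests per month (from the article), a 30-day month.
uncached_requests = 18_500_000
seconds_per_month = 30 * 24 * 60 * 60  # 2,592,000 s

avg_rps = uncached_requests / seconds_per_month
print(round(avg_rps, 1))  # ~7.1 req/s on average
```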
They could run it a lot cheaper, but for a company without a lot of focus on that piece of software, $370/month seems like a fine investment to not care about scaling themselves or really touching it at all.
That said, an ASG would scale on CPU just fine. A single c4.large (~$75/mo) should be enough CPU for most applications. We served 2k req/s per c4.large of fairly CPU-light work (JSON parsing, filtering, Kafka writes, JSON response generation).
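Putting the numbers from this thread side by side (all assumed from the comments above: ~$370/mo for the Lambda setup, ~$75/mo for one c4.large, ~7 req/s average load, ~2,000 req/s capacity per instance):

```python
# Rough cost/headroom comparison. Every number here is an assumption
# pulled from the surrounding comments, not measured data.
lambda_monthly = 370       # reported Lambda + API Gateway cost, $/mo
c4_large_monthly = 75      # approximate on-demand c4.large cost, $/mo
avg_rps = 7                # average uncached request rate
capacity_rps = 2_000       # observed per-instance throughput for light work

savings = lambda_monthly - c4_large_monthly
headroom = capacity_rps / avg_rps

print(f"monthly savings on one instance: ${savings}")
print(f"capacity headroom: ~{headroom:.0f}x average load")
```

Of course, a single instance has no redundancy, which is part of what the $370/mo is buying.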
You might be able to get away with a fleet of micros instead for even less :)
This is a great article on how a serverless approach can reduce overhead for many applications. I'm still seeing a lot of the cost land in the API Gateway portion of the project, which caught me by surprise. Would this cost be less than standing up a simple web app that accepts incoming requests and fires off the associated Lambda functions using the AWS SDK as the invocation mechanism instead? We've had great luck with this approach at Backand, and have managed to reduce our costs even further by relying on a single EC2 instance that dispatches calls to Lambda functions as appropriate - you could almost consider it a hybrid approach.
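The dispatcher idea can be sketched in a few lines with boto3. This is a minimal sketch, not Backand's actual implementation; the function name `parse-article` and the payload shape are made up for illustration, and the client is passed in so it can be stubbed in tests:

```python
import json

def dispatch(lambda_client, function_name, payload):
    """Invoke a Lambda function synchronously via the AWS SDK,
    bypassing API Gateway, and return its decoded JSON response."""
    response = lambda_client.invoke(
        FunctionName=function_name,
        InvocationType="RequestResponse",  # synchronous invocation
        Payload=json.dumps(payload).encode("utf-8"),
    )
    # boto3 returns the function result as a streaming body
    return json.loads(response["Payload"].read())

# In a real service you'd build the client once, e.g.:
#   import boto3
#   client = boto3.client("lambda")
#   result = dispatch(client, "parse-article", {"url": "https://example.com"})
```

Note that with this approach you're paying per Lambda request and for the dispatcher instance, but not the per-request API Gateway fee, which is where the savings come from.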
One other thing to look into would be a sole-provider serverless platform, as opposed to the combination of the Serverless Framework (for deployment and CloudFormation management) and API Gateway. The costs may be a little higher, but by handing over a large portion of the management to a third party like Backand or Firebase, they could potentially save on data transfer and API request costs. You'd still need a devops-oriented person to establish security schemes, but with this approach you can outsource scaling, security, the DB, and other functionality. I'll grant it may not be economical at the OP's scale, but for smaller projects looking to spin up quickly it could be a valid alternative.