The article fails to account for reserved instance pricing, the sustained use discount, the free tier, and spot or pre-emptable instances.
Pricing on AWS/GCP is complex, but you can save a lot of money if you’re careful.
Though to be fair that complexity is one way they make money. You could save a lot of money, but it’s all too easy to overlook something.
Hi, OP here.
I believe I am taking Google’s sustained use discount into account.
I haven’t included the free tier because it is marginal and I think most organisations will exhaust it fairly quickly.
I think spot and preemptible instances are not a general product but a specialist one: only some applications of virtual machines can tolerate getting evicted.
I do discuss the issue of complexity later on. I don’t think it normally works to the advantage of the customer.
My intuition (and experience!) is that most real world AWS customers get bamboozled by the incredible complexity of pricing (especially when it’s presented in non-human readable units like “0.034773 per vCPU hour”) and wind up paying far, far over what the going rate of renting a computer should be.
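For what it’s worth, turning a figure like that into a monthly number is a two-line exercise. A minimal sketch, assuming 2 vCPUs and roughly 730 hours in a month (the rate is just the one quoted above, not an official price):

    # Rough conversion of a per-vCPU-hour rate into a monthly figure.
    PRICE_PER_VCPU_HOUR = 0.034773   # the rate quoted above
    VCPUS = 2
    HOURS_PER_MONTH = 730            # ~365.25 * 24 / 12

    monthly = PRICE_PER_VCPU_HOUR * VCPUS * HOURS_PER_MONTH
    print(f"~${monthly:.2f}/month for {VCPUS} vCPUs")   # ~$50.77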
Hey OP, could you add Hetzner Cloud servers? Should be a whole lot cheaper than anything else you’ve got on there if I’m seeing this correctly.
There’s also a built-in Terraform provider https://www.terraform.io/docs/providers/hcloud/index.html
Agree, it seems to be 10 Euro/month for 8GB in the cloud plan.
I’m running a root server with 32GB of memory and 2TB hard disk at Hetzner for ~30 Euro / month (from the Serverbörse). I do not know about their support at all, but I am quite sure that from a US IT company I could only expect automated mails, anyway. So Hetzner cannot be any worse there.
Of course, root server and cloud hosting are two totally different beasts, but in my humble opinion it’s a choice the US-centric tech community too often does not even consider. The mantra is always the application has to be horizontally scalable.
It should just be noted that Serverbörse is usually based on desktop machines and the like, often older CPUs and servers, so you might not want to rely on that if your application stack is considered mission-critical.
As for cloud vs. classic servers, it’s a different beast completely, yes. A lot of the internet wouldn’t be alive if you had to pay a Linux admin to configure your servers, deploy your apps and pay attention to traffic, script kiddies etc. But not having a lot of the internet online could perhaps be considered a good thing, eh?
On preemptible/spot, both providers give /liberal/ shutdown warnings; it is possible to run almost anything aside from long-lived connection hosts (e.g. websockets) or extremely stateful applications like databases. Use cases that don’t fit spot are approaching a minority in 2020 with current infrastructure trends.
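For anyone wondering what handling those warnings looks like, here is a minimal sketch for the AWS side. It assumes IMDSv1-style metadata access (with IMDSv2 you would fetch a session token first), and the drain step at the end is a placeholder:

    # Poll the spot interruption notice and drain before AWS reclaims the instance.
    import time
    import requests

    NOTICE_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

    def interruption_pending() -> bool:
        # Returns 404 until an interruption is scheduled, then JSON with action and time.
        try:
            return requests.get(NOTICE_URL, timeout=1).status_code == 200
        except requests.RequestException:
            return False

    while not interruption_pending():
        time.sleep(5)   # AWS gives roughly a two-minute warning
    # drain connections, checkpoint work, deregister from the load balancer here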
Re: DigitalOcean, I did a migration a few years back where AWS came out vastly cheaper than the equivalent configuration on DO mostly due to AWS instance reservations, which are a trivial factor to plan for when you’re already planning a large migration.
The one area I couldn’t possibly defend the cloud providers is bandwidth pricing. All other costs are a footnote compared to it for any account doing high traffic serving
Not an expert on this, but while it seems it is possible to run lots of things on hosts that may shut themselves down automatically, actually doing so will cost you more developer and devops time instead of just paying a little more for hosting. It seems likely that this is time you want to spend anyway, as part of making an application more resilient against failure, but it still makes the situation yet more complicated, and complexity usually serves Amazon more than the customer. (And I have a hard time believing that databases are approaching a minority use case with current infrastructure trends. ;-)
Bandwidth pricing is the primary lock-in mechanism cloud providers have. It should be seen as anti-competitive.
I don’t understand what you mean. Are you saying bandwidth costs of migrating data to another cloud would be prohibitive? Or something else?
Personal example: I started to develop an application with AWS services (Lambda, SQS, EC2, S3). Later I changed it to an application for a “normal” server. I still wanted to store data to S3, but the cost to download it from there for analysis is just ridiculous. So the choice was to store to S3 and run on EC2 or not to store to S3. (I decided against S3).
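To put rough numbers on that asymmetry (approximate 2020 list prices for a typical region, purely for illustration):

    # Storing data in S3 is cheap; pulling it back out to the internet is not.
    STORAGE_PER_GB_MONTH = 0.023   # S3 Standard, ballpark
    EGRESS_PER_GB = 0.09           # transfer out to the internet, first tier

    dataset_gb = 500
    print(f"store:    ~${dataset_gb * STORAGE_PER_GB_MONTH:.2f}/month")    # ~$11.50
    print(f"download: ~${dataset_gb * EGRESS_PER_GB:.2f} per full pull")   # ~$45.00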
What I mean is that data transfers between services in the same cloud and region are much cheaper than data transfers between clouds. So it’s more expensive to store logs in AWS and analyze them with GCP, compared to just analyzing them in AWS. You can’t take advantage of the best tools in each cloud, but are forced to live in one cloud, creating a lock-in effect.
If there was a rule that bandwidth prices must be based on distance, not whether the endpoints are within the same cloud, we’d see more competition in cloud tools. Any startup could create a really good logs-analysis tool and be viable, for example. This rule runs into some legitimate issues though. For example, if a cloud provider has custom network hardware and fiber between their own data centers, the cost of moving data between their zones might be much cheaper than sending it over the public internet to another cloud provider. Moreover, many different cloud services are co-located in the same data center. So it’s much cheaper to analyze logs using a service that already exists where the data is than to ship it off to another cloud.
The problem is big cloud vendors have little incentive to let users take their data out to another cloud. It’s going to be a market where only a few big players have significant market share, at this rate.
Okay I see what you’re saying now. And when bandwidth costs encourage using more services in one cloud, you become more entrenched in and entangled with the services of that particular cloud, locking you in even more.
I agree completely on the bandwidth pricing. At this point, I think this should be considered a common public infrastructure, like roads etc. Yes, I understand that there are costs to providing it all, that some companies have invested in infrastructure privately, all I’m saying is that the traffic should be “free” for the consumers (and even for the business sector that the article OP is mentioning, companies hosting wordpress or timesheet or some similar small apps like that without major engineering teams).
Yep, it’s definitely true for existing apps. Converting a large stateful app to the new world is a nightmare, but you get so many benefits, not least that the problems of preemptibility and autoscaling are basically identical. The big Django app used to require 16 vCPUs to handle peak, so that’s how it was always deployed. Now it spends evenings and non-business days idling on a single t2.micro.
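One way to get that idle-outside-business-hours behaviour is scheduled scaling actions; a boto3 sketch where the group name, times and sizes are made up for illustration:

    # Scale a web tier down in the evening and back up in the morning (times in UTC).
    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="django-web",
        ScheduledActionName="evening-scale-down",
        Recurrence="0 19 * * MON-FRI",
        MinSize=1, MaxSize=1, DesiredCapacity=1,
    )
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="django-web",
        ScheduledActionName="morning-scale-up",
        Recurrence="0 7 * * MON-FRI",
        MinSize=2, MaxSize=8, DesiredCapacity=4,
    )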
In the case of a typical Django app though, if you’re already using RDS then the single most typical change is moving its media uploads to S3. It’s a 15 minute task to configure the plugin once you’ve got the hang of it, but yep, for a single dev making the transition for a single app, that probably just cost you a day
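For reference, the plugin being described is presumably django-storages; a minimal settings.py sketch (bucket name and region are placeholders):

    # Send Django media uploads to S3 via django-storages (pip install django-storages boto3).
    INSTALLED_APPS += ["storages"]

    DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
    AWS_STORAGE_BUCKET_NAME = "example-media-bucket"
    AWS_S3_REGION_NAME = "eu-central-1"
    AWS_QUERYSTRING_AUTH = False   # serve media via plain, unsigned URLs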
Thanks, I came here to say that. The article didn’t even factor in bandwidth/network costs, which matter for both EC2 and S3 (not as familiar with the other cloud providers).
Anecdotally, from friends who work in the AWS machine: once you get to real (financial) scale with AWS - think “7 digits a month” or so - you’ll find Amazon is extremely happy to negotiate those costs.
Fiber ain’t free, but I wager that the profit margin is probably highest there.
It is those weird units that prevent me, as an individual developer, from even considering them - when realistically it should be easy to understand the pricing on this sort of thing.
You definitely cannot compare AWS and OVH prices. The reason is that the price doesn’t reflect the VM spec only, but the whole ecosystem that comes with it.
OVH has a very clumsy API and no tooling to actually manage your infrastructure besides a web interface that changes all the time. The mere presence of “Email Pro” next to my VM management shows the difference in integrations between the two.
AWS is definitely more expensive, but it also has many more integrations and services (mentioned in the article, but dismissed as not a big thing). You rent a VM in AWS and you have key management, audit logs, managed databases (not only SQL and Redis, but also Elasticsearch, Mongo, Kafka, …), all the tooling to manage those (Terraform, Ansible, Chef, Puppet, awscli), and a much more granular VM size catalog, among other things!
Therefore, I think this comparison isn’t complete enough.
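That point about the granular size catalog is easy to check for yourself; a small boto3 sketch (region chosen arbitrarily) that dumps the EC2 size catalog:

    # List every EC2 instance type with its vCPU count and memory.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-central-1")
    for page in ec2.get_paginator("describe_instance_types").paginate():
        for it in page["InstanceTypes"]:
            vcpus = it["VCpuInfo"]["DefaultVCpus"]
            mem_gib = it["MemoryInfo"]["SizeInMiB"] / 1024
            print(f'{it["InstanceType"]}: {vcpus} vCPU, {mem_gib:.0f} GiB')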
I’m a very happy OVH customer but I can only agree with you. I would never use OVH for hosting something valuable.
Doesn’t invalidate your point about OVH experience not being great but at least there’s a terraform provider now:
https://www.terraform.io/docs/providers/ovh/index.html
Note: OVH customer for about 15y.
If you look at the modules list, you cannot even create a VM with Terraform…
Just look at the difference with AWS: https://www.terraform.io/docs/providers/aws/index.html
I’m also an OVH customer of ~10 years; the pricing is great, the customer support has varied greatly in terms of quality, but I wouldn’t use it for any business projects…
NB: This is my employer. My objectivity is shot, because I love working here :)
Thanks for posting this. Many people don’t take into account the many layers of managed service offerings above and beyond simple image/VM hosting.
If your use case is such that all you need is VM hosting, then such things are irrelevant, but for many companies, the economies of scale and cost savings you get from having your VM, your database, your Elasticsearch cluster, and your data transformation / warehousing / BI all managed for you can be quite substantial.
Maybe dumb, but… “this” & “here” – OVH || AWS??
From following through the profile link to github, it appears it’s AWS. But I also wasn’t certain from the post :)
Yes I work for AWS. Sorry for not being clear. I think I was a bit wary of our social media policy, but either I’ve broken it or I haven’t :) No sense dancing around.
Worth mentioning that Linode has had a history of severe security incidents, including two that gave attackers access to customer VPSs. I have not seen a detailed comparison that shows no other providers have had similar incidents, but I haven’t heard similar things about any of them.
I think one of those was a Xen vulnerability that affected some of the AWS data centers as well.
It doesn’t matter at the scale of 2 cores/8GB RAM/100GB disk, but if you start wanting more resources, colocating can be significantly cheaper than a VPS or cloud provider, and provide more stability (no noisy neighbors or strange networking problems). Would be interesting to see that cost comparison as well (amortized over some number of years).
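A back-of-the-envelope version of that comparison, with every figure an assumption to swap for your own quotes:

    # Amortized colo cost vs an equivalent rented machine (all numbers invented).
    server_cost = 6000        # one beefy box, purchased up front
    colo_per_month = 150      # rack space, power, bandwidth
    amortize_years = 4

    colo_monthly = server_cost / (amortize_years * 12) + colo_per_month
    rented_monthly = 800      # comparable dedicated/cloud capacity, list price

    print(f"colo:   ~${colo_monthly:.0f}/month")   # ~$275
    print(f"rented: ~${rented_monthly:.0f}/month")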
For static workloads, yes this is absolutely true. But if scale starts moving up or down, you’re stuck with the hardware you bought until you buy (and deliver and rack and configure) more.
People tell me this all the time, but when I ask people what their ratio of peak load to typical load is, it’s usually less than the premium that one pays for running in the cloud, unless you’re large enough to negotiate a contract where you get much better than the public rates (which many people do, but that makes it pretty hard to price-compare). Maybe there are workloads where you’re only using your hardware a very small percentage of the time (data analysis, etc) - that makes sense - but for a lot of companies with very seasonal traffic (ecommerce sites, etc), I’ve asked people about their peak load and it usually doesn’t seem to justify being in the cloud :/
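That rule of thumb can be written down directly; the numbers below are illustrative, not measurements:

    # Provisioning for peak on your own hardware beats cloud elasticity when the
    # cloud premium per unit of capacity exceeds your peak-to-typical load ratio.
    cloud_premium = 4.0      # cloud $/capacity divided by bare-metal $/capacity
    peak_to_typical = 3.0    # e.g. holiday traffic is 3x the usual load

    if peak_to_typical < cloud_premium:
        print("provisioning for peak on owned hardware is cheaper")
    else:
        print("elasticity pays off")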
I both agree and disagree with you. People I know that are at this scale (big enough to own hardware, but not one of the super-players) usually have a relatively steady load and keep resources in reserve, so that they can provision extra/new apps when the need arises and still have time to get new hardware. On the other hand, if they actually want to colo (because they need to be out there on the internet), their load is much more volatile and harder to predict, so they overprovision a lot.
Granted, this is based on personal experience, but I just wanted to bring it up.
How much cheaper is it than (eg) 3 year non-convertible reserved instances?
And people always undervalue time to delivery. I was in a position where we had to wait month(s) to get newer nodes in a cluster because the rack was full.
And having people around who can deal with both admin and hardware stuff.
Both are completely valid and it varies organization to organization. I’m an IT consultant and it’s always hit or miss on projects whether the company will have the spare capacity in their vSphere cluster to provision servers when the project demands it. One product I deploy needs four cores and 16GB of RAM dedicated, and you might be surprised at the number of times we put the entire project on hold for months because they need to purchase and install the extra capacity in their VM cluster. If they had a project that was a surprise smash hit and they hadn’t provisioned resources for it weeks in advance, they’re throwing money down the drain.
Some companies have the capacity to scale already (which may or may not be a waste of money like you said), while some can scale but it would take weeks or months. There are pros and cons to both approaches and it really depends on the company and product you’re talking to.
FWIW, if you include a comparison to OVH, everybody is going to look expensive by comparison because OVH provide rock bottom prices by having rock bottom support.
(I’m personally a big fan and would strongly recommend them for many use cases other than hosting highly available services.)
For personal stuff, I have a dedicated server from reliablesite.net with 32GB of RAM and 4 cores/8 threads at 3.6GHz (Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz). $54 a month.
It’s pretty sweet. All my stuff can tolerate some downtime, but it seems like you could go pretty far on 2 dedicated servers in different data centers. I think that’s basically how pinboard.in runs.
Maybe no one else needs a lot of RAM, but I’m surprised I rarely see anyone talking about anything but DigitalOcean, Vultr, Linode, etc for personal stuff. Or Hetzner, but I don’t want to deal with that latency.
I wouldn’t really call Linode and DigitalOcean “smaller providers”, they’re pretty big in terms of customer base and revenue.
Not mentioned in your article: AWS’ UI is beyond terrible.
Well..smaller is a relative term :)
GCP’s UI is perhaps worse! Middle click works almost nowhere, every page load is followed by a 5 second loading spinner and god help you if you have multiple organisations or whatever it was. Worst control panel I have ever used.
In my experience the layout/design of GCP is waaaay better than AWS. It may be visually slow, but at least I can find things. The last time I had to use AWS it took me 5+ minutes of clicking through pages to find what I was looking for. Almost nothing is named in a way that makes sense, and they seemingly hide important things (like billing) deep in the navigation tree.
I have the same experience as @ngp. Goog’s console might not be super fast, but I find it consistent and simple enough to use.
I would seriously encourage anyone, and especially anyone in this forum, to investigate either one of the many excellent APIs or, if you want a higher-level, lower-learning-curve solution, the AWS CLI.
The help on the CLI is excellent and bonus: You can script with it :)
(Again: I work there. I love it. Objectivity? Shot.)
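As a taste of what scripting against it looks like, here is the Python SDK (boto3) equivalent of "aws ec2 describe-instances"; the region is an arbitrary assumption:

    # Print every running instance and its type.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")
    resp = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for reservation in resp["Reservations"]:
        for inst in reservation["Instances"]:
            print(inst["InstanceId"], inst["InstanceType"])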
I must be missing something… At those prices, why don’t people simply use cheap vServers? I.e. I just migrated testbit.eu to a new vServer last month, 4 cores, 8GB RAM, 100GB SSD for 5€, a linux-vserver at strato.de.
Would be interested in this, too. I can see a few reasons, but it would be cool to know if they are true or if something else is the reason:
you can have longer-running contracts for your servers at AWS (e.g. 1 year) and then get a discount, but if you only need a simple server it is still more expensive than other options
our cheap examples seem to be Germany-based (Strato for you, Hetzner for me); maybe the US server market is more expensive
some developers might only know AWS/Google Cloud and just use that because everybody uses it
some people might need the scalability, and some things are easier with AWS/Google in that case (you can define which servers should be located in the same data center, and you get a private subnet automatically; with non-cloud vServers you have to set up the VPN yourself)
some people might need the scale-out to different regions in the world (but that’s also possible with smaller companies; e.g. even in the small country of Austria there is a hosting company that provides data centers all around the world)
some people might like the other services that are provided, like SQL databases etc. (usually priced at the EC2 instance price + an additional price for the service)
some people might use it for a part of their infrastructure just to find out how it works and whether it is really better
The reason I consider it worthwhile to use one of the big three cloud providers for a production deployment is the combination of managed services and multiple availability zones within a region. I can set up a managed database in a multi-AZ configuration and a load-balanced auto-scaling group of web servers (also across multiple AZs), then sleep well at night. Linode, DigitalOcean, Vultr, and the like just don’t have anything like that.
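For the record, a rough boto3 sketch of that kind of setup, where identifiers, sizes and credentials are placeholders and the launch template is assumed to already exist:

    # Multi-AZ managed Postgres plus an auto-scaled web tier spread across AZs.
    import boto3

    rds = boto3.client("rds")
    rds.create_db_instance(
        DBInstanceIdentifier="app-db",
        Engine="postgres",
        DBInstanceClass="db.t3.small",
        AllocatedStorage=20,
        MultiAZ=True,                      # synchronous standby in a second AZ
        MasterUsername="app",
        MasterUserPassword="change-me",
    )

    autoscaling = boto3.client("autoscaling")
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web",
        MinSize=2, MaxSize=6,
        LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
        AvailabilityZones=["eu-west-1a", "eu-west-1b"],
        # attaching a load balancer / target group is omitted here
    )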
Totally this, and another thing is companies with a large number of teams. Just give each project their own AWS (or equivalent) account and they can deal with their infrastructure, and your classic “corporate IT” is off the hook, both for maintaining the nightmare that those teams want running stably, and for providing bleeding-edge stuff that a regular “drone” doesn’t even know the meaning of.
From what I’ve seen most object stores other than the big 3 don’t actually replicate across data centers.
Yes, that feature only seems to be available from the big 3 - but even there it is not the default and you pay a little extra for it. Azure have a lot of options, but I’m not completely clear in my mind what the use case for this is.
I can help - if you say that you do not want to replicate, there is a special storage tier for that: S3 One Zone - Infrequent Access. With that, 1 TB in eu-central-1 (Frankfurt) costs you ~11 USD per month, which is as cheap as OVH from your table, and then you are comparing apples to apples.
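A quick sanity check of that figure, plus how you would actually select the tier when writing objects (bucket and key are placeholders; the per-GB price is an approximate list price, not a quote):

    # One Zone-IA: roughly $0.0108/GB-month in eu-central-1 at the time.
    import boto3

    print(f"1 TB: ~${1024 * 0.0108:.2f}/month")   # ~$11

    s3 = boto3.client("s3", region_name="eu-central-1")
    s3.put_object(
        Bucket="example-bucket",
        Key="logs/2020-01-01.gz",
        Body=b"...",
        StorageClass="ONEZONE_IA",   # single-AZ, infrequent-access tier
    )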
Not sure if anyone else has mentioned AWS Lightsail yet, but it’s the exact same price as DO/Linode. You’re comparing VPS instances, right?
Lightsail was something I was vaguely aware of but didn’t really know about, and so didn’t think to include. This omission has been pointed out by surprisingly few people (I think you are only the third across all the comments on various sites). I don’t think people consider Lightsail enough (I’m guilty of this too) and I plan to update the article to include it when I get a minute. The honest truth is that most people are still using EC2 (or equivalent, especially in-VPN equivalents). That’s certainly the case on GCP.
Yeah, I don’t disagree. I almost only hear of people using EC2, but I have personally used Lightsail and had good experiences. It’s not fully integrated into the AWS world like EC2, but it does offer a few managed databases and such. I suppose you can always wire it up to other AWS services yourself, but it might not be as simple as EC2.
In my opinion the Google Cloud Platform UI sucks; it takes so long to get a service up and running. I remember meeting someone who worked at Google who said the same thing: it’s new in the market and they are trying their best to make the services better. On the other hand, Amazon Web Services is really smooth and it’s really easy to get started with. I haven’t used other cloud services, but I’m sure each one has its pros and cons.
Has anyone tried OpenStack by Red Hat?
Isn’t OpenStack by Canonical?
I set up a small OpenStack instance this past summer and, while it was a royal pain to set up, it seemed fairly well thought out. The performance was not amazing but not outside the expected performance of a cloud platform. The user experience is about the same as every other cloud platform after everything is set up. One thing I never got completely sorted was how to set up metrics. There’s probably a service for it, but I couldn’t figure out how to get a dashboard with actual (as opposed to provisioned and unused) CPU and memory stats.
See above. It’s an independent open source meta-project contributed to by many large players, but its popularity is on the wane in favor of containers which are much easier to manage.
OpenStack is neither Red Hat’s nor Canonical’s. It’s an open source project backed by a number of large vendors, but, to be honest, every single time I talk to anyone who’s implemented it in their day job, they say the same thing: the initial build goes great but the upgrade story is a nightmare, and it’s actually a constellation of separate, largely independent projects with varying levels of contribution by developers, so the amount of polish varies.
The industry is trending towards container-based solutions, and Red Hat has a very nice container clustering solution for large-scale deployments called OpenShift.
No skin in this game since I don’t use any of them, just passing along what I hear from folks who do.
OP mentioned OVH which uses OpenStack: https://www.openstack.org/marketplace/public-clouds/ovh-group/ovh-public-cloud
Wikimedia runs OpenStack for its community: https://wikitech.wikimedia.org/wiki/Portal:Cloud_VPS
I’d recommend OpenStack-based clouds over proprietary ones, but I’m biased. That said, if one manages an OpenStack installation badly, the experience can be bad.
This thread mentioned containers (OpenShift is based on Kubernetes, which is for scheduling/orchestrating containers), which some people run on top of VMs (I have seen this with OpenShift on OpenStack Nova, which uses KVM, but also others). If you need VMs, in the long run I’d recommend the other way around: VMs in containers. Two interesting projects in that space: KubeVirt is more lightweight and runs KVM in Kubernetes; Airship is a way to install and update Kubernetes, which can then also run (among other things) OpenStack inside that Kubernetes.