Cloud Run sounds cool I guess, and I might try it sometime. But honestly, I don’t see a problem with just getting a conventional server. I have a $5/month Digital Ocean server, and I run like 10 things on it. That’s the nice thing about a plain old Linux server, as long as none of your individual things takes up a ton of resources or gets too much traffic, you can fit quite a few of them on one cheap server.
Do you manage SSH certs for those 10 yourself? What happens when the services go down? What about logging?
It’s all running on 1 server, so there’s only one SSH key to manage. Well, one for every device I connect to it from, but that’s not that many, and there really isn’t anything to manage.
Everything is set up through systemd services. I wrote unit files for the services that didn’t already have them (nginx, Postgres, etc.). It’s perfectly capable of restarting things and bringing them back up if the server reboots. Everything that has logs is set up with logrotate and shipped to SumoLogic. I did set up a few alerts through there for services that I care about keeping running and that have been troublesome in the past. Also have some automatic database backups to S3. These are all one-off toy projects used pretty much only by me, and this level of management has proved sufficient and low-maintenance enough to keep them up to my satisfaction.
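A unit file for one of those services is only a few lines. Something like this, where the service name and paths are just placeholders:

    # /etc/systemd/system/myapp.service
    [Unit]
    Description=My toy side project
    After=network.target postgresql.service

    [Service]
    User=myapp
    ExecStart=/opt/myapp/bin/myapp
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Then systemctl enable --now myapp and systemd takes care of crash restarts and starting it again after a reboot.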
Of course, I would re-evaluate things and probably set up something dedicated and more repeatable if any of those services ever got a significant number of users, generated revenue, or otherwise merited it. There’s plenty of options for exactly how, and which one to use would depend on the details.
They said a single server, so yes, a single SSH key I’d imagine. Every major init system on Linux has service crash detection and restart, and there’s syslog (and, if you are feeling brave, GoAccess).
Assuming you meant SSH and mistyped cert instead of key: it’s one machine, so one key.
Assuming you meant SSL instead of SSH. I run everything in Docker Compose. I use this awesome community-maintained nginx image[1] that sets it up as a reverse proxy and automates getting Let’s Encrypt certificates for each domain I need with just a little config in the compose file.
From there I write a block in the nginx configuration for each service, add the service to my compose file, and voilà, it is done.
[1]https://docs.linuxserver.io/images/docker-letsencrypt
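A trimmed-down sketch of what that compose setup looks like - the service names are made up, and the exact environment variables are the ones documented for the linked image:

    version: "3"
    services:
      letsencrypt:
        image: linuxserver/letsencrypt
        ports:
          - "80:80"
          - "443:443"
        environment:
          - URL=example.com
          - SUBDOMAINS=www,app
          - EMAIL=me@example.com
          - VALIDATION=http
        volumes:
          - ./letsencrypt:/config    # nginx config and the issued certs end up here
      app:
        image: myapp:latest          # each app then gets its own server block under /config/nginx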
Good point, could have meant SSL certs. I use the Let’s Encrypt automated package. It’s quite good these days - it can set up your nginx config for you mostly correctly right off the bat, and it renews in place automatically. I just set up a cron job to run it once a week, pipe the logs to SumoLogic, and then forget about it. It worked fine automatically when I was serving multiple domains from the same nginx instance too, though I’m not doing that right now.
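Assuming the “automated package” here is certbot, the whole renewal setup is roughly one crontab line:

    # weekly renewal attempt; certbot only renews certs that are close to expiry
    0 4 * * 1 certbot renew --quiet --deploy-hook "systemctl reload nginx" >> /var/log/certbot-renew.log 2>&1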
Sorry, I did mean SSL certs. You are right about automating it and that’s what I would do for professional work. For a side-project, however, I prefer eliminating it completely and letting Google do it.
From there I write a block in the nginx configuration for each service, add the service to my compose file, and voilà, it is done
Can you share more details of your setup here?
I used this too, but then my provider sunset the hardware I was on, and migration was a nightmare because it’s easy to fall into bad patterns with this mode.
Admittedly it was over 10 years of cruft but still.
That did honestly kind of happen to me too. I had a server like that running with I think Ubuntu 14.04 LTS for quite a while. Eventually I decided it needed upgrading to a new server with 18.04 - security patches, old instance, etc. It was a bit of a pain figuring out the right way to do the same things on a much newer version. It only really took about a full day or so to get everything moved over and running though, and a good opportunity to upgrade a few other things that probably needed it and shut off things that weren’t worth the trouble.
I’d say it’s a pretty low price overall considering the number of things running, the flexibility to handle them any way I feel like, and the overall simplicity of 1 hosting service and 1 server instead of a dozen different hosting systems I’d probably be using if I didn’t have that flexibility.
I’m curious how you use GCS for persistence. Are you using it as a blob store, or something more structured?
I’ve been toying around with Cloud Run for a bit, but “free-tier persistence” is a problem I don’t have a great solution for yet.
I’m a heavy user of Cloud Run for side projects, and I constantly find myself wishing that Google Cloud would offer a free-tier for Google Cloud SQL. Something in the < 500MB range, along with some light CPU restrictions. I’m currently paying $10/month for a Cloud SQL Postgres instance that’s only using 128MB of storage, ~300MB of RAM, and 2% CPU.
I would love a free-tier of Google Cloud SQL as well :)
Maybe deploy a SQL server on a free Google Cloud VM (f1-micro) or an Oracle Cloud VM. But you’ll maybe have to manage it a bit yourself.
That’s the exact opposite of what I want to do.
Could you use Heroku Postgres and connect from Cloud Run?
https://devcenter.heroku.com/articles/connecting-to-heroku-postgres-databases-from-outside-of-heroku
Yup, and I’ve done this in the past, but Heroku doesn’t guarantee your connection details unless you get them from the heroku CLI. It’s so they can move your DB around as they see fit. You also don’t get notifications that the connection string has changed, so it’s not an ideal solution for an app running off of the Heroku platform.
I write JSON blobs. The latency is acceptable as long as you are accessing one blob per HTTP request. You can’t build a latency-sensitive system like a chat server on top of it for sure.
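For anyone curious, the read/modify/write cycle with the google-cloud-storage client looks roughly like this; the bucket and object names are made up:

    import json
    from google.cloud import storage  # pip install google-cloud-storage

    client = storage.Client()
    bucket = client.bucket("my-side-project-state")   # hypothetical bucket
    blob = bucket.blob("state/user-123.json")         # one blob per logical record

    # one GCS round trip per HTTP request keeps the latency tolerable
    state = json.loads(blob.download_as_text())
    state["counter"] = state.get("counter", 0) + 1
    blob.upload_from_string(json.dumps(state), content_type="application/json")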
As there are no VMs, I can’t SSH into the machine and make changes, which is excellent from a security perspective since there is no chance of someone compromising and running services on it.
What’s wrong with SSH? Extending this logic, if someone compromised your Google account, you’re toast. Just use passwordless login, an off-disk key with something like a Yubikey (password protected, of course), and disable root login.
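Concretely, that hardening is a few lines of sshd_config (plus keeping the private key on the Yubikey rather than on disk):

    # /etc/ssh/sshd_config
    PermitRootLogin no
    PasswordAuthentication no
    PubkeyAuthentication yes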
SSH can be plenty secure but no SSH is even more secure.
Until you invent a less-secure workaround for not having access to ssh.
They’re using the appliance model here. They build the appliance with no ability to log into it. It’s uploaded to run on Google’s service. When it’s time to fix or upgrade, a new one is built, the old one is thrown away, and the new one is put in its place. It’s more secure than SSH if Google’s side of it is more secure than SSH.
Now, that part may or may not be true. I do expect it’s true in most cases since random developers or admins are more likely to screw up remote security than Google’s people.
Uploading Docker images that can’t be SSHed into is, IMHO, much more secure.
If someone accesses my Google account, they can access my GCP account anyway. The advantage here is that my Google account is more protected, not just with 2-factor but also because Google is always on the lookout. For example, if I am logging in from the USA and suddenly there is a login from Russia, Google is more likely to block that or warn me about it. That’s not going to happen with a VM I am running in GCP.
None of that protects against vulnerabilities in the software though. For example, my WordPress installation was compromised and someone served malware through it. That attack vector goes away with Docker-container-based websites (attack vectors like SQL injection do remain, though, since the database is persistent).
I am a pentester by trade, and one of the things I like to do is keep non-scientific statistics and notes about each of my engagements, because I think they can help me point out some common misconceptions that are hard for people to compare in the real world (granted, these are generally large corporate entities, not little side projects).
Of that data, only about 4 times have I actually gotten to sensitive data or internal network access via SSH, and that was because they were configured for LDAP authentication and I conducted password sprays. On the other side of the coin, mismanagement of cloud keys that has led to the compromise of the entire cloud environment has occurred 15 times. The most common vectors are much more subtle, like Server Side Request Forgery that allows me to access the instance metadata service and gain access to deployment keys, developers accidentally publishing cloud keys to DockerHub or a public CI or in source code history, or logging headers containing transient keys. Key management in the cloud will never be able to have 2FA, and I think that’s the real weakness, not someone logging into your Google account.
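To make the SSRF-to-metadata vector concrete: on GCP, any bug that lets an attacker choose a URL the server will fetch can be pointed at the metadata service, which hands back a token for the instance’s service account. Roughly:

    curl -s -H "Metadata-Flavor: Google" \
      "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"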
Also, in my experience, actual log analysis from cloud environments does not actually get done (again, just my experience). The number of phone calls from angry sysadmins asking if I was the one who just logged into production SSH during an assessment, versus entire account takeovers in the cloud met with pure silence, is pretty jarring.
I often get the sense that just IP whitelisting SSH, or having a bastion server with only that service exposed and some sane network design, could go a long way.
Thanks for sharing this.
SSRF or SQL injection will remain a concern as long as it’s a web service, irrespective of Docker or a VM.
Logging headers containing transient keys - this again is a poor logging issue, which holds for both Docker and a VM.
I agree that key management in the cloud is hard. But I think you will have to deal with that on both Docker and a VM.
I often get the sense that just IP whitelisting SSH, or having a bastion server with only that service exposed and some sane network design, could go a long way
This won’t eliminate most issues like SQL injection or SSRF to a great extent. And IP whitelisting doesn’t work in the new world, especially when you are traveling and could be logging in from random IPs (unless you always log in through a VPN first).
You seem to be kind of missing my point; I’m not arguing Docker vs. VMs or even application security. The original comment was about SSH specifically, and I am making the argument that the corner cases for catastrophic failures with SSH tend to be around weak credentials or leaked keys, which are all decently well understood. Whereas in the cloud world, the things that can lead to catastrophic failure (sometimes not even your own mistakes) are much, much more unknown, subtle, and platform specific. The default assumption of SSH being worse than cloud-native management is not one I agree with, especially for personal projects.
IP whitelisting doesn’t work in the new world especially when you are traveling and could be logging in from random IPs
For some reason I hear this a lot, and I seriously wonder: do you not think that’s how it’s always been? There’s a reason that some of the earliest RFCs for IPv6 address the fact that mobility is an issue. I’m not necessarily advocating this in personal-project territory, but this is the whole point of designing your network with bastion hosts. That way you can authenticate to that one location with very strict rules, logging, and security policies, and then also not have SSH exposed on your other services.
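In ssh_config terms the bastion pattern is only a few lines; the host names here are placeholders:

    # ~/.ssh/config
    Host bastion
        HostName bastion.example.com   # the only host with port 22 exposed
        User deploy

    Host 10.0.*.*
        ProxyJump bastion              # internal hosts are only reachable through the bastion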
All fair points.
I used to deploy everything to Heroku or, sometimes, Now.sh for static page apps. Heroku has a great out-of-the-box experience and even has the ability to deploy a Docker container, but they don’t offer SSL for free dynos.
I recently moved all my projects to a k8s cluster; both DO and Linode have great prices for a hobby-tier k8s cluster ($10/mo for a single-node cluster - 1 CPU core, 2GB RAM, and 50GB storage). I got the same ability to deploy Docker images and also got free, managed, automatic SSL.
It was a great experience.
I know basic k8s. Nothing against it, but I felt it a bit overwhelming. Maybe once I get a better hold of it, I will jump onto it.
Yeah, it seems to be overwhelming, especially with the load of documentation, but the way I use it is really limited to creating a deployment, exposing the service, and sometimes restarting it, just that :D The good point I found is that we can actually control how many resources we want to allocate to each application running in k8s.
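That day-to-day usage really is just a handful of kubectl commands; the image and names below are placeholders:

    kubectl create deployment myapp --image=registry.example.com/myapp:latest
    kubectl expose deployment myapp --port=80 --target-port=8080
    kubectl rollout restart deployment/myapp
    # and the resource-allocation part:
    kubectl set resources deployment myapp --requests=cpu=100m,memory=128Mi --limits=cpu=250m,memory=256Mi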
You can control the resource allocation in Google Cloud Run as well.
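On Cloud Run the equivalent is a couple of flags on deploy (the service and image names here are made up, and flag availability can vary):

    gcloud run deploy my-service \
      --image gcr.io/my-project/my-service \
      --memory 256Mi \
      --cpu 1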
Nice writeup.
I started using Cloud Run after they announced it in alpha, for a couple of toy services that were previously in App Engine.
I updated them to use Cloud Build too, so you can avoid that manual gcloud deploy step: https://github.com/jamesog/whatthemac/blob/master/cloudbuild.yaml
Nice. I have used Cloud Build before and I think it’s a great idea if the builds are going to take a lot of resources. Personally, I still try to manually test via make docker_run before deploying an image, so building locally works. I am sure, though, that at some point I will migrate to Cloud Build as well.
Interesting setup. How can they deploy a Docker container as “serverless”? Will they need to keep the container on standby in case someone uses it? If so, wouldn’t that affect load times?
The container can be booted fairly quickly, but certainly not as quick as an always-running service. For example I just hit one of my Cloud Run endpoints, which I assume was asleep, and it took 200ms to respond to the initial request. Subsequent requests were served in about 80ms.
Exactly. I think the latency could be a problem if you are trying to run a full-fledged money-making project but latency isn’t an issue for side-projects.
And I guess that’s why a small container is even more important.
For tiny projects, I’m super happy with Dokku; it’s like a poor man’s Heroku. If you don’t need bells and whistles, installing it on a DO or Vultr instance is super easy, and it’s just set up as a git remote that you push to, and a buildpack runs and deploys your code. There are buildpacks for a lot of languages and you can add your own, too. And a ton of plugins for databases and so on.
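The whole Dokku flow, once it’s installed, is roughly this (the host and app names are placeholders):

    # on the server, one time:
    dokku apps:create myapp
    # locally:
    git remote add dokku dokku@my-droplet.example.com:myapp
    git push dokku master      # the push triggers the buildpack build and deploy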
Cloudflare Workers are another thing that looks fairly interesting for this kind of purpose - effectively server-side service workers, WASM and all, with 100,000 free hits a day.
Yeah, I considered that as well. I fear a lock-in similar to AWS Lambda here. Google Cloud Run gives full portability since I can move the Docker container elsewhere (say to K8s) as well.
Understandable, although it feels like the compute portion is not really the concerning bit (Docker is “portable”, true service workers are “portable”). What usually isn’t portable are the storage interfaces if you care about persistence at all.
That’s true. You are right that they are only partially portable. In principle, I can move the code over but keep using the storage API from Google, but that can be expensive.
However, if I am moving within the Google (or AWS) services, then Google Cloud Run allows me to deploy something as a side-project and then upgrade it to a full-fledged K8s- or VM-based setup in the future if I desire to.