Hey, that was easy.
This was down-voted, but I think your assessment is spot-on.
As always, it depends. I run some Go applications, e.g. Gogs, inside containers to isolate the process from the rest of the system. The process could also be isolated by restricting its permissions through a systemd service configuration, but a Docker container is nice for portability.
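To illustrate the systemd alternative I mean, here's a sketch of a unit file with sandboxing directives (the paths and service name are made up for the example; a real Gogs setup would differ):

```ini
# Hypothetical /etc/systemd/system/gogs.service — a sketch, not a drop-in config
[Unit]
Description=Gogs self-hosted git service
After=network.target

[Service]
User=gogs
ExecStart=/usr/local/bin/gogs web
# Restrict what the process can see and do:
ProtectSystem=strict          # most of the filesystem becomes read-only
ProtectHome=true              # no access to /home
PrivateTmp=true               # private /tmp namespace
NoNewPrivileges=true          # no privilege escalation via setuid binaries
ReadWritePaths=/var/lib/gogs  # the one place it may write

[Install]
WantedBy=multi-user.target
```

You get much of the isolation a container gives you, but the artifact isn't portable the way an image is.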
Containerization is also required if you want to deploy your application to something like Kubernetes. In that case I compile my Go code without C extensions (CGO_ENABLED=0), so the statically linked binary has no glibc dependency and runs fine on a lightweight musl-based image like Alpine.
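Roughly what that looks like as a multi-stage Dockerfile (a sketch; the image tags and output path are illustrative):

```dockerfile
# Build stage: compile a static binary
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# CGO_ENABLED=0 disables cgo, so the binary doesn't link against libc
RUN CGO_ENABLED=0 go build -o /app .

# Runtime stage: small musl-based image
FROM alpine:3.19
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

Since the binary is fully static, even `FROM scratch` would work; Alpine just gives you a shell and package manager for debugging.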
[Comment removed by author]
What are you running that properly handles blue-green deployment for you? Kubernetes? I didn’t realize they had support for that out of the box.
Only if you use OpenShift, apparently.
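Right — vanilla Kubernetes Deployments give you rolling updates out of the box, not blue-green. A common way to approximate it is to run two Deployments side by side and flip the Service selector to cut traffic over; a sketch with made-up names and labels:

```yaml
# Hypothetical Service fronting "blue" and "green" Deployments of the same app
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    slot: blue    # change to "green" to switch all traffic at once
  ports:
    - port: 80
      targetPort: 8080
```

Each Deployment carries the matching `slot` label; the cutover is one `kubectl apply` (or patch) on the Service.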
For the sorts of workloads I run, the answer has historically been no. I think containers have value for heterogeneous and/or multitenant workloads. They can also be valuable for very finicky builds (think Ruby's nokogiri and its libxml dependency).
But for most of the Go apps I've worked with, it's pretty common to have one or two static binaries running on a box, as separate users, using all the resources of the box. Memory and CPU sharing hasn't historically been an issue.