Honestly I don’t understand this at all. It seems like the point of using Docker is to have a single image that is your unit of deployment but like … you already have that with an uberjar. Your deploy is one file.
With an uberjar you have to ensure you’ve got a JVM installed on the system before you can deploy but with a docker image you have to ensure you’ve got docker installed, so … what have you gained, exactly? Is it just about uniformity and using tools that are easier to hire for because they also work with non-JVM deployments?
Both options are certainly viable. If one works for you, it works.
At 200ok, we deploy our Clojure-based microservices with Docker for relatively easy scaling, logging, and monitoring. “Relatively easy” because it works the same way in development and in production, as well as for services written in other languages.
An uberjar still requires a JVM on the underlying host. It may just be me, but I think the JVM sees more churn that an app cares about (GC, versions, features, flags) than a Docker environment does. So the point of shoving an uberjar into a container is that you can tune your app without redeploying the underlying server.
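As a concrete sketch of that tuning point (image name and flags here are just illustrative assumptions): the JVM reads `JAVA_TOOL_OPTIONS` from the environment, so you can retune GC or heap settings per deployment without rebuilding the image or touching the host.

```shell
# Hypothetical image name; the flags are examples, not recommendations.
# Same image, different JVM tuning, no changes to the underlying server:
docker run -e JAVA_TOOL_OPTIONS="-Xmx512m -XX:+UseG1GC" myapp:1.0
```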
Based on my usage of Docker at work (not my own choice), I see more churn in Docker than in the JVM, but that’s because staying on an older version of the JVM is an option, and as far as I can tell, staying on an older version of Docker isn’t (or at least isn’t an option for me personally, given the decisions other people in my org are making for me). The version of the JVM we use is backwards-compatible with the same stuff I’ve been using since I first started with the JVM in 2008, though the newer ones don’t have the same guarantees. (That’s why I don’t use them, for now anyway.)
You’re right that an uberjar effectively plays the same role as a container: one artifact, one unit of deployment. However, Docker has become the standard way to manage containers in production, and packaging your apps the same way you package everything else makes sense in that context.
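For what it’s worth, wrapping an uberjar in an image is only a few lines. A minimal sketch, assuming a prebuilt jar at `target/app-standalone.jar` (path and base-image tag are illustrative, not prescriptive):

```dockerfile
# Base image that bundles a JRE, so the host only needs Docker, not Java.
FROM eclipse-temurin:17-jre
WORKDIR /app
# Copy the prebuilt uberjar into the image (hypothetical path).
COPY target/app-standalone.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Then `docker build -t myapp .` and `docker run myapp` deploy it the same way as any non-JVM service.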