I find the choice of using vxlan directly instead of something like openvswitch to be interesting.
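For context, "using vxlan directly" can be as simple as a few iproute2 commands, no openvswitch required. A minimal sketch (VNI, multicast group, and interface names are all illustrative, and the commands need root):

```shell
# Create a VXLAN interface with VNI 100 over the physical NIC eth0,
# using a multicast group for peer discovery (all values illustrative).
ip link add vxlan100 type vxlan id 100 group 239.1.1.1 dev eth0 dstport 4789

# Attach it to a plain Linux bridge so local VMs can join the segment.
ip link add br100 type bridge
ip link set vxlan100 master br100
ip link set vxlan100 up
ip link set br100 up
```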
Overall, attempting to duplicate openstack from scratch is not something I would choose to do, and indeed have not done. There is Synnefo/Ganeti, cloudstack, proxmox, opennebula for starters. Even if libvirtd is an issue, it's possible to add a new hypervisor type to some, if not all, of those options.
We are actually using opennebula at the moment. While generally speaking we are mostly happy ("it's the least bad option"), opennebula has its drawbacks when it comes to native IPv6 support (delegating networks, routing, firewalls and co. are in the far future).
Regarding openvswitch: I did not yet see a practical advantage of using it versus plain bridging.
And you have a good point, replacing something like openstack is not an easy task, and I have to confess something: we won't support everything that openstack does. We won't even natively support IPv4; we'll only add it as a second-class citizen.
The idea of openstack's broad compatibility and its modularity has always been very interesting to me, and initially I also thought it was a very good approach. However, it was shocking to see some openstack installations and to learn that bigger teams (4+ people) are often required just to run openstack.
That said, let's see how far we get.
Why isn’t it practical to add better IPv6 support for opennebula?
I've been in touch with the authors of OpenNebula many years ago about this and other problems, and the flow looks as follows:
me: X is not great in OpenNebula. What do you think about us providing a patch?
one (A): We can do that for you as a service!
one (A): It costs Y.
me (A): I think I could do it for Y/3.
one (B): Our users don't need that - we don't accept that.
me (B): But I can provide it.
one (B): No thanks.
Other features we miss are postgresql support and proper handling of nested XML database entries, which many databases could store natively but which are stored as a string. This is not only hard to debug; some important fields of a VM are stored in this XML, so when you update them you need to carefully edit XML that is stored inside a database (opennebula has a tool, onedb, to access this by xpath, but it is still tricky).
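To illustrate why XML-in-a-string-column is awkward, here is a small self-contained sketch (the schema, field names, and values are made up for illustration, not OpenNebula's actual ones): changing even a single field means parsing, editing, and re-serialising the whole blob instead of a plain SQL UPDATE.

```python
# Hypothetical sketch of the pattern described above: a VM record whose
# body is an XML document stored as a plain text column.
import sqlite3
import xml.etree.ElementTree as ET

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE vm_pool (oid INTEGER PRIMARY KEY, body TEXT)")
db.execute(
    "INSERT INTO vm_pool VALUES (1, ?)",
    ("<VM><ID>1</ID><TEMPLATE><MEMORY>1024</MEMORY></TEMPLATE></VM>",),
)

# To change MEMORY we cannot use SQL directly: we must fetch the string,
# parse it into a tree, edit it, serialise it, and write the blob back.
(body,) = db.execute("SELECT body FROM vm_pool WHERE oid = 1").fetchone()
root = ET.fromstring(body)
root.find("./TEMPLATE/MEMORY").text = "2048"
db.execute(
    "UPDATE vm_pool SET body = ? WHERE oid = 1",
    (ET.tostring(root, encoding="unicode"),),
)

(new_body,) = db.execute("SELECT body FROM vm_pool WHERE oid = 1").fetchone()
print(new_body)
```

With native XML (or JSON) column support, the same update could be a single expression in the database, and the field would also be queryable and debuggable from SQL.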
So while opennebula is rather simple, its open source model does not fully embrace outside contributions.
I don’t know your entire circumstances, but based on that statement alone, to me it sounds more like an ideological disagreement. Many, if not most, companies maintain local patches to upstream sources and find this cheaper and easier than starting from scratch.
What you’re talking about building at the beginning of the article isn’t what I would call an OpenStack alternative. We’ve already built what you claim are your requirements.
OpenStack has batteries included. I look at tools like ansible, packer, vagrant, terraform, and the amount of work in maintaining compatibility with all those tools seems fairly substantial.
From my understanding you have an environment where you have one large local network and want to separate this into many customer-private networks?
Did you by chance also think about the opposite situation? I have servers at a non-cloud hosting provider, which means I see my servers as individual servers, not as a cluster, and when they communicate with each other I consider the network hostile. Thus, I want to combine them into one single private cluster. Currently I am using wireguard for this. Is there anything else I should consider in such a situation?
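For reference, a minimal wireguard setup for this "individual servers as one private cluster" pattern might look like the following (keys, addresses, and the endpoint hostname are all placeholders):

```ini
# /etc/wireguard/wg0.conf on server A (all keys and addresses are placeholders)
[Interface]
Address = 10.10.0.1/24        # private cluster address of this server
PrivateKey = <server-A-private-key>
ListenPort = 51820

[Peer]
# server B
PublicKey = <server-B-public-key>
Endpoint = server-b.example.org:51820
AllowedIPs = 10.10.0.2/32     # route only B's cluster address through the tunnel
PersistentKeepalive = 25
```

Each additional server is just another [Peer] block, so the whole cluster shares the 10.10.0.0/24 overlay while all traffic between the machines is encrypted.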
I think this approach is also worth thinking about, because I guess that many companies might want to use a cluster of multiple servers, but do not need rapid scalability.
I've been doing this for several weeks now - between Digital Ocean droplets and Scaleway servers - by using https://github.com/slackhq/nebula
It works well.
Wireguard sounds like a good approach in general. What I wonder, though, is: what is your use case for putting multiple distributed servers into one network?
They're not really distributed; ping between them is 3 ms, so latency is no problem for me.
The use case is very simple, because it’s only a hobby setup: One server runs the databases and all my services and the other runs all my websites. In fact, all of the websites could easily run on the big server, but the small server costs 3 Euro per month so I kept it. But I believe that some companies could have a similar setup as a real use case.
Or think load balancing. Two or three servers do the same thing and could be in the same private network.
Personally, I'd ensure that all servers have IPv6 connectivity and connect the services using TLS. That way you don't even need to have them in a single network.
I am continuing the story and have just published the 2nd step, which describes how to secure the network: https://ungleich.ch/u/blog/how-to-build-an-openstack-alternative-step-2-secure-the-network/
Thank you for this. I find the existing options float so high above the level I’m used to that none of them are suitable for my own use cases. It’s great to see more expansion in this area!