1. 4

    I think the logic “if one person links to one site often, then it is likely spam” basically hits everyone who is actually creating content. Whether it is articles or open source, the above rule, or even just the notion, will do one thing only: centralise. And the more users use the same site, the weaker the above algorithm becomes.

    What probably makes more sense is to focus more strongly on the rating of users, especially with regard to the inverse relation degree.
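
    Roughly what I have in mind, as a toy sketch (the weighting and all names are made up, not a concrete proposal):

        from collections import defaultdict

        # Toy heuristic: instead of only counting how often one user links to one
        # site, weight that count against how many other, well-rated users link
        # to the same site.
        def link_spam_scores(links, user_rating):
            """links: list of (user, site) pairs; user_rating: dict user -> rating in [0, 1]."""
            per_user_site = defaultdict(int)
            site_users = defaultdict(set)
            for user, site in links:
                per_user_site[(user, site)] += 1
                site_users[site].add(user)

            scores = {}
            for (user, site), count in per_user_site.items():
                # Links to a site that other trusted users also link to look less spammy.
                trust = sum(user_rating.get(u, 0.0) for u in site_users[site] if u != user)
                scores[(user, site)] = count / (1.0 + trust)
            return scores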

    1. 2

      From my understanding, you have an environment with one large local network and want to separate it into many customer-private networks?

      Did you by chance also think about the opposite situation? I have servers at a non-cloud hosting provider. This means I see my servers as individual servers, not as a cluster, and when I communicate between them I consider myself to be in a hostile network. Thus, I want to combine them into one single private cluster. Currently, I am using WireGuard for this. Is there anything else I would have to consider in such a situation?

      I think this approach is also worth thinking about, because I guess that many companies might want to use a cluster of multiple servers, but do not need rapid scalability.
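
      For context, this is roughly what my setup boils down to: a small full-mesh WireGuard (wg-quick) config generated per host. All keys, endpoints and addresses below are placeholders:

          # Two servers, each a peer of the other in one private network.
          servers = {
              "db":  {"public_key": "<pubkey-db>",  "endpoint": "203.0.113.10:51820", "wg_ip": "10.10.0.1/32"},
              "web": {"public_key": "<pubkey-web>", "endpoint": "203.0.113.20:51820", "wg_ip": "10.10.0.2/32"},
          }

          def wg_config(name):
              lines = [
                  "[Interface]",
                  f"Address = {servers[name]['wg_ip']}",
                  "PrivateKey = <private-key-of-this-host>",
                  "ListenPort = 51820",
              ]
              for peer, cfg in servers.items():
                  if peer == name:
                      continue
                  lines += [
                      "",
                      "[Peer]",
                      f"PublicKey = {cfg['public_key']}",
                      f"Endpoint = {cfg['endpoint']}",
                      f"AllowedIPs = {cfg['wg_ip']}",
                      "PersistentKeepalive = 25",
                  ]
              return "\n".join(lines)

          print(wg_config("db"))   # contents for /etc/wireguard/wg0.conf on the "db" host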

      1. 1

        I’ve been doing this for several weeks now – between Digital Ocean droplets and Scaleway servers – by using https://github.com/slackhq/nebula

        It works well.

        1. 1

          WireGuard sounds like a good approach in general. What I wonder, though, is: what is your use case for putting multiple distributed servers into one network?

          1. 1

            They’re not really distributed; the ping between them is 3 ms. It’s no problem for me.

            The use case is very simple, because it’s only a hobby setup: One server runs the databases and all my services and the other runs all my websites. In fact, all of the websites could easily run on the big server, but the small server costs 3 Euro per month so I kept it. But I believe that some companies could have a similar setup as a real use case.

            Or think load balancing. Two or three servers do the same thing and could be in the same private network.

            1. 1

              Personally, I’d make sure that all servers have IPv6 capability and connect the services using TLS. This way you don’t even need to have them in a single network.
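
              For illustration, a minimal sketch of what I mean on the client side (hostname and port are placeholders; the service itself would terminate TLS or sit behind a TLS proxy):

                  import socket
                  import ssl

                  # Talk to a service on another server over its public (IPv6) address,
                  # encrypted and authenticated with TLS instead of a private overlay network.
                  context = ssl.create_default_context()
                  with socket.create_connection(("db.example.org", 8443)) as raw:
                      with context.wrap_socket(raw, server_hostname="db.example.org") as conn:
                          print("connected with", conn.version())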

        1. 1

          I am continuing the story and have just published the 2nd step, which describes how to secure the network: https://ungleich.ch/u/blog/how-to-build-an-openstack-alternative-step-2-secure-the-network/

          1. 2

            Thank you for this. I find the existing options float so high above the level I’m used to that none of them are suitable for my own use cases. It’s great to see more expansion in this area!

          1. 2

            I find the choice of using vxlan directly instead of something like openvswitch to be interesting.

            Overall, attempting to duplicate openstack from scratch is not something I would choose to do, and demonstrably did not do. There are synnefo/ganeti, cloudstack, proxmox and opennebula, for starters. Even if libvirtd is an issue, it’s possible to add a new hypervisor type to some, if not all, of those options.

            1. 3

              We are actually using opennebula at the moment. While generally speaking we are mostly happy (“it’s the least bad option”), opennebula has its drawbacks when it comes to native IPv6 support (delegating networks, routing, firewalls and co. are in the far future).

              Regarding openvswitch: I have not yet seen a practical advantage of using it over plain bridging.
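
              Roughly what plain bridging plus vxlan amounts to, as a sketch with pyroute2 (the interface names, VNI and multicast group are made up for illustration):

                  from pyroute2 import IPRoute

                  ip = IPRoute()
                  phys = ip.link_lookup(ifname="eth0")[0]      # underlay interface

                  # Create a vxlan device on the underlay and a plain Linux bridge,
                  # then enslave the vxlan device to the bridge.
                  ip.link("add", ifname="vxlan100", kind="vxlan",
                          vxlan_link=phys, vxlan_id=100, vxlan_group="239.1.1.100")
                  ip.link("add", ifname="br100", kind="bridge")

                  vx = ip.link_lookup(ifname="vxlan100")[0]
                  br = ip.link_lookup(ifname="br100")[0]
                  ip.link("set", index=vx, master=br, state="up")
                  ip.link("set", index=br, state="up")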

              And you have a good point, replacing something like openstack is not an easy task, and I have to confess something: we won’t support everything that openstack does. We won’t even natively support IPv4, but only add it as a second-class citizen.

              The idea of openstack being compatible with everyone, and its modularity, have always been very interesting to me, and initially I also thought it was a very good approach. However, seeing some openstack installations, and seeing that bigger teams (4+ people) are often required just to run openstack, was shocking to me.

              That said, let’s have a look at how far we get.

              1. 2

                Why isn’t it practical to add better IPv6 support to opennebula?

                1. 1

                  I was in touch with the authors of OpenNebula many years ago about this and other problems, and the flow went as follows:

                  me: X is not great in OpenNebula. What do you think about us providing a patch?
                  one (A): We can do that for you as a service!
                  one (A): It costs Y.
                  me (A): I think I could do it for Y/3.
                  one (B): Our users don’t need that - we don’t accept that.
                  me (B): But I can provide it.
                  one (B): No thanks.

                  (obviously simplified)

                  Another feature we miss is postgresql support, or fixing the nested XML database entries, which would be natively supported by many databases but are stored as a string instead. This is not only hard to debug; some important fields of a VM are also stored in this XML. So when you update them, you need to carefully edit XML that is stored in a database (opennebula has a tool, onedb, to access this by xpath, but it is still tricky).
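
                  To illustrate (a made-up sqlite table, not the real opennebula schema): every change to such a field means reading the XML blob, editing it and writing it back by hand.

                      import sqlite3
                      import xml.etree.ElementTree as ET

                      db = sqlite3.connect(":memory:")
                      db.execute("CREATE TABLE vm_pool (oid INTEGER PRIMARY KEY, body TEXT)")
                      db.execute("INSERT INTO vm_pool VALUES (1, ?)",
                                 ("<VM><ID>1</ID><TEMPLATE><MEMORY>2048</MEMORY></TEMPLATE></VM>",))

                      # Update one VM attribute that lives inside the XML string.
                      (body,) = db.execute("SELECT body FROM vm_pool WHERE oid = 1").fetchone()
                      root = ET.fromstring(body)
                      root.find("./TEMPLATE/MEMORY").text = "4096"
                      db.execute("UPDATE vm_pool SET body = ? WHERE oid = 1",
                                 (ET.tostring(root, encoding="unicode"),))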

                  So while opennebula is rather simple, its open source model is not fully an open source model when it comes to contributions.

                  1. 1

                    I don’t know your entire circumstances, but based on that statement alone, to me it sounds more like an ideological disagreement. Many, if not most, companies maintain local patches to upstream sources and find this cheaper and easier than starting from scratch.

                    What you’re talking about building at the beginning of the article isn’t what I would call an OpenStack alternative. We’ve already built what you claim are your requirements.

                    OpenStack has batteries included. I look at tools like ansible, packer, vagrant, terraform, and the amount of work in maintaining compatibility with all those tools seems fairly substantial.

            1. 7

              There is one really ugly thing that ruins IPv6 for end users and makes it worse than IPv4 for reaching one another directly without an intermediate party.

              It’s called DHCP-PD. There is, technically, nothing wrong with it; it’s just a protocol for telling the customer’s router what /64 network it should use. However, many ISPs treat it like dynamic IPv4 at its worst and force frequent prefix changes, even on business connections.

              With dynamic IPv4, you can use dynamic DNS and DNAT to keep things reachable at the same address. It’s an ugly and fragile but somewhat usable solution. If your very network changes every day, you can’t even reach a box right next to you unless you are using a router above consumer grade that can do DHCP with DDNS updates, or use mDNS etc.
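
              For what it’s worth, the DDNS update itself is not much; a rough sketch with dnspython, where the zone, TSIG key, nameserver and addresses are all placeholders:

                  import dns.query
                  import dns.tsigkeyring
                  import dns.update

                  # Push the host's current address into a zone we control, so its name
                  # keeps working even when the ISP rotates the prefix.
                  keyring = dns.tsigkeyring.from_text({"ddns-key.": "bWFkZS11cC1zZWNyZXQ="})
                  update = dns.update.Update("home.example.org", keyring=keyring)
                  update.replace("nas", 60, "AAAA", "2001:db8:1234:1::10")
                  dns.query.tcp(update, "192.0.2.53", timeout=5)   # the zone's primary nameserver

              But that still hinges on a nameserver somewhere with a fixed address.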

              If that kind of forced prefix rotation becomes the default, it’s the end of end user networking as we know it. Everything will be useless without a third party that has a fixed address.

              1. 2

                I completely agree with you. I was testing DHCP-PD in our data center and almost all open source implementations pretty much suck. At the moment our approach is to use a custom, REST-based API to dispatch static /64 networks to VPSes and static /48s for VPN customers.

                In theory, DHCP-PD could solve all of this, but by default there is no easy way to statically map a prefix to a customer.
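
                Conceptually it is just a static mapping from customer to prefix; a toy sketch of the idea (the pool and customer names are placeholders):

                    import ipaddress

                    # Carve static /64s for customers out of a /48 pool, and always hand
                    # the same /64 back to the same customer.
                    pool = ipaddress.ip_network("2001:db8:1234::/48").subnets(new_prefix=64)
                    assignments = {}

                    def prefix_for(customer):
                        if customer not in assignments:
                            assignments[customer] = next(pool)   # allocate once, then keep it
                        return assignments[customer]

                    print(prefix_for("vps-0042"))   # 2001:db8:1234::/64
                    print(prefix_for("vps-0042"))   # the same /64 again - the mapping stays static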

                Maybe it’s time to write a new RFC.

                1. 3

                  It’s not just about implementations. I don’t know if proprietary implementations suck less, but I do know that ISPs are often forcing a prefix change intentionally, to force customers to get a much more expensive connection with a statically allocated prefix if they want it to stop.

                  My fear is that even if good, easy to use implementations appear, ISPs will choose to make it a premium service that an average user will not want to pay for.

                  1. 2

                    …, but I do know that ISPs are often forcing a prefix change intentionally, to force customers to get a much more expensive connection with a statically allocated prefix if they want it to stop.

                    In a previous life, wearing a network sysadmin hat, broken equipment was the bane of all our lives. One example was DHCP clients that ignored the lease time, treating it either as infinity or, worse, as a single second.

                    IPv6 has, to some degree, offered packet pushers the opportunity of a greenfield deployment, and it is arguably Good Practice(tm) to have everything in flux from day zero to tickle out bugs.

                    Personally, I would not be so quick to point to malice where there are good practical reasons for the behaviour. After all, this is supposedly part of the whole infrastructure-as-code malarkey line of thought that is actively preached here.

                    Of course I understand that there are people who want to run a service from their home connection, but the majority do not. In the minority that do (game servers maybe, though I am probably showing my age here… xpilot w00t!), you likely need service discovery, which requires a central authority (DDNS) or, if you are hipster enough, a DHT, blockchain or IPFS.

                    Personally, I’m more upset that global IPv6 multicast (IIRC you get some usable space with every /48) is not available, along with all the amazing use cases (such as streaming your own radio/video) that it would bring.
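
                    What I mean is the RFC 3306 unicast-prefix-based group space: each unicast prefix gets its own slice of the global-scope range ff3e::/32. A quick sketch of deriving such a group, using a documentation prefix as a placeholder:

                        import ipaddress

                        # RFC 3306 layout: ff | flags=3 | scope=e (global) | reserved |
                        # prefix length | 64-bit unicast prefix | 32-bit group ID.
                        def global_multicast_group(prefix, group_id):
                            net = ipaddress.ip_network(prefix)
                            top64 = int(net.network_address) >> 64     # prefix, padded to 64 bits
                            addr = (0xff3e << 112) | (net.prefixlen << 96) | (top64 << 32) | group_id
                            return ipaddress.IPv6Address(addr)

                        print(global_multicast_group("2001:db8:1234::/48", 1))   # ff3e:30:2001:db8:1234::1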

                    1. 1

                      My ISP (which admittedly is like heaven on earth) hands out dynamic prefixes by default, but if you want a static prefix, all you need to do is send them an email or even a tweet.

                      They even offer reverse DNS delegation for free once you have your static prefix.

                      You still use DHCP-PD to ask for the prefixes for your subnets (they give you a /48), but the prefix remains static.

                      1. 1

                        Do you even think there’s a market for such a premium service any more? Something to motivate the ISPs?

                        When I got rid of the rack I had in the basement, I offered parts to the people I know who also have racks: “Do you want it, or any parts? Spare nuts?” None of them did: “I’ve also gotten rid of my rack and replaced it with colo servers”.

                        I can believe that ISPs force address changes intentionally, but I’m reluctant to believe the reason you suggest.

                    2. 1

                      I think we’re in a world without fixed addresses already. My primary internet access device has had two address changes so far today. One when I changed from WLAN to 4G, the second when I changed back to WLAN. Am I unusual?

                      That I can’t run servers on the connection that serves my WLAN is an annoyance, but one that has to be weighed against the effect of DHCP-PD plus privacy extensions on web clients. When I use Firefox Focus, there isn’t really any way to track me on the web. If I had a permanent address, or a permanent prefix shared with no one, avoiding tracking would not be within my power.