Threads for Teknikal_Domain

    1. 1

      I’m going to catch a lot of flak for this, but uh… Hi. I wrote this. And yes, a rant made in the middle of the night after dealing with some seriously headache-inducing crap might not be 100% accurate, but I’m giving it an update Soon™ the next time I push a content update.

    2. 5

      One point I have to disagree with is the complaint about NAT. I know very little about networks, but I do know NAT is the bane of end-to-end connectivity, and is naturally hostile to peer-to-peer communications. There’s still hole punching, but that requires a server somewhere. Without that, people must configure their router themselves, with no help from the software vendor (the router is independent of the computer or phone it routes for).

      • Want to hide the layout of your network? Just randomise your local addresses.
      • Want to hide the existence of part of your network? Use a firewall.
      • Want to drop incoming traffic by default? Use a firewall, dammit.

      We could even have a “NAT” that doesn’t translate anything, but opens ports the way an IPv4 NAT would. Same “security” as before, and roughly the same disadvantages (can’t easily get through without the user explicitly configuring allowances).

      That said, don’t we have protocols that let a computer on the home network talk to the router and ask it to open up some port on a permanent basis? I just want to start my game, advertise my IP on the match server, and wait for incoming players. Or set up a chat room without using any server, because I too care about my privacy.
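
      (I believe UPnP IGD and NAT-PMP/PCP are exactly those protocols, when the router has them enabled. A sketch using miniupnpc’s and libnatpmp’s CLI tools — addresses, ports, and lease time all hypothetical:)

          # ask the router to map external TCP port 25565 to this host
          upnpc -a 192.168.1.50 25565 25565 TCP
          # roughly the NAT-PMP equivalent, with a one-hour lease
          natpmpc -a 25565 25565 tcp 3600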

      1. 10

        The NAT section absolutely reads like this person doesn’t understand firewalls.

        Every computer just by default accessible anywhere in the world unless you specifically firewall things?

        Where in the world is there NAT without a stateful firewall denying all incoming connections by default? Remove the NAT (because IPv6) and you’d be left with a firewall. Where’s the issue? Maybe they think you need to configure the firewall on each computer specifically? That’s what this seems to imply:

        the kind of devices that do actively use IPv6 (mobile devices, mainly), are able to just zeroconf themselves perfectly, which is nice from a “just works” perspective

        1. 2

          NATs do provide some obfuscation of addresses and can make it more difficult for an attacker to reach a device directly (ignoring, of course, intentionally forwarded ports…)

          1. 5

            I’m not convinced that this is true. Most IPv6 stacks (including the Windows one) support generating a new IPv6 address periodically. The stack holds on to the old address until all connections from it are closed. IPv6 lets you take this to extremes and generate a new address for every outbound connection. It’s fairly easy to have a stable IPv6 address that you completely firewall off from the outside and use for connections from the local network, and pick a new address for every new outbound connection. If someone wants to map your internal network, they will never see the address that you listen on, and they can’t easily tell whether two connections are from the same machine or different ones.
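
            (On Linux, for instance, RFC 4941 temporary addresses come down to a pair of sysctl knobs — a minimal sketch, path hypothetical; the value 2 means “generate temporary addresses and prefer them for outbound connections”:)

                # /etc/sysctl.d/90-ipv6-privacy.conf
                net.ipv6.conf.all.use_tempaddr = 2
                net.ipv6.conf.default.use_tempaddr = 2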

            In contrast, IPv4 NAT relies on heuristics and is actively attacked by benign protocols such as STUN, so implementations often have vulnerabilities (there was one highlighted on this site this week).

          2. 3

            No: I receive SSH brute-force attempts on my LAN from packets with private IPv4 addresses coming from outside, joyfully passing through my ISP router from WAN to LAN. Firewalling, in IPv4 and IPv6 alike, prevents that; NAT alone does not.

            1. 2

              Also yes: with NAT, even a misconfigured firewall can be saved by the fact that upstream routers won’t route the devices’ individual (private) addresses to it.

              Also TLS does have session resumption cookies, and maybe rotating the IPv6 address at every request is possible? Not practical though…

              Good point: How would we implement privacy-focused VPNs with IPv6? The Big Bad NAT?

              1. 1

                Also TLS does have session resumption cookies, and maybe rotating the IPv6 address at every request is possible? Not practical though…

                From what I’ve researched, it’s possible, assuming the server supports them, which a number do not (because a lot of mechanisms like that, in the interest of accessibility or speed, compromise security).

        2. 1

          Where in the world is there NAT without a stateful firewall denying all incoming connections by default?

          Ideally, nowhere. In the world I live in? I’ve seen a good handful of people order their firewall incorrectly and end up placing an ALLOW ALL rule first in the sequence, meaning they have effectively no firewall (see the sketch below).

          With IPv4, accidentally leaving your firewall wide open like that, assuming you run a network fully behind NAT, would lead to no real issue, since any port not given a NAT rule has no actual destination to pass it to.

          For the record: I am in no way saying NAT alone should be your security policy. But looking at the design documents, their conclusion seems to be that if your firewall dies for some reason, NAT at least still plays traffic cop (well, maybe more like traffic controller).
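
          (A hypothetical iptables ruleset illustrating the mistake — rules match first-to-last, so the early ACCEPT shadows everything after it:)

              iptables -A INPUT -j ACCEPT        # oops: matches everything, first
              iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
              iptables -A INPUT -p tcp --dport 22 -j DROP   # never reached
              iptables -A INPUT -j DROP                     # never reached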

      2. 1

        NAT is the bane of P2P, and yes, I do indeed understand why it’s not considered part of IPv6.

        However, the ability to have, at current count, 6 different hosts all accessible under the same IP with just a port number to deal with is nice - I don’t need to remember multiple addresses, certainly not stupidly long ones. I just know that anywhere in the world I can type in 96.94.238.189 and that’s the only number I need to memorize. And as far as my research has led me, that’s just not possible in IPv6.

          1. 1

            Of which, 90% of my network traffic is routed through HAProxy. However, HAProxy is not the end-all be-all here; some things it just can’t do:

            • SSH, which, at least in OpenSSH, supports neither the PROXY protocol nor host-based routing. I currently need 3 separate ports NATted through to deal with the latter. We’ll get to the former in just a moment.
            • Mail, be it SMTP, POP3, or IMAP, but especially SMTP, where the connecting IP address is massively important (see also: SPF). And, again, Postfix, which is what I’m using as an MTA, does not support the PROXY protocol either, to the best of my knowledge.
            • Very long-running connections, like IRC. Yes, HAProxy can handle these (and no, I don’t mean logging - option logasap is a thing), but it really doesn’t seem like HAProxy was meant to keep track of connections like that, especially when, once again, my ircd, UnrealIRCd, pays attention to the connecting IP. (There is discussion about allowing PROXY support; I believe it’s experimental with WebIRC blocks, but for the entire server, it’s not supported.)
            • Anything with enough security reasoning to solidly slap fail2ban on, such as… SMTP and SSH. Fail2ban doesn’t understand PROXY (though, admittedly, you only need the application service to understand it and log the real IP). Either way, if I’m routing connections through another machine, fail2ban is useless without a fair bit of configuration to make it work cross-machine - something I wrote an entire service for, just to allow my Apache instances to correctly ban IPs at the HAProxy level.

            As much as HAProxy is an amazing piece of kit that is very functional and flexible, not everything expects, or exactly allows, arbitrary reverse proxies without a lot of fiddling.
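
            (For what it’s worth, for the services that do speak PROXY, the HAProxy side is a one-word flag on the server line — a sketch with hypothetical names and addresses; it only helps if the backend actually understands the header:)

                backend ircd
                    mode tcp
                    # send-proxy prepends the PROXY header so the backend
                    # sees the real client IP
                    server unreal 10.0.0.12:6697 send-proxy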

        1. 1

          Can’t you just use the same prefix for all IPv6 addresses? I imagine something like:

          aaaa:bbbb:cccc:dddd::1
          aaaa:bbbb:cccc:dddd::2
          aaaa:bbbb:cccc:dddd::3
          aaaa:bbbb:cccc:dddd::4
          

          The first part may still be a pain if you have to remember it and type it by hand, but the rest doesn’t sound that bad…

          1. 1

            In theory, yes. Also in theory, you should never have to type an IPv6 address anyway, because DNS (let’s just forget that sometimes, when I’m configuring a fresh host on the network, it has no DNS).

            And unless you set up network prefix translation for a chosen prefix inside the private space, you’ll be dealing with something out of your control. For example, I just disconnected my phone from Wi-Fi. It now has a public IPv6 address of 2600:387:c:5113::129. And it’s a lot easier to memorize 24 bits of decimal than 64 bits of hex. Heck, if you run a /24 inside the standard 192.168 prefix range for IPv4, you only really have to remember two numbers: your chosen prefix (in my case, 5), and the host part of the IP you want to reach (say, 158). So I can mentally note that the pair (5, 158) is, say, the new container I just brought up, and I can probably hammer out 192.168.5.158 into my browser’s address bar before I’ve even fully recalled that.

            IPv6, however, would likely cause me to have to memorize an entire address, or keep going back to my trusty ip a command to copy it. Something like 2600:387:c:5113 as a prefix isn’t something I can really compact, the way I can compact 192.168.5 to 5. And being much longer, it’ll take many more repetitions before recalling it becomes automatic, whereas with IPv4 I only need to keep the host portion in memory.

            God forbid any part of that address changes on you, though. Hopefully dynamic IP assignments (“from an ISP” dynamic, not “from DHCP” dynamic) won’t be a thing in IPv6 the way they are in IPv4.

    3. 6

      I wish the author made a better technical case here. I mean, I feel their pain to an extent. I run my home network with native IPv6, and yet I sometimes still build infrastructure in IPv4 space simply because it is much easier to reason about. But IMO that doesn’t mean that IPv6 is bad or that we shouldn’t make the effort, because it seems to me that there are some pretty clear advantages once you bite the bullet and figure things out.

      1. 1

        Honestly, IPv6 as a concept is beautiful. The current implementation(s) of it in most devices I handle on a day-to-day basis make me think a few things are lacking.

        1. 1

          I’ve only really experienced it on Linux, Windows and OSX, and from a user’s perspective at that, not a hard core sysadmin.

          What warts do you see in which implementations?

          1. 2
            • Inconsistent preference for v4 vs. v6. A lot of things now default to the loopback address being ::1, not 127.0.0.1, meaning I’ve been bitten more than once by, say, an allow rule in a config for 127.0.0.1 still getting denied because the actual connection came from ::1 instead. And (a complaint likely obsolete by now) way back when, I was dealing with some large Java app (I want to say Graylog?) that refused everything IPv4, because the OS had a v6 stack and it was binding to its ports on v6 only. It took a lot of JVM command-line flags to get it to respond on the address I assigned it (see the snippet at the end of this comment).
            • Host separation. A thing I accomplish with VLANs now, but before I learned them (more accurately, when the switch I had installed was so basic it didn’t support them), I had a lot of firewall rules in place between certain hosts to limit accessibility, so that if one got compromised, it couldn’t reach any hosts it didn’t need to. IPv6 broke this, because, after enough listening, you’d eventually find a “broadcast” (airquotes) v6 packet, and, surprise! I never configured IPv6 (SLAAC handled it), so I never firewalled it… my entire set of security measures at the time became useless. And for the record, do not do that.
            • Windows, when it assigns the same link-local IP to more than one adapter, differentiates them with an index number. Linux uses the symbolic device name. I know the difference between eth0 and lo, but not 4 and 5 immediately.

            There’s a few others, but those are some examples… yeah. A number of companies that I know of (that I’m not affiliated with, but other family members are) have flat-out disabled IPv6 throughout their network because device intercompatibility is just so wack, especially in a space with a lot of vendored equipment. It’s bad enough that each different piece of kit has its own quirks for one protocol version, let alone another that is, comparatively, in their eyes, a large, unlearnable behemoth.
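
            (For reference, the canonical knob here is a real JVM system property, though the exact full set of flags it took back then is lost to time — jar name hypothetical:)

                # force the JVM to bind via the IPv4 stack
                java -Djava.net.preferIPv4Stack=true -jar graylog.jar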

    4. 16

      Most of this is just empty and ill-informed ranting about protocol differences.
      Having said that, there’s one portion that I want to respond to.

      The Address Resolution Protocol is the protocol used in IPv4 to translate internet layer addresses (IP addresses) into link layer addresses (MAC addresses).

      This protocol relies heavily on broadcast. Well wouldn’t you know it, IPv6 has no broadcast. There’s an “all nodes” link-local multicast which does effectively the same thing, but it’s not the same thing.

      But besides that there’s really nothing to it, other than the fact that we had to create an entirely new protocol to do the exact same job as an already existing protocol because someone removed broadcast, the second most common type of traffic.

      ARP uses broadcasts because it was defined in 1982.
      Many older protocols use broadcast instead of multicast because either multicast didn’t exist or wasn’t reliable.
      If the ARP protocol had been designed last week then it would use a multicast Ethernet address.

      1. 3

        NDP is not a brand-new protocol per se, as the protocol is ICMPv6. This is far more elegant than a special non-IP protocol to manage IP neighbors. Moreover, NDP also provides router discovery.

        1. 1

          ARP was, and, according to protocol, still is, able to resolve more than just IP network addresses. I don’t think that’s a fault of IP or ARP, that’s just the age of the original specifications showing.

          1. 3

            Sure, but this is an additional protocol, while ICMPv6 bundles neighbor discovery, router discovery, diagnostics, and error reporting in a single protocol with clear semantics. Why keep ARP when you already have a protocol in the stack able to do the job? Also, because of the non-IP nature of ARP, some tools just don’t work with it (e.g. iptables vs. arptables, until they were unified by nftables).
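
            (You can see the bundling right on the wire — Neighbor Solicitation and Advertisement are just ICMPv6 types 135 and 136, filterable with standard IP tooling; the type byte sits immediately after the fixed 40-byte IPv6 header. Interface name hypothetical:)

                tcpdump -i eth0 'icmp6 and (ip6[40] == 135 or ip6[40] == 136)'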

            1. 1

              Now I know everyone is just going to call this an issue that only I have, but I see a description like that for ICMPv6 and I think “is this doing too much?”

              It’s nice to have one protocol that bundles everything, but generally (generally - there are some exceptions), the more you bundle into one unit, the less effective it is at each of those parts. And I don’t want to see ICMPv6 have so much bundled up that we eventually split off separate protocols that do its job better - basically going back to square one.

              1. 1

                Are we bundling too much stuff in UDP or in TCP? TCP is a reliable connection-oriented stream protocol with application multiplexing, UDP is an unreliable connection-less message-oriented protocol with application multiplexing, ICMP is a simple control protocol for network signalling (no reliability, no streaming, no application multiplexing). It’s the right tool to implement neighbor discovery. Separate protocols are a burden for all the implementations.

                As for “so much bundled up”, it’s not like much has been added to ICMPv6 in 30 years: https://www.iana.org/assignments/icmpv6-parameters/icmpv6-parameters.xhtml.

                Also, you complain that NDP is a brand-new protocol; then, when I say it is ICMPv6, you complain that it is not a separate protocol.

                1. 1

                  I’m not saying what it is now, I’m saying what it could become with much more widespread adoption. Most things don’t get innovated on and changed when they’re barely deployed.

      2. 3

        ARP uses broadcasts because it was defined in 1982.

        If you read The Alto: A Personal Computer, it talks about the development of Ethernet and says, somewhat tellingly (quoting from memory, possibly slightly wrong), ‘It’s possible to imagine a network with as many as tens of thousands of computers’. It’s easy to forget, in a world where we’re struggling with four billion addresses not being sufficient for a single network, that some of the core technologies were built on the assumption that a 16-bit address was probably sufficient for pretty much anything, and that designing in more than that was providing massive amounts of headroom for future expansion.

        1. 2

          And yet, when Ethernet, a local area network, was standardised in the early 80s, it had a 48-bit address space, whereas TCP/IP, a wide-area network, only had 32 bits to address the entire world.

          We’re running the entire world on a proof-of-concept stack.

          1. 1

            We’re running the entire world on a proof-of-concept stack.

            I do not disagree in the slightest.

      3. 1

        My point was this: Why remove broadcast, then re-work everything to use an “everyone” multicast group, which is just broadcast with extra steps?

        Functionally, there is no real difference between “broadcast” and “all nodes” multicast. There’s some benefit to having only two addressing modes to deal with instead of three, but at the end of the day, we could take “all nodes”, rebrand it as the one designated “broadcast” address instead of saying “set all the bits in the host field”, and it’d work exactly the same.
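
        (You can see the functional equivalence from a shell — both of these address every host on the link, though whether hosts answer echo requests is a separate sysctl question. Interface name and subnet hypothetical:)

            # IPv4: ping the subnet’s broadcast address (iputils wants -b)
            ping -b 192.168.5.255
            # IPv6: ping the all-nodes link-local multicast group
            ping -6 ff02::1%eth0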

        1. 1

          My point was this: Why remove broadcast, then re-work everything to use an “everyone” multicast group, which is just broadcast with extra steps?

          I would turn this question around. Since broadcast can be simulated with multicast, why have broadcasts at all? There’s no advantage to having both.

          If you want to know why this change was made, then I would try searching the IETF’s IPng mailing list archives and looking at the IPng proposals. SIPP, the proposal that became IPv6, was an evolution of SIP. You could also try emailing Steve Deering (the designer of SIP).

          1. 1

            I do understand your point. And, with the current implementation that’s what’s been done, I get that.

            My point being: why not label the “broadcast” (airquotes) group here as real broadcast? It’s the multicast addressing scheme using a fixed address, but protocols that expected “broadcast” (ARP) would just need to change their semantics for IPv6 instead of being cast aside completely.

      4. 1

        Not to word this as an attack: the entire article is going to be re-written shortly, regardless. Even the spell checker gave up on this one. If there’s anything else you’d like to specifically point out as empty or ill-informed, I’d love to hear it and take it into consideration.

        1. 3

          If there’s anything else you’d like to specifically point out as empty or ill-informed, I’d love to hear it and take it into consideration.

          I’ll start with a disclaimer and a couple caveats.

          • It’s been quite a while since I’ve worked with layer 3 or layer 4 on a regular basis.
          • I haven’t used IPv6 in anger.
          • I won’t respond to your entire post; I’ve hit my time box.

          You might also consider reading the comments at HN.

          First Paragraph

          IPv6 was a draft in 1997(!), and became a real Internet Standard in 2017.

          IETF standards terminology is confusing.

          • Proposed Standard - stable and has a well-reviewed specification.
          • Draft Standard - like a proposed standard but more mature. This category was discontinued in 2011.
          • Standard - stable, mature, and has a well-reviewed specification. It can take a number of years for a standard to reach this category.

          Proposed Standards and Draft Standards are just as real as Standards; they’re just newer. IPv6 was a Draft Standard in 1998; it just took years for it to reach the Standard category.

          Allocation Issues

          Even though the entire “special” address assignments are exactly 1.271% of the entire IPv6 address space, we’re still allocating giant swathes of addresses. History repeats itself, you can see that right here.

          I don’t see this as repeating the mistakes of IPv4. Addressing requirements only increase over time; allocations in a new protocol should be generous to allow for new applications and larger networks. It’s much better to reserve 1.271% now than reserve 0.5% and be stuck with an undersized allocation. To paraphrase an old saying, there are only two sizes of allocations - too large and too small.

          Address Representation

          I agree that IPv6 addresses are longer and harder to type than IPv4 addresses. The same would be true for any IPv4 alternative, though. Hexadecimal addresses are easy to convert to binary (each digit is one nibble: fe80 is 1111 1110 1000 0000) and seem like the least worst option.

          URLs

          To connect to a raw IPv6 address, you wrap it in square brackets. To connect to 2607:f0d0:1002:51::4 directly, that’s http://[2607:f0d0:1002:51::4]/ Why is this a thing?!

          The first URI RFC was published in 1994. It used “:” to specify the port number. IPv6 was still a work in progress at the time, so it’s not too surprising that there’s a conflict here. URLs have since evolved into a monster specification.

          DNS

          Unless you have your own DNS server (actually not that hard) that’s configured, you’re still manually typing addresses. Of course if you have, say, pfSense managing your network, every static DHCP lease will be registered in DNS, but it has to take a DHCP lease. And if this device doesn’t… well, I hope you don’t mind typing that out by hand to connect so you can configure it.

          You could also use multicast DNS, hosts files, or LLMNR for local name resolution. I would avoid LLMNR, but either of the others would work. IPv6 does require more use of DHCP than IPv4. Any device with an IPv6 stack and no DHCP client is unfit for use. There’s also the old standby of assigning devices mnemonic MACs.

          That is insane. The IPv6 rDNS TLD is just ip6.arpa, and the IP part is… every single hex digit, reversed.

          It’s longer and uglier but consistent with IPv4.
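
          A worked example, using the documentation prefix: the PTR name for 2001:db8::1 is every nibble of the fully expanded address, reversed:

              1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa.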

          NAT

          Don’t make a virtue out of a necessity. NAT is ubiquitous in IPv4 networks because addresses are scarce. IPv6 addresses are plentiful enough that it isn’t needed.

          In this sense, all unknown traffic is dropped, and traffic that I have NAT rules for are also allowed past the firewall. This is a “default drop” system. Nothing gets through unless I say so.

          NAT doesn’t guard against compromised internal systems reaching out, e.g. via UPnP or NAT slipstreaming. Allowing internal hosts to initiate arbitrary connections is a security risk for IPv4 and IPv6 alike; default permit is an anti-pattern. Your network, your rules, but NAT isn’t a substitute for a firewall.

          ARP / NDP

          I agree with @vbernat’s opinion on this topic. ARP was fine when it was introduced but NDP is a better solution to the problem it solves.

          One Other Random Remark

          And this is just the sign that you’ve made a stupid protocol: Enabling IPv6 on the LAN side of Sophos UTM 9 causes the WAN side to lose it’s link. …Why? I can’t even make my local network IPv6 for link-local communications because then the machine can’t connect to the rest of the internet.

          Why is IPv6 to blame for a buggy/broken implementation? Poorly behaved hosts are a fact of life on networks.

          1. 1

            I’ll take that into consideration… Also, that “One Other Random Remark” section wasn’t meant to blame IPv6; I’m just pointing out the headache it’s given me just getting any IPv6 support over here.

            I can clearly see a couple sections are worded such that people consistently miss what I intended to say, and I’ll update that.

    5. 32

      This article is, IMHO, mostly complaining that things are different.

      1. 9

        Right. The only thing I hate about IPv6 is that it isn’t supported everywhere.

        1. 4

          I have an IPv6-only server hosting repos that I sync to GitHub periodically. However, GitHub is IPv4-only, so I accomplish this by pulling all the repos down to my home network (dual stack) and pushing them back up to GitHub. Bit of a joke, really.
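
          (In practice it’s just a dual-remote dance — a sketch for one repo, remote names and paths hypothetical:)

              git clone ssh://v6server/srv/git/project.git
              cd project
              git remote add github git@github.com:user/project.git
              git push github --all    # plus --tags when needed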

        2. 1

          And most of the things I gripe about are likely due to:

          1. It not being supported everywhere, so there’s no consistent baseline of what an IPv6 host “should” do, partially because…
          2. Nobody quite knows IPv6 to any serious degree, minus the writers of the spec. Many places I’ve interacted with have already written off IPv6 as this giant monster that’s “just too different to ever understand,” which kinda causes a self-feeding cycle: nobody wants to learn the protocol, so nobody implements it, which means nobody learns it…

          I think it would be a great thing if we had some serious, global IPv6 adoption. There’d be a lot more effort put into understanding it, and things that have been consigned to the “well, this is impossible” bin might actually find a way forward with that gained knowledge.

          But any large change like this tends to cause its own catch-22. And without forcing people to upgrade, it’s just not going to progress at any serious rate.

      2. 3

        Yep. I have IPv4 / IPv6 dual stack at home working with iOS, MacOS, Windows, Linux, Roku, a Midea U (smart air conditioner), and some other things I don’t even remember off the top of my head. It wasn’t even hard to set up. Never have any problems. And I use VLANs, WireGuard, and so on, which according to this article should have driven me to drink during setup.

        I also have an IPv6-only VPS. Added an AAAA record for it, some ip6tables rules (because I’m lazy and haven’t learned nftables syntax), and everything worked first try.

        1. 4

          nftables is worth the time investment. My VPS is dual stack, but thanks to nftables I can use a single config for both protocols.
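
          (The trick is the inet family, which hooks IPv4 and IPv6 at once — a minimal sketch, ports and names hypothetical:)

              table inet filter {
                  chain input {
                      type filter hook input priority 0; policy drop;
                      ct state established,related accept
                      iif "lo" accept
                      icmp type echo-request accept
                      icmpv6 type { echo-request, nd-neighbor-solicit, nd-neighbor-advert, nd-router-advert } accept
                      tcp dport { 22, 80, 443 } accept
                  }
              }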

        2. 1

          VLANs, WireGuard

          I don’t believe I talked about either of those, actually, unless you want to count a cross-VLAN firewall/NAT/NPT…

          Personally, most issues I see come from dual-stack; running IPv6-only would probably improve a lot of things. And, admittedly, for most people, IPv6 isn’t going to cause a headache. For people like me, who will tweak every variable of a network to within an inch of its life to get the results, performance, and overall function I want, it’s a lot easier to just do without.

    6. 2

      Makes me think about doing this myself, but I’ll need to get Hugo to behave correctly in rendering proper HTML markup (especially since images are a pain for it).

      1. 2

        You really should do it.

        1. 1

          I’m working on it now; you can check https://teknikaldomain.me/index.xml if you want to see the moment it deploys and doesn’t magically break. (Meaning, in 4 years, when computers behave correctly.)

          Luckily, Cloudflare doesn’t cache XML files by default, or my RSS would only update every 2 days.

          1. 1

            Only one issue: some posts can have a banner image (or even a gallery / slideshow of banner images that flip through every few seconds), and that can’t reliably be translated into RSS. Mainly because, with my current knowledge of Hugo templates, conditionals, and page variables, I don’t know how to either insert the images only if they exist, or put a little note - “Due to the limitations of RSS, we’re unable to show the banner images for this post. Please visit the real post link to view them.” - at the very top before the rest of the content.

            …I could just put a generic “some resources may not have loaded” warning on everything, but that feels… wrong somehow. Guess for right now it has to be images in the actual content only.

            Edit: Found it!
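
            (For the curious, the shape of the fix is a conditional in the RSS template — a sketch, assuming a hypothetical banner front-matter param:)

                {{ with .Params.banner }}
                  <p>Note: due to the limitations of RSS, banner images cannot be shown. Please visit the real post to view them.</p>
                {{ end }}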

    7. 8

      Nice article. I cut my teeth on Apple’s internal Projector system; dunno when it was introduced, but it was already there when I started in 1991, and was used until the late 90s when everything was gradually imported into this “CVS” tool the new NeXT overlords brought with them. Projector was CVS-like. What I remember most was its terrible merging: it originally had no 3-way merge, so all competing changes had to be merged by hand! My AppleScript co-worker Wm. Cook got frustrated enough to write a 3-way merge script (in MPW shell) which eventually became a standard part of the workflow though it was never integrated into the tool itself.

      When I was briefly at Sun in 1997-98 I was introduced to a system they’d built atop SCCS, that was distributed in the same sense as modern 3rd generation tools. I thought it was genius the way you could have your own repo on your local machine, and how commits could be successively pushed into dev/integration/build servers. Back at Apple I told my co-workers about this awesome idea, but resigned myself to CVS and then SVN. So I was very happy when the 3rd-gen systems like Monotone, Darcs and Mercurial started to appear in the wild in the early 00s.

      1. 4

        Very cool! Glad you enjoyed it! I find it very interesting to hear about the internal solutions that companies devise to fill the tooling gaps in their workflows.

        Luckily, CVS is before my time, so I never had to experience the pain of a brutal merge with it on a real project (and it sounds like Projector was even more painful). I just set up some test projects with CVS, played with it, peeked under the hood, and consulted my buddy Teknikal_Domain to get a feel for how it works.

        Despite the negative sentiment (generally frustration) that the older tools tend to evoke, I was surprised to find that most of the features of most of the tools seem to work very well (at least on my trivial test projects).

        I had the luxury of starting my VCS learning with Git once it was already a well-formed project. I think newer developers tend to discount (or be completely unaware of) the impact that the older tools had on this field. My eyes were opened by the influence (including direct integration and extension) that the “legacy systems” have on the newer tools. Expressing that view became an (initially unplanned) purpose of the article.

      2. 2

        I wonder, just out of curiosity: do you think there’s any merit to the older systems like CVS and SVN? Is there anything more that we could learn from them, or are they, for the most part, obsolete and their secrets exhausted?

        1. 5

          Overall, I don’t think so. I would never go back!

          I remember a few times when I thought an innovation in a new system was a bad idea, but changed my mind after using it. Like, I didn’t like how in SVN a file didn’t have a consecutively-numbered history anymore; and the “staging area” feature of git seemed needlessly overcomplicated. I learned better.

          I don’t find the pre-3rd gen systems interesting, because they’re not distributed. What I’m fascinated by is propagating content (documents, discussions, databases…) across a decentralized network, because I think that’s the future of the open Internet. Modern VCSs are obviously good at that, but unfortunately they’re optimized for source code (directory hierarchies of smallish line-based text files), and the amount of metadata they keep is excessive for many use cases where it’s not critical to keep a full revision history.

          1. 3

            What I’m fascinated by is propagating content (documents, discussions, databases…) across a decentralized network

            May I suggest you take a look at IPFS if you haven’t already? That sounds like something you’d enjoy playing around with.

        2. 2

          I sometimes use first-generation version control, such as RCS, for files that only I work on (configuration files, HTML documents, etc.). The main benefit over git is that I can have multiple independently versioned files in one directory.

          Emacs has a great interface for interacting with it (vc), so I don’t have to bother with the specific commands, but can still easily browse the history, create blames, and add changes.

          1. 4

            A lot of my servers have RCS guarding config files with strict locking disabled, just so I always have the ability to undo to a clean copy, and once something works I can actually describe the change out-of-band instead of making my config 80% comments as to what piece does what, why, the history of it all…

            If you’re going to do that, fun tip: create an RCS/ directory, and RCS will put all its ,v files in there instead of in the same directory, which makes things cleaner. Especially when I have 20 different files of VCL (Varnish) and then 20 other files that look exactly the same to my tired eyes.
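
            (The whole workflow, for anyone following along — a sketch, file name hypothetical; rcs -U is the “strict locking disabled” part:)

                mkdir RCS                        # the ,v files land here from now on
                ci -u default.vcl                # initial check-in, keep a working copy
                rcs -U default.vcl               # disable strict locking
                # …edit, then describe the change out-of-band:
                ci -u -m"tune backend probe" default.vcl
                rlog default.vcl                 # browse the history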

            1. 2

              I just use Ansible to manage those config files, with the Ansible playbooks and templates in a Mercurial repo. In my experience it works far better because you have everything for a system in one place.

              1. 3

                I looked at Ansible and Chef (and still can’t really decide which one I think is “better”), but in the end, my, err… organically-grown network is just a bit too much of an unorganized mess to properly set that up. I decided that next time I tear it down and do a full rebuild (which will be done one day), then I’ll start with those tools from the ground up instead of trying to mash them into an already existing system that really doesn’t want to change.

    8. 4

      For “internals” I find it quite superficial. There is nothing about Darcs’ scalability problems, which Pijul claims to fix.

      1. 6

        Me-ow! Maybe you can get a refund?

        I found the level of detail about right for a high-level historical overview. I’m sure I could dig up more information if I want… for instance this “interleaved deltas” thing SCCS used sounds fascinating.

        1. 4

          It is, from a conceptual standpoint. Unlike the successive (or “reverse”) deltas of RCS, SCCS can construct any revision in about the same amount of time, because the cost depends only on the size of the history file. Since RCS stores revisions as successive deltas, the farther back in history you go, the more you have to “un-delta” to extract a revision, and the longer that takes.
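
          (A simplified sketch of the weave idea: every line of every revision lives in one file, bracketed by insert/end markers tagged with the delta that introduced it — ^A standing in for the control character SCCS actually uses — so extracting any revision is a single pass:)

              ^AI 1
              a line present since revision 1
              ^AI 2
              a line added in revision 2
              ^AE 2
              another revision-1 line
              ^AE 1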

          I know Wikipedia has an article on interleaved deltas, if you want a starting point…

        2. 2

          Lol and thanks! Yes, there is always a balance to strike between depth/breadth/accessibility/length. I tried to appeal to the technical side of the uber-nerd without shunning the interested novice, and added a pinch of historical context.

      2. 3

        Appreciate the feedback. I will look into that Darcs item you mentioned and consider adding a note about it. If you have any other suggestions I’d love to hear them.

    9. 3

      Hey everyone, I know I wasn’t here for the first post that went up (I was just on reddit for that), but I’m here now to help out with questions and explanations of any of the VCS systems covered. (I’ve been the one helping out behind the scenes, contributing knowledge, insights, and in this case a spare server to test with)

      Additionally, if you like this sort of thing, feel free to head over to my blog where I cover a number of things… some programming, some server stuff, some just putting way too many stickers on my laptops because it was funny.