Threads for grahamedgecombe

  1. 5

    Apparently you can buy IPv6 addresses, use them for the servers on your home network, and then if you change your ISP, continue to use the same IP addresses?

    You need to be an RIR (RIPE/ARIN/LACNIC/APNIC/AfriNIC) member for that. The membership fee alone runs into the thousands per year. Then you need to arrange routing with hosting providers, and the ones willing to do that will charge at least hundreds per month. No public cloud I’m aware of supports this at all, so you also need your own hardware in a datacenter where your transit provider is present.

    In other words, owning your IPv6 network is completely out of reach for individuals and small projects. I believe it shouldn’t be that way, and that RIRs are basically rent-seeking organizations now that the resources they still distribute (32-bit ASNs and IPv6 addresses) are anything but scarce, but I can’t see it changing any time soon.

    1. 10

      Vultr will let you do BGP with them for (as far as I know) no additional cost above the price of your VPS: https://www.vultr.com/docs/configuring-bgp-on-vultr/

      In the RIPE area at least, you can obtain a provider-independent IPv6 assignment via an LIR - you don’t have to go directly to RIPE. A cheap option is Snapserv, who offer an IPv6 PI assignment for 99 EUR/year and an ASN for a one-off fee of 99 EUR. These can both be transferred to another LIR if, for example, Snapserv went out of business, or you wanted to switch LIR for some other reason. They also offer IPv6 PA assignments for less money, but the trade-off is that a PA assignment is tied to the LIR.

      You do need to be multi-homed to justify the PI/ASN assignments, so you’d need to find another upstream provider in addition to Vultr. Someone I know uses Vultr and an HE tunnel to justify it.

      1. 1

        Interesting, that’s certainly an improvement. My company is a RIPE member, so I haven’t been watching the PI situation closely; I’m glad to see it improve.

      2. 9

        In other words, owning your IPv6 network is completely out of reach for individuals and small projects. I believe it shouldn’t be that way, and that RIRs are basically rent-seeking organizations now that the resources they still distribute (32-bit ASNs and IPv6 addresses) are anything but scarce, but I can’t see it changing any time soon.

        I suspect the problem is routing tables. It would be trivial to assign every person a /64 without making a dent in the address space but then you’d end up with every router on any Internet backbone needing a few billion entries in its routing table. That would completely kill current (and near-future) hardware. Not to mention the fact that if everyone moving between ISPs required a BGP update, the total amount of BGP traffic would overwhelm networks’ abilities to handle the update rate.

        You need some mechanism to ration the number of routable networks and money tends to be how we ration things.
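
        A rough back-of-the-envelope of my own (illustrative numbers only, not from the parent comment): today’s full global BGP table is on the order of a million routes, so one routable prefix per person would be thousands of times larger. A tiny Python sketch:

        # Very rough illustration of the scale problem with one routable prefix per person.
        people = 8_000_000_000          # roughly one /64 per person
        current_table = 1_000_000       # order of magnitude of today's full BGP table
        print(f"{people / current_table:,.0f}x today's global routing table")  # ~8,000x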

        1. 2

          I doubt this will ever be a problem in practice. Even among those who host their own servers, the number of people who want to own their address space is always going to be small.

          I’m also not advocating for making addresses free of charge, only for making them available for less than the current exorbitant prices that RIRs charge for membership.

        2. 2

          TIL, that’s really interesting. I just remember that many, many years ago people were entertaining this, also via SixXS and HE tunnels, which kinda worked for a while.

          1. 2

            Oh, but with tunnelbroker.net and similar, the provider owns the network; you just get temporary permission to use it and can’t take it with you.

            1. 1

              Yes, of course, but at least the way it worked you could in theory keep using it across ISP switches. And I think my SixXS account was nearly a decade old at the end; some people might have moved cities three times in that time.

          2. 1

            I always wish that addresses were more equitably distributed. With IPv6 there’s no reason not to. And yet ☹

            1. 1

              Welp, for some reason my ISP provides every customer a /64; I don’t know what the reason for that is. There is no single person on the internet that needs a /64, and I’m certain no German household needs one. But yeah, tons of address space wasted for no reason. IPv8 we’re coming..

              1. 5

                It’s the minimum subnet size, and if you stray from it a lot of the protocol breaks; making it smaller would be insane. And it’s not wasteful: you could hand every person on the planet billions of /64s, the address space is REALLY BIG. IPv4 this is not. For a prefix to show up in BGP it needs to be a /48, and the minimum allocation is a /32, of which there are as many as there are IPv4 addresses. It should actually be a /48 you’re given, not a /64 (or a /60 or /56 in Comcast’s home/business cases).

                Why do you believe IPv8 is needed because of /64 allocations? Can you back that up with some numbers?

                I think we’re good to be honest: https://www.samsclass.info/ipv6/exhaustion.htm

                1. 1

                  I haven’t done the math, but I’ll let the latest APNIC report speak for itself in that regard (you’ll have to search; it’s long and there’s no way to link to a specific chapter).

                  However, before we go too far down this path it is also useful to bear in mind that the 128 bits of address space in IPv6 has become largely a myth. We sliced off 64 bits in the address span for no particularly good reason, as it turns out. We then sliced off a further 48 bits for, again, no particularly good reason. So, the vastness of the address space represented by 128 bits in IPv6 is in fact, not so vast.

                  And

                  Today’s IPv6 environment has some providers using a /60 end site allocation unit, many using a /56, and many others using a /48

                  So it’s not really a standard that breaks things if you deviate, because then things would already be breaking.

                  I just don’t see a reason why we’re throwing away massive address ranges; even my private server gets a /64, and that’s one server, not a household or anything like that.

                  1. 2

                    The main reason your LAN must be a /64 is that the second half of each address can contain a MAC address (SLAAC) or a big random number (privacy extension).
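
                    To make that concrete, here’s a rough Python sketch of my own (the MAC address is made up) of how SLAAC’s EUI-64 scheme fills the lower 64 bits: split the MAC, insert ff:fe in the middle, and flip the universal/local bit.

                    # Rough sketch: derive a SLAAC (EUI-64) interface identifier from a MAC address.
                    # The MAC below is made up for illustration.
                    mac = "52:54:00:12:34:56"
                    octets = [int(b, 16) for b in mac.split(":")]
                    octets[0] ^= 0x02                               # flip the universal/local bit
                    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe in the middle
                    hextets = [f"{eui64[i] << 8 | eui64[i + 1]:x}" for i in range(0, 8, 2)]
                    print("::" + ":".join(hextets))                 # prints ::5054:ff:fe12:3456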

                    1. 1

                      So it’s not really a standard that breaks things if you deviate, because then things would already be breaking.

                      For routing, no, but in general going smaller than a /64 does break things like SLAAC. The original guidance was a /48 per end site; it’s been relaxed somewhat since the original RFC, but it can only go down to a /64. I’m at work or I’d pull up the original RFC. Going smaller than a /64 does break things, just not at the level being referenced there.

                      I just don’t see a reason why we’re throwing away massive address ranges; even my private server gets a /64, and that’s one server, not a household or anything like that.

                      You have to get out of the IPv4 conservation mindset. A /64 is massive, yes, but 64 bits of address space is the entire IPv4 address space squared, that is… a large amount of space. It also enables things like ephemeral addresses that change every 8 hours. It’s better to think of a /64 as the minimum addressable/routable subnet, not as a single host address like a /32 in IPv4. And there are A LOT of them; we aren’t at risk of running out even if we get allocation-crazy. And that’s not hyperbole: we could give every single device, human, animal, place, and thing a /64 and still not approach running out.
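
                      To put rough numbers on that (a quick Python back-of-the-envelope of my own):

                      total_64s = 2 ** 64                  # each /64 is identified by its upper 64 bits
                      people = 8_000_000_000               # rough world population
                      print(f"{total_64s:.3e} /64 subnets in total")     # ~1.845e+19
                      print(f"{total_64s // people:,} /64s per person")  # ~2.3 billion each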

                      1. 1

                        Today’s IPv6 environment has some providers using a /60 end site allocation unit, many using a /56, and many others using a /48

                        Also, just realized that you might be reading /60 or /56 as smaller than a /64. It’s unintuitive, but that number is the prefix length, not the size: a smaller block has a prefix above 64, not below, i.e. something like a /96 is what would break in my example. It’s also why assigning *just* a /64 is a bit evil on the part of ISPs, and the allocation should be larger.
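
                        If the direction of the prefix lengths is confusing, counting how many /64 LANs fit inside each common end-site allocation makes it obvious (a quick Python illustration of my own):

                        # Number of /64 LAN subnets inside each common end-site allocation size:
                        for prefix in (48, 56, 60, 64):
                            print(f"/{prefix} contains {2 ** (64 - prefix):>6} /64s")
                        # /48 -> 65536, /56 -> 256, /60 -> 16, /64 -> just 1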

                    2. 1

                      IPv8 we’re coming

                      fun fact: it’s called 6 because it’s the 6th version of IP (kinda), not because of the address size (an address is written as 8 groups anyway). You’re rooting for IPv7!

                1. 12

                  Consume input from stdin, produce output to stdout.

                  This is certainly a good default, though it’s helpful to also offer the option of a -o flag to redirect output to a file opened by the program itself instead of by the shell via stdout redirection. While it’s a small degree of duplication of functionality (which is unfortunate), it makes your program much easier to integrate into makefiles properly.

                  Without a -o flag:

                  bar.txt: foo.txt
                  	myprog < $< > $@
                  

                  If myprog fails for whatever reason, this will still create bar.txt, resulting in subsequent make runs happily proceeding with things that depend on it.

                  In contrast, with a -o flag:

                  bar.txt: foo.txt
                  	myprog -o $@ < $<
                  

                  This allows myprog to (if written properly) only create and write to its output file once it’s determined that things are looking OK [1], preventing further make runs from spuriously continuing on after a failure somewhere upstream.

                  (You can work around the lack of -o with a little || { rm -f $@; false; } dance after the stdout-redirected version, but it’s kind of clunky and has the disadvantage of deleting an already-existing output file on failure. This in turn can also be worked around by something like myprog < $< > $@.tmp && mv $@.tmp $@ || { rm -f $@.tmp; false; } but now it’s three times as long as the original command…might be nice if make itself offered some nicer way of solving this problem, but I’m not aware of one.)

                  [1] Or preferably, write to a tempfile (unlinking it on failure) and rename it into the final output file only when completely finished so as to avoid clobbering or deleting an existing one if it fails partway through.
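
                   For what it’s worth, the tempfile-then-rename pattern from [1] is only a few lines if the tool happens to be written in something like Python; a sketch with made-up names (write_output, out.txt), not from the article:

                   import os, sys, tempfile
                   # Write to a temp file in the same directory, then atomically rename it into
                   # place, so a failure part-way through never leaves a bad output file behind.
                   def write_output(path, data):
                       fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
                       try:
                           with os.fdopen(fd, "w") as f:
                               f.write(data)
                           os.replace(tmp, path)   # atomic on POSIX filesystems
                       except BaseException:
                           os.unlink(tmp)          # drop the temp file on failure
                           raise
                   write_output("out.txt", sys.stdin.read())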

                  1. 9

                    might be nice if make itself offered some nicer way of solving this problem

                    GNU make has a .DELETE_ON_ERROR special target: https://www.gnu.org/software/make/manual/html_node/Special-Targets.html#index-_002eDELETE_005fON_005fERROR

                    It’s closer to your first example than the second though.
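
                     For reference, it’s just a one-line special target; a minimal sketch combining it with the first example above (same hypothetical myprog/foo.txt/bar.txt names):

                     .DELETE_ON_ERROR:
                     bar.txt: foo.txt
                     	myprog < $< > $@

                     With that in place, if myprog fails, GNU make deletes the partially written bar.txt instead of leaving it behind for later runs.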

                  1. 5

                    In one of the diagrams on certificate transparency, it looks like the browser agent may be expected to query the log server as an auditor, as part of checking the validity of a certificate.

                    Doesn’t that seem like a privacy leak, if every time you visit a TLS site, you (browser) ask a 3rd party if the certificate appears in the trusted log? It seems like that 3rd party would then have the capability to associate your ip with requests for a specific TLS cert.

                    1. 2

                      Google are implementing the inclusion proof check using DNS mirrors of the log servers - see: https://docs.google.com/document/d/1FP5J5Sfsg0OR9P4YT0q1dM02iavhi8ix1mZlZe_z-ls/view

                      The idea is that you already leak the hostnames you’re visiting to your DNS resolver and leaking the requests for inclusion proofs to the same resolver isn’t too different.

                      1. 1

                        Interesting. Thanks for the link.

                        Reading it, it looks like the queries will still go to “log servers”, just using a DNS based channel.

                        Each trusted log will have a domain name that can be used to perform CT DNS queries. This domain name will be added to the CTLogInfo stored in Chrome for each log… [snip] Clients will ask for inclusion proofs over DNS, which is meant to be proxied via the clients’ DNS servers. Analysis of privacy implications for using a DNS-based protocol are documented here

                        However, in that referenced document there is some clarification..

                        Chrome clients receive Signed Tree Heads (STHs) via the component updater, so in order to check for inclusion of an entry all a client has to do is obtain an inclusion proof from the CT log that issued it. However, doing so over HTTPS will expose, to the log, which client observed which certificates.
                        To overcome this, a DNS-based protocol for obtaining inclusion proofs was devised. Google is operating mirrors which implement this DNS-based protocol of all CT logs trusted by Chromium.

                        Further on, that same document states that the CT DNS queries should go over the same resolver path as standard client DNS requests, using existing infrastructure.

                        That does indeed sound much more reasonable than querying the log servers directly!

                        I wonder what the ramifications of MitM’ing (either modifying or simply blocking) those specific DNS queries will be. Will this use some in-channel signatures, or rely on DNSSEC? Seems like /lots/ of overhead (and yet more centralization) compared to HPKP.

                        1. 1

                          As for centralisation: the evolution of revocation checks (OCSP stapling) is actually scarier; here, performing the audit is just a public service by the browser, not a precondition for the connection. A CT inclusion proof can also be provided inside the certificate.

                          As for MitM: these responses are effectively signed. STHs are obtained via the browser updater, and a CT inclusion proof is a proof that the certificate has been included in the Merkle tree. If you can break that, you can break SHA-256 or the Chrome updater, and then you don’t care about CT anymore. Using HTTPS instead of DNS would not make the proofs any smaller.
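
                          For the curious, verifying such an inclusion proof is just a short loop over SHA-256 hashes; a rough Python sketch of the RFC 6962-style check (my own, not Chrome’s actual code):

                          import hashlib
                          def sha256(data):
                              return hashlib.sha256(data).digest()
                          # RFC 6962 hashes leaves with a 0x00 prefix and interior nodes with 0x01.
                          def verify_inclusion(leaf_index, tree_size, entry, proof, root_hash):
                              fn, sn = leaf_index, tree_size - 1
                              r = sha256(b"\x00" + entry)          # leaf hash
                              for p in proof:                      # sibling hashes, leaf to root
                                  if sn == 0:
                                      return False
                                  if fn & 1 or fn == sn:
                                      r = sha256(b"\x01" + p + r)
                                      while not fn & 1 and fn != 0:
                                          fn >>= 1
                                          sn >>= 1
                                  else:
                                      r = sha256(b"\x01" + r + p)
                                  fn >>= 1
                                  sn >>= 1
                              return sn == 0 and r == root_hash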

                          As for HPKP: well, it is just too convenient for ransomware after a server takeover, and inconvenient for Let’s Encrypt at the same time.

                    1. 6

                      Woah! Did not see this coming. I knew about their intermediate certificates, but I figured they wouldn’t want to go through the hassle of being a root CA as well.

                      This raises two questions for me:

                      • Will they open this new root CA to the public? Or as an intermediate sub-CA? I don’t expect so at this moment, though turning around and offering a second ACME implementation could put a check on Let’s Encrypt.
                      • The CA/Browser Forum is based on representation of CA and browser vendors. Will Google be on both sides now - reps from Google Trust Services and the Chrome teams? Let’s Encrypt already disrupted the precarious political balance simply by existing; this kind of ability to have 2 votes vs everyone else’s 1 might cause issues too.
                      1. 6

                        Will they open this new root CA to the public?

                        In Google’s application for inclusion in the Mozilla root store (https://bugzilla.mozilla.org/show_bug.cgi?id=1325532) they say:

                        Google is a commercial CA that will provide certificates to customers from around the world. We will offer certificates for server authentication, client authentication, email (both signing and encrypting), and code signing. Customers of the Google PKI are the general public. We will not require that customers have a domain registration with Google, use domain suffixes where Google is the registrant, or have other services from Google.

                        Complete guesswork, but I wonder if they’ll provide certificates as part of Google Cloud (like Amazon do).

                        Will Google be on both sides now - reps from Google Trust Services and the Chrome teams? Let’s Encrypt already disrupted the precarious political balance simply by existing; this kind of ability to have 2 votes vs everyone else’s 1 might cause issues too.

                        This actually already came up after the WoSign fiasco. The majority shareholder in WoSign is Qihoo 360, which is also a member of the CA/B forum as a browser. WoSign/StartCom/Qihoo 360 now only have a single vote (as a browser).

                        I imagine that Google will also only have one vote, and they’ll presumably choose to continue voting as a browser.

                      1. 4

                        There’s a slight problem in this tutorial in that it assumes ESP (the stack pointer) will be defined by the boot loader to point to an appropriate location for the stack. However, the Multiboot standard states that ESP is undefined, and that the OS should set up its own stack as soon as it needs one (here the CALL instruction uses the stack, and the compiled C code may well too).

                        An easy way to solve this is to reserve some bytes in the .bss section of the executable for the stack by adding a new section in the assembly file:

                        [section .bss align=16]
                          resb 8192        ; reserve 8 KiB of uninitialised space for the kernel stack
                          stack_end:       ; the stack grows downwards from this label
                        

                        Then before you make use of the stack (between cli and call kmain would be appropriate in this case), you need to set the stack pointer:

                        mov esp, stack_end    ; point ESP at the top of the reserved stack
                        
                        1. 4

                          The post hasn’t mentioned pinning, which should reduce the damage a CA compromise could cause. There’s a standard for it being pushed forward by some people at Google, and I believe Chrome already implements it.

                          The idea is that the first time you visit a site, the browser goes through the standard procedure of verifying your certificate is signed by a trusted CA. The website includes a header specifying the “pinned” public key(s) which are used by certificates on that website. For subsequent connections (assuming the pins have not expired), the browser will only accept the certificate if its public key matches one which is pinned. If it doesn’t match a pinned key (e.g. because it was a fraudulent certificate issued by someone attacking a CA, which will have a different key to the website’s real key) the browser won’t trust it.

                          The risk of a fraudulent certificate issued by a compromised CA being accepted is therefore reduced to the first time you visit the site, or to cases where you visit the site so infrequently that the pins expire. Chrome solves this for popular sites (google.com, twitter.com, etc.) by including a hard-coded set of pinned public keys for them.

                          In non-browser scenarios, e.g. mobile phone apps, you can simply hard code the pinned keys in your application and remove reliance on CAs entirely.
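
                          As a concrete illustration (my own sketch, not from the parent comment): the pinned value is a base64-encoded SHA-256 hash of the certificate’s Subject Public Key Info, which you can compute with Python’s cryptography package (the file name here is a placeholder):

                          import base64, hashlib
                          from cryptography import x509
                          from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
                          # Compute an HPKP-style pin-sha256 value from a certificate on disk.
                          cert = x509.load_pem_x509_certificate(open("server.pem", "rb").read())
                          spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
                          pin = base64.b64encode(hashlib.sha256(spki).digest()).decode()
                          print(f'Public-Key-Pins: pin-sha256="{pin}"; max-age=5184000')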

                          1. 2

                            Google also has Certificate Transparency, which they plan to require in Chrome for EV certs issued in 2015.