1. 12

    Consume input from stdin, produce output to stdout.

    This is certainly a good default, though it’s also helpful to offer a -o flag that redirects output to a file opened by the program itself, rather than by the shell via stdout redirection. While this duplicates a small amount of functionality (which is unfortunate), it makes your program much easier to integrate into makefiles properly.

    Without a -o flag:

    bar.txt: foo.txt
    	myprog < $< > $@
    

    If myprog fails for whatever reason, this will still create bar.txt, resulting in subsequent make runs happily proceeding with things that depend on it.

    In contrast, with a -o flag:

    bar.txt: foo.txt
    	myprog -o $@ < $<
    

    This allows myprog to (if written properly) only create and write to its output file once it’s determined that things are looking OK [1], preventing further make runs from spuriously continuing on after a failure somewhere upstream.

    (You can work around the lack of -o with a little || { rm -f $@; false; } dance after the stdout-redirected version, but it’s kind of clunky and has the disadvantage of deleting an already-existing output file on failure. This in turn can also be worked around by something like myprog < $< > $@.tmp && mv $@.tmp $@ || { rm -f $@.tmp; false; } but now it’s three times as long as the original command…might be nice if make itself offered some nicer way of solving this problem, but I’m not aware of one.)

    [1] Or preferably, write to a tempfile (unlinking it on failure) and rename it into the final output file only when completely finished so as to avoid clobbering or deleting an existing one if it fails partway through.
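    The footnote’s approach fits in a few lines; here is a sketch in Python for concreteness (safe_write and produce are illustrative names, not part of any real myprog):

```python
# Sketch of footnote [1]: write to a temp file in the destination's
# directory, rename into place only on success, and unlink the partial
# file on failure. An existing output file is never clobbered or deleted.
import os
import tempfile

def safe_write(path, produce):
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)   # same filesystem as `path`
    try:
        with os.fdopen(fd, "w") as f:
            produce(f)                        # may fail partway through
        os.replace(tmp, path)                 # atomic rename into place
    except BaseException:
        os.unlink(tmp)                        # remove the partial output
        raise
```

    Because the rename is atomic and only happens after produce() returns, a failure partway through leaves any previously existing output file untouched.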

    1. 9

      might be nice if make itself offered some nicer way of solving this problem

      GNU make has a .DELETE_ON_ERROR special target: https://www.gnu.org/software/make/manual/html_node/Special-Targets.html#index-_002eDELETE_005fON_005fERROR

      It’s closer to your first example than the second though.
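      For reference, it’s a one-line addition; with this, a failed myprog leaves no stale bar.txt behind for later runs to pick up (a sketch, assuming GNU make):

```make
# GNU make only: delete any target whose recipe exits non-zero, so a
# half-written bar.txt can't satisfy subsequent runs.
.DELETE_ON_ERROR:

bar.txt: foo.txt
	myprog < $< > $@
```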

    1. 5

      In one of the diagrams on certificate transparency, it looks like the browser agent may be expected to query the log server as an auditor, as part of checking the validity of a certificate.

      Doesn’t that seem like a privacy leak, if every time you visit a TLS site, you (the browser) ask a third party whether the certificate appears in the trusted log? It seems like that third party would then be able to associate your IP address with requests for a specific TLS cert.

      1. 2

        Google are implementing the inclusion proof check using DNS mirrors of the log servers - see: https://docs.google.com/document/d/1FP5J5Sfsg0OR9P4YT0q1dM02iavhi8ix1mZlZe_z-ls/view

        The idea is that you already leak the hostnames you’re visiting to your DNS resolver and leaking the requests for inclusion proofs to the same resolver isn’t too different.

        1. 1

          Interesting. Thanks for the link.

          Reading it, it looks like the queries will still go to “log servers”, just using a DNS based channel.

          Each trusted log will have a domain name that can be used to perform CT DNS queries. This domain name will be added to the CTLogInfo stored in Chrome for each log… [snip] Clients will ask for inclusion proofs over DNS, which is meant to be proxied via the clients’ DNS servers. Analysis of privacy implications for using a DNS-based protocol are documented here

          However, that referenced document offers some clarification:

          Chrome clients receive Signed Tree Heads (STHs) via the component updater, so in order to check for inclusion of an entry all a client has to do is obtain an inclusion proof from the CT log that issued it. However, doing so over HTTPS will expose, to the log, which client observed which certificates.
          To overcome this, a DNS-based protocol for obtaining inclusion proofs was devised. Google is operating mirrors which implement this DNS-based protocol of all CT logs trusted by Chromium.

          Further on, the same document states that STH DNS queries should go over the same resolver path as standard client DNS requests, using existing infrastructure.

          That does indeed sound much more reasonable than querying the log servers directly!

          I wonder what the ramifications of MitM’ing (either modifying or simply blocking) those specific DNS queries will be. Will this use some in-channel signatures, or rely on DNSSEC? Seems like /lots/ of overhead (and yet more centralization) compared to HPKP.

          1. 1

            As for centralisation: the evolution of revocation checks (OCSP stapling) is actually scarier. Here, performing the audit is just a public service by the browser, not a precondition for the connection, and the CT inclusion proof can even be provided inside the certificate.

            As for MitM: these responses are effectively signed. STHs are obtained via the browser updater, and a CT inclusion proof is a hash chain showing that the certificate has been included in the Merkle tree. If you can break that, you can break SHA-256 or Chrome updates, and then you don’t care about CT anymore. Using HTTPS instead of DNS would not reduce the proof size.

            As for HPKP: well, it is just too convenient for ransomware after a server takeover, and inconvenient for Let’s Encrypt at the same time.
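            For the curious, the inclusion-proof check itself is tiny. Here is a sketch of the RFC 6962-style Merkle audit-path verification (simplified, not Chrome’s actual code): the proof is just a list of sibling hashes, so any tampering breaks the SHA-256 chain up to the signed tree head.

```python
# Sketch of RFC 6962 Merkle audit-path verification: recompute the root
# from the leaf plus the sibling hashes in the proof and compare.
import hashlib

def leaf_hash(data):
    return hashlib.sha256(b"\x00" + data).digest()          # 0x00 = leaf

def node_hash(left, right):
    return hashlib.sha256(b"\x01" + left + right).digest()  # 0x01 = interior

def verify_inclusion(leaf, index, tree_size, path, root):
    if index >= tree_size:
        return False
    fn, sn = index, tree_size - 1
    r = leaf_hash(leaf)
    for p in path:
        if fn % 2 == 1 or fn == sn:     # sibling is on the left
            r = node_hash(p, r)
            if fn % 2 == 0:             # rightmost node: skip absent levels
                while fn != 0 and fn % 2 == 0:
                    fn //= 2
                    sn //= 2
        else:                           # sibling is on the right
            r = node_hash(r, p)
        fn //= 2
        sn //= 2
    return sn == 0 and r == root
```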

      1. 6

        Woah! Did not see this coming. I knew about their intermediary certificates, but I figured they wouldn’t want to go through the hassle of being a root CA as well.

        This raises two questions for me:

        • Will they open this new root CA to the public, or only use it as an intermediate sub-CA? I don’t expect so at this moment, though turning around and offering a second ACME implementation could put a check on Let’s Encrypt.
        • The CA/Browser Forum is based on representation of CA and browser vendors. Will Google be on both sides now - reps from Google Trust Services and the Chrome teams? Let’s Encrypt already disrupted the precarious political balance simply by existing; this kind of ability to have 2 votes vs everyone else’s 1 might cause issues too.
        1. 6

          Will they open this new root CA to the public?

          In Google’s application for inclusion in the Mozilla root store (https://bugzilla.mozilla.org/show_bug.cgi?id=1325532) they say:

          Google is a commercial CA that will provide certificates to customers from around the world. We will offer certificates for server authentication, client authentication, email (both signing and encrypting), and code signing. Customers of the Google PKI are the general public. We will not require that customers have a domain registration with Google, use domain suffixes where Google is the registrant, or have other services from Google.

          Complete guesswork, but I wonder if they’ll provide certificates as part of Google Cloud (like Amazon do).

          Will Google be on both sides now - reps from Google Trust Services and the Chrome teams? Let’s Encrypt already disrupted the precarious political balance simply by existing; this kind of ability to have 2 votes vs everyone else’s 1 might cause issues too.

          This actually already came up after the WoSign fiasco. The majority shareholder in WoSign is Qihoo 360, which is also a member of the CA/B forum as a browser. WoSign/StartCom/Qihoo 360 now only have a single vote (as a browser).

          I imagine that Google will also only have one vote, and they’ll presumably choose to continue voting as a browser.

        1. 4

          There’s a slight problem in this tutorial in that it assumes ESP (the stack pointer) will be set by the boot loader to point to an appropriate location for a stack. However, the Multiboot standard states that ESP is undefined, and that the OS should set up its own stack as soon as it needs one (here the CALL instruction uses the stack, and the compiled C code may well use it too).

          An easy way to solve this is to reserve some bytes in the .bss section of the executable for the stack by adding a new section in the assembly file:

          section .bss align=16
            resb 8192        ; reserve 8 KiB for the stack
          stack_end:
          

          Then before you make use of the stack (between cli and call kmain would be appropriate in this case), you need to set the stack pointer:

          mov esp, stack_end
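
          Putting the two pieces together, the entry point might look something like this (a sketch in NASM syntax, assuming a C entry point named kmain):

```nasm
global start
extern kmain

section .text
start:
    cli                   ; mask interrupts until the kernel installs an IDT
    mov esp, stack_end    ; Multiboot leaves ESP undefined, so set it first
    call kmain            ; CALL pushes a return address onto our new stack
.hang:
    hlt
    jmp .hang

section .bss align=16
    resb 8192             ; 8 KiB of stack, growing downward
stack_end:
```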
          
          1. 4

            The post hasn’t mentioned pinning, which should reduce the damage a CA compromise could cause. There’s a standard for it being pushed forward by some people at Google, and I believe Chrome already implements it.

            The idea is that the first time you visit a site, the browser goes through the standard procedure of verifying your certificate is signed by a trusted CA. The website includes a header specifying the “pinned” public key(s) which are used by certificates on that website. For subsequent connections (assuming the pins have not expired), the browser will only accept the certificate if its public key matches one which is pinned. If it doesn’t match a pinned key (e.g. because it was a fraudulent certificate issued by someone attacking a CA, which will have a different key to the website’s real key) the browser won’t trust it.

            The risk of a fraudulent certificate issued by a compromised CA being accepted is therefore reduced to the first time you visit the site, or to visits so infrequent that the pins expire in between. Chrome solves this for popular sites (google.com, twitter.com, etc.) by including a hard-coded set of pinned public keys for them.

            In non-browser scenarios, e.g. mobile phone apps, you can simply hard code the pinned keys in your application and remove reliance on CAs entirely.
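            Concretely, the header described above is Public-Key-Pins (HPKP, RFC 7469). A site might send something like the following, where both pin values are placeholders for base64-encoded SHA-256 hashes of a public key (the spec requires at least one backup pin):

```http
Public-Key-Pins: pin-sha256="<base64 SHA-256 of the site's key>"; pin-sha256="<base64 SHA-256 of a backup key>"; max-age=5184000; includeSubDomains
```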

            1. 2

              Google also has Certificate Transparency, which Chrome plans to require for EV certs issued from 2015 onward.