1. 1

    OK, I have a question about this design.

    First, the key is deterministically derived using the U2F device. (It’s always the same key.) That means the key could be stolen if you’re accidentally using a compromised SSH client, for instance. Unlike a key on a smart card or a Yubikey in PIV mode, where the root key never leaves the device.

    Presumably to mitigate this risk, GitHub also requires a TOTP one-time token if you’re using U2F. You have to push the button on your device, and it spits out a one-time token that GitHub can verify.

    But then what value does U2F add in the first place, if you still need to also use TOTP?

    Maybe I’m misunderstanding something here.

    1. 3

      The key is generated via FIDO2, and it’s not deterministic. With FIDO2 (the successor to U2F, with backwards compatibility), registration takes as parameters the relying party name (usually ssh:// for SSH keys, https://yoursite.com for websites), a challenge for attestation, the desired key algorithm, and a few optional extras. The key responds with a KeyID (an arbitrary piece of data that should be provided back to the key when you want to use it), the public key data, and the challenge signed with that key. The KeyID usually holds the actual private key, encrypted with an internal private key of the security key, which is then decrypted on the device in order to use it. The KeyID is assumed to be unique, so it should be (and generally is) generated using some sort of secure RNG on the security key.

      The flow with SSH is quite simple: when you create an SSH key backed by a security key, a key gets generated on the device, the KeyID gets stored as the private key file, and the public key file stores the returned public key. When connecting with SSH, the server you are authenticating to issues a challenge, which gets passed to the security key along with the appropriate KeyID, and the response is sent back to the server. No need for any additional TOTP tokens, which are less secure.
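      As a concrete illustration, this is what that flow looks like with OpenSSH (a sketch assuming OpenSSH 8.2 or newer, which supports these key types natively; ecdsa-sk works with more authenticators than ed25519-sk):

      # Generate a key pair on the attached security key. The "private key"
      # file written here holds only the KeyID (key handle); the real key
      # material never leaves the device.
      ssh-keygen -t ecdsa-sk -f ~/.ssh/id_ecdsa_sk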

      1. 1

        First, the key is deterministically derived using the U2F device. (It’s always the same key.) That means the key could be stolen if you’re accidentally using a compromised SSH client, for instance. Unlike a key on a smart card or a Yubikey in PIV mode, where the root key never leaves the device.

        How can this make sense? Surely the U2F device has access to a suitable CSPRNG.

      1. 6

        Total outsider here, but my understanding is that Rust newcomers struggle with satisfying the compiler. That seems necessary because of the safety you get, so OK, and the error messages have a great reputation. I would want to design in, for each error, possible fixes that would compile, plus a way to apply your chosen fix back to the source code. If that’s a tractable problem, I think it could cut trial and error down to one step and give you meaningful examples to learn from.

        Maybe add a rusty paperclip mascot…

        1. 9

          Actually, a lot of the error messages do offer suggestions for fixes and they often (not always) do “just work”. It’s really about as pleasant as I ever would’ve hoped for from a low-level systems language.

          1. 3

            That’s great! Is it exposed well enough to, say, click a button to apply the suggestion in an editor?

            1. 4

              In some cases, yes. See https://rust-analyzer.github.io/

              1. 1

                In a lot of cases, actually. It becomes too easy sometimes, because I don’t bother trying to figure out why it works.

              2. 1

                Yeah, it seems to be. I often use Emacs with lsp-mode and “rust-analyzer” as the LSP server and IIRC, I can hit the “fix it” key combo on at least some errors and warnings. I’m sure that’s less true the more egregious/ambiguous the compile error is.

            2. 3

              rusty paperclip mascot…

              There is this, but it doesn’t seem to have a logo; someone should make one!

            1. 14

              In addition to reducing the load on the root servers, QNAME minimization improves privacy at each stage of a request.

              quoting isc.org:

              “Let’s say you want to visit a blog site at https://someblogname.bloghosting.com.pl. In order to determine which IP address to connect to to reach that link, your computer sends a request to your ISP’s resolver, asking for the full name - someblogname.bloghosting.com.pl, in this case. Your ISP (or whoever is running the network you are using) will ask the DNS root, and then the top-level domain (.pl in this case), and then the secondary domain (.com.pl), for the full domain name. In fact, all you are finding out from the root is “where is .pl?” and all you are asking .pl is “where is .com.pl?” Neither of these requests needs to include the full name of the website you are looking for to answer the query, but both receive this information. This is how the DNS has always worked, but there is no practical reason for this today.”

              For BIND, it’s qname-minimization (strict | relaxed | disabled | off); for unbound, it’s qname-minimisation: yes.
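              For reference, minimal config snippets (a sketch; BIND grew this option in 9.14, and the option names are as documented for each server):

              # named.conf (BIND 9.14+)
              options {
                  qname-minimization relaxed;
              };

              # unbound.conf
              server:
                  qname-minimisation: yes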

              About 51% of DNS resolvers do QNAME minimization now: https://dnsthought.nlnetlabs.nl/#qnamemin

              1. 3

                +1. I am in the middle of switching from BIND to PowerDNS, so I thought I’d chime in on how PowerDNS is doing.

                For PowerDNS, both QNAME minimization and aggressive caching are enabled by default now.

                1. 2

                  These are nice-to-have features, but they do not function as replacements for encryption.

                  1. 4

                    What is the encryption for? In general, encryption is there to preserve two properties: confidentiality and integrity. The integrity is already handled by DNSSEC and has the nice property that the response is the same for everyone (more on this later).

                    The confidentiality matters only when it leaks secret information. The fact that some ISP’s user has looked for a specific domain name may well leak private information (e.g. have they been looking up a site that shares information critical of the government) that can have serious real-world consequences. The fact that some ISP’s user has looked up a domain in Poland or a domain with a .com ending is not, I would suggest, leaking any information that people would care about being leaked. If my anonymity set is all of my DNS server’s users and the only information that leaks is that I’ve looked up some .com domain, I fundamentally don’t care.

                    The desire to have the same response for everyone is driven by the fact that, to scale up to the performance, the root DNS resolvers that I know about (operated by VeriSign) pre-prepare packets in memory, update the destination addresses and the checksums, and then send the DMA request. They get phenomenal throughput and latency from this approach.

                    My slight worry about this argument for confidentiality comes from the fact that 97% of root DNS responses return NXDOMAIN. They are the result of typos in the domain name. For example, if you omit the .com by accident and type example instead of example.com, then the root DNS will be queried with the second-level domain name, example. In this case, any passive observer of the traffic knows that you’ve typed example. That’s probably easy to work around in the querying servers by using encrypted DNS queries for any TLD that they haven’t seen before, but that only helps performance for the 3%. The TLDs typically have quite long TTLs, but when one expires you take a TLD completely off the Internet for users of the caching resolver if there’s a denial of service on the root servers.

                    1. 4

                      The integrity is already handled by DNSSEC and has the nice property that the response is the same for everyone (more on this later).

                      DNSSEC suffers from complexity, architectural fragility, and extremely low adoption. In fact, DNSSEC’s failings are one of the biggest arguments in favour of DoH and DoT.

                      1. 1

                        Like @david_chisnall said, they are mostly trying to solve different things. DNSSEC gives the large DNS providers (read: the roots) the ability to scale; DoH and DoT don’t. You are single-sourcing your DNS to one exact entity, just like with an ISP, except instead of your ISP knowing what you are looking up, it’s Google and Cloudflare (or whoever your DoH/DoT provider is, but those are the defaults). Who wouldn’t want to be Google/Cloudflare? They say they are not doing bad things with your information, but we don’t actually know that; we just have to trust them.

                        So you are keeping your ISP from seeing your lookups, which you could have done with a VPN/tunnel just as easily, without any DoH or DoT. DoH and DoT don’t fix the problem of a nation state/ISP seeing where you are going, as SNI headers are still not encrypted (last I checked). Of course that is only an HTTP problem, and many such problems exist. Information leakage on the internet happens ALL over the place, and while DoH and DoT mostly help, they are not magic bullets.

                        DoH and DoT don’t solve the problem of your DNS provider lying to you. DNSSEC can, in theory (provided it were fully deployed, which it clearly isn’t).

                        Before DoH and DoT, your OS was generally 100% in charge of where your DNS requests went, but now it’s a guessing game as to who is answering a given DNS request. Web browsers have stolen control of that too, further proving that web browsers are operating systems in disguise.

                        The only reason DoH/DoT has gotten such adoption is that web browsers forced it on us.

                        I’m not against DoH or DoT, but they aren’t magical solutions to the problem(s); then again, DNSSEC obviously isn’t either.

                1. 1

                  The UDOO BOLT V8 has a Ryzen V1605B 4C/8T with Vega 8 graphics. The V3 has a 2C/4T Ryzen V1202B with Vega 3 graphics.

                  BOLT V8: https://shop.udoo.org/udoo-bolt-v8.html

                  BOLT V3: https://shop.udoo.org/udoo-bolt-v3.html

                  1. 27

                    I recommend everyone switch to Firefox. Google is only going to get creepier. I finally made the switch to Firefox recently and couldn’t be much happier. I only wish Firefox had an SSB (site-specific browser) feature like Chrome does, as I still have “apps” based on Chrome.

                    1. 7

                      Yep, that or Brave are great options. My only gripe with Firefox is how slow the Google suite is (I’m forced to use it for work). Perhaps it’s just my experience, but Google Meet / Sheets / etc. are noticeably worse on Firefox.

                      I doubt this is a fault with Firefox, though.

                      1. 9

                        You should probably read lobste.rs’s experience with Brave before recommending it: https://github.com/lobsters/lobsters/issues/761, https://github.com/lobsters/lobsters-ansible/issues/45

                        I wouldn’t be comfortable using Brave. I would prefer Firefox or, if absolutely necessary, something like Ungoogled Chromium.

                        1. 2

                          I’m not super psyched about Brave.

                          I turned to Epichrome to make SSBs now that I’m not using Chrome, and I’m not sure I’m happy using it, because it depends on Brave.

                          1. 1

                            If you’re on a Mac, I was always very happy with Fluid to make SSBs.

                          2. 1

                            Thanks for sharing… I didn’t know about either of these; I’ll pass it along to people I know as well. That’s a little scummy…

                        2. 4

                          Or the Chromium-based Brave, which has better privacy by default than Firefox, and you get to use all the Chrome extensions. It has a built-in adblocker, and supports (unlike Chrome/Firefox) peer-to-peer encrypted sync of passwords, bookmarks, etc. without involving server-side storage.

                          1. 18

                            So Brave themselves can monetise their users? They haven’t been transparent in the past. Firefox has all of Chrome’s features and a lower memory footprint.

                            1. 11

                              Hell, this actually affected Lobsters too.

                              1. 6

                                Wow, I had a low opinion of them, but that’s much worse than I thought. Spoofing other browsers’ user agents explicitly to avoid detection? Scraping the names and photos of site creators to make it look like you’re paying the site creators directly, and pocketing the money? Modifying the content of the website to add affiliate codes to URLs?

                                So Brave is literally just a hugely widespread scam then.

                          2. 2

                            +1. Other reasons to switch that popped up recently: uBlock Origin works best on Firefox (https://github.com/gorhill/uBlock/wiki/uBlock-Origin-works-best-on-Firefox), and Google removed the ClearURLs add-on from its store (https://news.ycombinator.com/item?id=26564638).

                            1. 2

                              I use Chromium for work stuff, which is all Google suite based, and Firefox for everything else. On the Mac, I use Safari in place of Firefox, but same idea.

                              1. 2

                                I use Chromium for work stuff, which is all Google suite based

                                Out of curiosity, why? Most of my work docs are in Google Docs / Sheets, and they seem to work nicely in Firefox.

                                1. 3

                                  I do the same, more for compartmentalization reasons than anything else. Stuff works fine in Firefox, but it’s nice to keep them separate in a really clear visual way.

                                  I started doing it before Firefox added containerized tabs, though, and just kept it out of habit. But I can’t ditch Chromium entirely yet because Firefox won’t play audio without PulseAudio.

                                  1. 1

                                    Compartmentalization. I block as many Google things in the browser as I can, because life is too short. I also use Chromium-based SSBs for sites that I do not trust but still visit (Facebook, mostly).

                                  2. 1

                                    Do you use something like 1Password to manage passwords across browsers? What about bookmarks, etc.?

                                      1. 2

                                        Yep to 1Password; and for bookmarks, I … don’t really keep them? I save stuff to Pinboard that’s interesting, and I keep bookmarks in my Google account for work.

                                    1. 7

                                      We reinvent the error handling specification in Go+. We call them ErrWrap expressions:

                                      expr! // panic if err
                                      expr? // return if err
                                      expr?:defval // use defval if err
                                      

                                      Compared to corresponding Go code, It is clear and more readable.

                                      Facts not in evidence.

                                      1. 2

                                        I think this way is more expressive for the developer. Removes boilerplate and reduces verbosity, which is more useful for scripting type applications like the ones Go+ is intended for.
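                                        For what it’s worth, here is roughly the plain Go that a single expr? expands to (a sketch; doWork and caller are hypothetical names):

                                        package main

                                        import (
                                        	"errors"
                                        	"fmt"
                                        )

                                        // doWork is a stand-in for any operation that can fail.
                                        func doWork() (int, error) {
                                        	return 0, errors.New("boom")
                                        }

                                        // In Go+ the body below would shrink to: v := doWork()?
                                        func caller() (int, error) {
                                        	v, err := doWork() // the boilerplate that `?` removes
                                        	if err != nil {
                                        		return 0, err
                                        	}
                                        	return v, nil
                                        }

                                        func main() {
                                        	if _, err := caller(); err != nil {
                                        		fmt.Println("error:", err)
                                        	}
                                        }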

                                        1. 1

                                          This is probably a matter of taste. Why do you prefer the standard Go way of handling errors?

                                          1. 8

                                            This is probably a matter of taste.

                                            Exactly my point! :) The claim doesn’t hold.

                                            1. 2

                                              Don’t think the OP said they prefer the standard, just that the examples provided didn’t really support the claim that It is clear and more readable.

                                              Personally I think the more verbose form in Go is way more clear and readable, even if occasionally it feels a tad annoying to type over and over as the person writing it.

                                          1. 18

                                            Fun article! I think a lot of people missed the very last paragraph though, where the author says:

                                            Collectively, the software industry simply has no idea how to hire software developers. Factorio is probably the best technical interview we have right now, and that’s embarrassing.

                                            1. 6

                                              We certainly can’t switch to using Factorio as an interviewing method - you might as well just give a candidate a take-home assignment.

                                              … and everyone is discussing how they’d react to Factorio used as an interviewing method. 😔

                                              1. 3

                                                I can’t blame people for noping out before the author got to the point, though.

                                              2. 2

                                                Collectively, the software industry simply has no idea how to hire software developers.

                                                I’ve been exposed to bad interviews and good ones. I don’t think the author sufficiently justifies this point.

                                                Factorio is probably the best technical interview we have right now

                                                Again, insufficient justification is given for this extraordinary claim. The whole article hand waves about how there’s similarities between code and playing Factorio, like for example you have to fix bugs in Factorio (“woa bugs, we have those in code too! :o”), but the article’s conclusion does not follow from its body.

                                                1. 1

                                                  If Factorio were multiplayer, and the other players just randomly rewrote the rules of the game for no well-explained reason, it would be more appropriate for evaluating how good a person is at software development.

                                                  1. 2

                                                    It is still too narrow an environment to properly model the complexity that you can encounter while developing software. For example, the bugs that arise in Factorio come from, essentially, a mismatch between the supply and capacity of the flow of materials. Its “complexity” arises from simply adding more of these pipelines. Software is usually not so simple.

                                              1. 8

                                                The Go code looks to be overly complicated.

                                                One small improvement would be to use Go 1.16’s embed directive:

                                                package main

                                                import (
                                                	"embed"
                                                	"io/fs"
                                                	"net/http"
                                                )

                                                //go:embed static/*
                                                var staticFiles embed.FS

                                                var staticHandler = func() http.Handler {
                                                	fSys, err := fs.Sub(staticFiles, "static")
                                                	if err != nil {
                                                		panic(err) // only fails if static/ was not embedded
                                                	}
                                                	return http.FileServer(http.FS(fSys))
                                                }()
                                                

                                                In this example staticHandler is an http.Handler that serves everything inside the ./static directory, relative to the source file. The contents of the folder are embedded into the binary at compile time. Being a plain http.Handler, it works anywhere the standard library expects one.
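                                                Wiring it up is then a one-liner (completing the sketch above; the port choice is arbitrary):

                                                func main() {
                                                	panic(http.ListenAndServe(":8080", staticHandler))
                                                }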

                                                1. 2

                                                  The contents of the folder are embedded into the binary at compile time.

                                                  That violates the constraints the author states for the program though? You are supposed to be able to serve an arbitrary path specified at program startup.

                                                  1. 3

                                                    I based my comment off the first line (in bold) of “The idea” section:

                                                    why aren’t there any programs that I can download that serve literally just one file

                                                    They also later state that a benefit of using Go is that it “Produce[s] static binaries”. I interpreted this to mean a single, dependency-free, binary is desirable for easy deployment.

                                                1. 2

                                                  Interesting idea. A few thoughts:

                                                  1. The encryption key could be generated by the first device to be added to this mesh of devices. When a new device is added, any existing device will perform a new share split and the user has to copy these share tokens to every existing device as well as the new one. The same has to happen every time a device is removed. This could be cumbersome.

                                                  2. Each share token is locally encrypted with a master password that is fed through a KDF. Changing this password requires decrypting and re-encrypting each individual share on every device. This is also cumbersome.

                                                  3. It’s true that if k out of n shares are needed to reconstruct the secret, then losing access to n-k shares does not result in the data being lost. However if n is small or if k is badly chosen, the user may lose access to their data anyway (see the sketch after this list).

                                                    When using a central server, or a set of distributed and replicated copies of a single payload, every copy of the data or the master password has to be lost in order to lose the data. In the case of Shamir’s Password Store, a user could feasibly remember the password but still lose data if they lose a few devices.

                                                    Perhaps a paper backup of the encryption key can be made, but it could be argued that this defeats the point of the protocol. Instead maybe k shares can be generated and stored in k different places, as a backup, but this is cumbersome.

                                                  4. The problems outlined in the Local password managers section don’t make much sense to me. A password manager that stores encrypted data on an untrusted server can use a derived encryption key to secure it. You can easily memorise a high-entropy password by selecting a sequence of random words from a dictionary. You can feed a low-entropy password to a good password KDF like Argon2id in order to make a brute-force attack infeasible. In essence, the key to decrypt the data does not have to be stored anywhere except for in your brain.
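                                                  To make point 3 concrete, here is a minimal k-of-n sketch (assuming HashiCorp’s shamir package; the 2-of-3 parameters are arbitrary):

                                                  package main

                                                  import (
                                                  	"fmt"

                                                  	"github.com/hashicorp/vault/shamir"
                                                  )

                                                  func main() {
                                                  	secret := []byte("master encryption key")

                                                  	// Split into n=3 shares, any k=2 of which reconstruct the secret.
                                                  	shares, err := shamir.Split(secret, 3, 2)
                                                  	if err != nil {
                                                  		panic(err)
                                                  	}

                                                  	// Losing n-k = 1 share is fine: any two suffice.
                                                  	recovered, err := shamir.Combine(shares[:2])
                                                  	if err != nil {
                                                  		panic(err)
                                                  	}
                                                  	fmt.Printf("recovered: %s\n", recovered)
                                                  }

                                                  Lose two of the three devices, though, and no memorised master password will bring the data back, which is exactly point 3’s worry.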

                                                  1. 4

                                                    It’s much easier to convince people with a PoC than with a paper.

                                                    1. 7

                                                      Some would even say PoC||GTFO

                                                      1. 1

                                                        That’s really not how it works for most real-world vulnerabilities today.

                                                      2. 2

                                                        A method of computing a factorisation more efficiently than current state of the art methods doesn’t imply that actually computing such a factorisation is cheap.

                                                        1. 2

                                                          Could be dangerous to submit a PoC for this kind of thing

                                                          1. 2

                                                            Probably a couple of days less dangerous than submitting the paper, at best. There have been revisions of this paper floating around for a while, as far as I can tell. If it posed any real threat to RSA, we’d have seen something by now.

                                                            That said, I hope I’m wrong. A world with broken RSA is a more interesting world to live in.

                                                          2. 1

                                                            There is some pseudocode in the paper. I don’t know if that counts.

                                                            1. 2

                                                              PoC

                                                              Pseudocode doesn’t count as a proof of concept, for me. A proof would be an example with code you can execute to verify it, along with some measured times in this case. It should be easy enough to publish some Python code that can do this.

                                                          1. 2

                                                            I like the idea, except it doesn’t work for me on Linux. It starts in WINE and listens on 8080, but never returns any replies.

                                                            1. 3

                                                              If your system is configured to use binfmt_misc then you need to run this command:

                                                              sudo sh -c "echo ':APE:M::MZqFpD::/bin/sh:' >/proc/sys/fs/binfmt_misc/register"
                                                              
                                                              1. 1

                                                                Do you execute it from the command line?

                                                                1. 1

                                                                  You probably have a binary file loader that looks for PE executable headers and launches WINE in order to run them. Maybe WINE doesn’t like the binary for some reason?

                                                                1. 9
                                                                    • Build a Ryzen 7 3700X system. This might be my hundredth build, but it’s my first mini-ITX one, so I’m excited!
                                                                    • Continue learning Emacs
                                                                    • Rice Arch :-D (I’m planning bspwm/rofi)
                                                                    • Research radios to put together my first HF portable rig
                                                                  1. 3

                                                                    I also built a Mini-ITX Ryzen 3700X system recently :) OptimumTech is a great resource for Mini-ITX builds in general.

                                                                  1. 2

                                                                      This is great to see, and it should also mitigate TLS client fingerprinting attacks. However, ESNI is blocked by nation-state censors like China, so I expect the same to happen to ECH.

                                                                    1. 2

                                                                      but this has been tested against nDPI and a commercial DPI engine developed by Palo Alto Networks, both of which detected TOR traffic encapsulated by Rosen as ordinary HTTPS

                                                                      That might well just be a momentary observation though. It seems likely that such engines just need a small update to recognize TOR/Rosen.

                                                                      1. 3

                                                                          The true test will be if/when censors take note. The main fingerprints that could pinpoint a Rosen client are its strange timing pattern and atypical bandwidth characteristics. These can be tweaked if needed.

                                                                          This is how researchers managed to detect meek, for example: it polls for data immediately, then backs the delay interval off by 1.5x each time nothing arrives. Researchers fed this data to a machine learning model. However, from what I found, it doesn’t look like real-world censors today use techniques this advanced to detect circumvention tools.
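                                                                          The polling pattern described is roughly this (a toy sketch, not meek’s actual code; the initial and maximum intervals are made-up numbers):

                                                                          package main

                                                                          import (
                                                                          	"math/rand"
                                                                          	"time"
                                                                          )

                                                                          // poll stands in for one HTTPS polling round trip; it reports
                                                                          // whether the server had data queued for us.
                                                                          func poll() bool { return rand.Intn(4) == 0 }

                                                                          func main() {
                                                                          	const initial = 100 * time.Millisecond
                                                                          	const max = 5 * time.Second

                                                                          	delay := initial
                                                                          	for i := 0; i < 20; i++ {
                                                                          		if poll() {
                                                                          			delay = initial // reset as soon as there is traffic
                                                                          		} else {
                                                                          			// Back off by 1.5x while idle; a fixed multiplier like
                                                                          			// this is exactly the regularity a classifier can learn.
                                                                          			delay = time.Duration(float64(delay) * 1.5)
                                                                          			if delay > max {
                                                                          				delay = max
                                                                          			}
                                                                          		}
                                                                          		time.Sleep(delay)
                                                                          	}
                                                                          }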

                                                                      1. 8

                                                                        This is a project that I’ve been working on as part of my ongoing master’s thesis. It implements a modular, censorship-resistant proxy tunnel that encapsulates arbitrary application traffic inside some cover protocol (currently only HTTPS).

                                                                        I’ve tested it against nDPI and a commercial DPI engine developed by Palo Alto Networks. Both detected TOR traffic using Rosen as ordinary HTTPS :)

                                                                        If you can test this out and let me know your experiences, especially if you are behind a repressive firewall that implements censorship, I would really appreciate it.

                                                                        1. 3

                                                                          It looks better if you write Tor, not TOR. https://tor.void.gr/docs/faq.html.en#WhyCalledTor

                                                                        1. 5

                                                                            My big hope for HTTP push was to avoid having to bundle JS. If the server knows which JS files depend on which other JS files, they can be served as individual files without a performance hit, and you get the benefit of avoiding a build step, easier dynamic loading of code groups, and more pleasant browser-side debugging.

                                                                          I’m also disappointed that it never really achieved adoption.

                                                                          1. 2

                                                                            My big hope for HTTP push was to avoid having to bundle JS

                                                                            HTTP/2 still does request and response multiplexing and stream prioritization, though (along with header compression and the binary protocol). If a web page includes a bunch of separate resources, I believe the server can utilize the same connection, thus still increasing performance over HTTP/1.x. I haven’t messed with front end sites in a while, but I’d be curious to see some benchmarks around HTTP/1.x with a big blob of JS vs. HTTP/2 with separate JS files and with and without push.

                                                                            1. 2

                                                                                Ah yes, I should have said that I was thinking of JS files that refer to other JS files. If one file has to come down to the browser before you can request the next, and that one arrives only for the browser to discover it needs another, etc., I would expect the round trips to add up quite quickly.
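                                                                                That chained case is what server push targeted. For what it’s worth, Go’s standard library exposed it via http.Pusher (a sketch; the file names are hypothetical):

                                                                                package main

                                                                                import "net/http"

                                                                                func handler(w http.ResponseWriter, r *http.Request) {
                                                                                	// Over HTTP/2, the ResponseWriter may implement http.Pusher,
                                                                                	// letting the server push known dependencies before the
                                                                                	// browser discovers them one round trip at a time.
                                                                                	if p, ok := w.(http.Pusher); ok {
                                                                                		p.Push("/static/app.js", nil)  // app.js imports util.js...
                                                                                		p.Push("/static/util.js", nil) // ...so push both up front
                                                                                	}
                                                                                	http.ServeFile(w, r, "index.html")
                                                                                }

                                                                                func main() {
                                                                                	http.HandleFunc("/", handler)
                                                                                	// Push requires HTTP/2, hence TLS; the cert paths are placeholders.
                                                                                	panic(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
                                                                                }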

                                                                              1. 2

                                                                                Consider when/whether it adds up:

                                                                                • If you have just one, it doesn’t.

                                                                                • If you have several that are used for some pages and others not, it doesn’t, because the next file might be cached from having been used by another page.

                                                                                • It does add up if you have several that are always used together, and you may expect that if one is cached, all are cached, and vice versa. IMO this is a stupid situation to get into. It makes some sense for images (which don’t chain to each other), IMO zero for javascript or CSS assets. If they’re always used together, pack them into one asset to begin with.

                                                                                Good performance is first and foremost a question of designing and implementing such that you don’t create problems that HTTP can make worse.

                                                                                1. 1

                                                                                  Webpack or something similar would solve this issue, as long as all the JS is served from your domain.

                                                                            1. 3

                                                                              Would be interesting to see performance compared while taking into account temperatures. Both Intel and AMD chips are heavily dependent on temperature for reaching and maintaining boost clocks.

                                                                              This article compares the M1 against previous generation MacBooks, which optimise for compactness and quietness over performance. An Intel chip in a properly cooled system would perform better. They also include Ryzen desktop CPUs into the mix, which had to have been cooled using conventional desktop parts.

                                                                              These numbers are useful for people choosing between MacBooks but in terms of actual performance they could be misleading.

                                                                              1. 3

                                                                                As far as I can tell, the article does not discuss the most important component here: the secondary storage itself. Some active benchmarking is needed here before any conclusions can be drawn.

                                                                                The new M1 apparently has an SSD that is nearly twice as fast as a previous gen Mac, but its numbers are typical for an NVMe drive. I rather suspect that normalizing for storage performance would render the M1 CPU itself uninteresting.

                                                                                1. 2

                                                                                  The new M1 apparently has an SSD that is nearly twice as fast as a previous gen Mac

                                                                                  only on the MacBook Air, which previously had an SSD that was 2x slower than what all other Macs had

                                                                                  1. 1

                                                                                    Fair enough. Point stands though.