1. 10

    Without further context, this just looks like RMS is being insufficiently socially aware to realize that being pedantic about rape vs. statutory rape is inappropriate in this context; wrong time and place to go into that conversation.

    But there’s not actually enough context, so this doesn’t change my opinion from “RMS is extremely pedantic and principled and sometimes that’s a problem”.

    ETA: Also, not impressed with this bit:

    and then he says that an enslaved child could, somehow, be “entirely willing”

    No, the email fragment says that she may have presented as willing. Regardless of what you think of RMS, Minsky, etc. those are extremely different statements, and it’s dangerous to conflate them. That actually makes it harder to solve the problem of human trafficking.

    1. 5

      Without further context, this just looks like RMS is being insufficiently socially aware to realize that being pedantic about rape vs. statutory rape is inappropriate in this context; wrong time and place to go into that conversation.

      I think our subculture should get over this idea that we have special privileges when it comes to social interactions. If we fuck up in code, we (usually) take the blame and move on. Why do we (some of/most of us/well, me at least sometimes and RMS apparently all the time) tend to just shrug when we fuck up socially?

      1. 6

        I guess I should be more clear: RMS being persistently socially awkward is actually plenty of reason for him not to be a figurehead. I’m mostly objecting to this article spinning that email fragment into something like “RMS is supporting pedos”.

        Reliance on heroes tends to be problematic over time.

        1.  

          The critical mistake that rms made, that you make here too, is a failure to employ code-switching when discussing matters that are inescapably emotional.

          Whoops, me too! I meant to say, that issue is so hot that you can either speak against pedos or say nothing–it ain’t necessarily logical but every fool knows that you can’t post stuff like he did on his homepage and expect to remain the leader of an international organization. Like duh. Case closed. Can’t believe it didn’t happen EARLIER.

          Seriously, if you’re* the sort of person who never lets go of precision-in-language long enough to say anything like “nope, that’s just 100% wrong and I don’t even need to explain why”, then you will inevitably get tripped up down the road by a mob of people who do. Don’t get Stallman’d, friends! Feelings MATTER.

          * I mean you the reader, not necessarily you ‘saturn’. :)

          1.  

            Yeah, it’s a topic I won’t actually discuss on the internet, in general. Nothing good can come of it. It’s probably dangerous to even say “hey, that guy over there has an unpopular opinion on this and I’m going to say literally anything other than a condemnation of him”, but I decided to cross that line this week.

            (Also, for posterity: Since this article came out, I have seen 1) more context, and 2) a lot of history of how RMS has been an utter creep to women. Bear in mind that my previous comments were not made with that information available.)

      2. 3

        Here is some additional context for you about his thoughts on pedophilia. I didn’t know this until I searched on stallman.org. It’s there in plain sight.

          1. 2

            I think it’s OK for people to disagree on this. I might not agree with them, but I’m not going to call for someone’s ouster just because they have a different belief on age of consent. Because look, it sounds like he’s talking about teenagers, which is hardly a thing to bring out the pitchforks over. US states have varying laws in the 15 to 18 range.

            1. 2

              Why do you think he is talking about teenagers?

              1. 4

                The phrase “parents who are horrified by the idea that their little baby is maturing” could have been lifted out of any number of discussions I’ve heard of parents who are uncomfortable with the idea that their teenager is experimenting with sex.

                I don’t think there’s enough information there to damn him.

                1. 2

                  Support for teenagers having sex with adults is damning still though… why are we debating this in 2019?

                  1. 6

                    19 is “teenager”, but that’s okay? 17 years and 364 days is underage, but 18 years and 1 day is age of consent. Perhaps it was different for you, but when I turned 18 there was no “magic moment” where I was somehow more wise or capable. In quite a few jurisdictions the age of consent is 17 or 16.

                    Besides, what is an “adult”? 18? 20? 25? 30?

                    The entire thing is tricky. There are no easy answers and there is an uncomfortable grey area.

                    Why do you think he is talking about teenagers?

                    Why do you think he is not talking about teenagers? The thing that disturbs me about this entire affair is that the author takes everything rms said in the most bad faith way possible, immediately jumping to all sorts of conclusions about what he meant, even though that’s not very clear from what he actually said. The claim that rms made “excuses about rape, assault, and child sex trafficking” is a very long stretch, unless you are trying to find that in his comments.

                    1. 5

                      Also, I should point out this more recent entry: https://stallman.org/archives/2019-jul-oct.html#14_September_2019_(Sex_between_an_adult_and_a_child_is_wrong)

                      « Many years ago I posted that I could not see anything wrong about sex between an adult and a child, if the child accepted it.

                      Through personal conversations in recent years, I’ve learned to understand how sex with a child can harm per psychologically. This changed my mind about the matter: I think adults should not do that. I am grateful for the conversations that enabled me to understand why. »

                      Yours was 13 years ago, and this one is this past week. People can grow and learn and change.

                      1.  

                        A half-hearted “I guess I was wrong” that refutes four or five previous comments means very little to me without some type of analysis into why you were wrong or what changed your position.

                        A half-hearted “I guess I was wrong” as you are receiving justified criticism? I don’t see growth and change and learning, I see deflection and minimization.

                        A half-hearted “I guess I was wrong” as you are receiving justified criticism, followed up by a “I didn’t really do anything wrong but I’m going to fall on my sword (how noble)”? Suuuuuure, you’ve changed!

          2.  

            realize that being pedantic about rape vs. statutory rape is inappropriate in this context

            Which context? The source of all this mess is a private conversation of his that was published without his consent. Can’t we share controversial opinions privately anymore?

            1.  

              Semi-public, really; it’s a university mailing list. The problem isn’t actually public/private here, it’s the threading context and timing, as far as I can tell from the jumbled mess of emails that Vice got ahold of. (I also don’t totally trust Vice, because they’ve been flagrantly misquoting him.)

              I actually feel bad for the guy. Minsky is a colleague of his, and people are using this phrase “sexual assault”, which means different things to different people. RMS has already denounced Minsky, but wants to clear the record; he then goes about that in a very Stallman-ish way, talking about what is and isn’t rape vs. statutory rape. That last bit is the biggest problem, I think; it might have been OK if he’d just said something like “while Minsky deserves to go to prison, let’s use the right term for what he did out of respect for the difference”. (I dunno, making stuff up on the spot here, but I think that’s an accurate description of what he meant.)

              1.  

                Do you realize that Minsky died three years ago? It makes no sense to talk about him in the present tense.

                I have read the part of the thread published on Vice, and there is nothing wrong with Stallman’s words. The misquoting by Vice and the other media is unbelievable. The only possible critique is that “it was not the right moment to talk about that”, but then again, so what?

                1.  

                  I did not know that! Not sure how I missed it.

          1. 8

            I love Clojure, but we need to have a talk. Specifically, the use of nil-punning is pretty terrible. (if (seq xs) ...) is not actually a good way to spell (if (not (empty? xs)) ...) but both the language designers and the community embrace this conflation of null and the empty sequence. There are other issues around nil and collections as well.
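
            For readers who don’t write Clojure, here is a rough Python analogue of the same conflation (my own sketch, not a Clojure example): truthiness lumps “no value” and “empty” together, and telling them apart takes an explicit check.

              no_orders = []        # the lookup worked; there just are no orders
              failed_lookup = None  # the lookup itself failed

              for xs in (no_orders, failed_lookup):
                  if not xs:
                      # This branch fires for BOTH values: "empty" and "absent" are
                      # silently conflated, roughly what (if (seq xs) ...) invites.
                      print("treated the same:", xs)

              # Telling them apart requires asking explicitly:
              print(no_orders is None, failed_lookup is None)  # False True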

            Kotlin is my current favorite, but it’s not OK that the best way to write Kotlin currently involves a behemoth of an editor that takes like 2 GB of memory. I can write Clojure in Emacs, which only takes about 80 MB.

            1. 12

              This just seems like Total Vendor Lock-in as a Service.

              1.  

                It’s just for your backend. I think of it as just a better Firebase/Lambda/GCF/AF.

              1. 4

                And some people just complained about sr.ht blog posts being just ads.

                1. 3

                  Forbes really, aggressively, does not want people reading their website. My god

                  1. 3

                    I’ll have to take your word for it. I don’t have any scripts enabled for that page, and it looks pleasant enough. :-)

                1. 2

                  I find this really odd! There are two approaches I have used at work, both of which rely on having Jira ticket numbers that I can use.

                  The PR method

                  This is closer to the author’s approach, in that it relies on Github to answer questions. I don’t like it as much.

                  • Every change involves a PR, and the PR’s subject line contains a ticket ID (and ideally the individual commits include the ticket ID in their summary, e.g. “PROJ-123: Do a thing”)
                  • The service’s version number includes the git sha1
                  • If you know what version is in production, you can ask Github what PRs are ancestors of that commit
                  The changelog method

                  This one is great, but you have to have coworkers who are on board with it.

                  • Every PR involves adding a line to the top of the CHANGELOG.md file in the repo, including the ticket number you’re working on.
                    • Changelog entries are often identical to commit messages, but not always.
                    • You can also put other things in there, like “don’t deploy this version until OtherService has changed their DB”
                  • Every time you cut a release to promote to staging and production, you make a heading in the CHANGELOG.md on top of all those more recent changes, including that release’s version string
                  • When you do the deploy, you just copy the most recent chunk (or chunks) of the changelog, maybe tweak it a little to remove non-service-impacting entries. Very easy to write meaningful deploy tickets that way!
                    • If you use something like Jira, now there’s a “mention” link between the deploy ticket and the work tickets, so you can see whether and when something was deployed
                    • After the deploy, annotate the version header with the deploy ticket’s ID and the date of deploy

                  It’s a tiny bit more work, but you have vastly better insight into what’s where and when. There’s probably a more automated approach too, but I haven’t found it worth it to improve on it.
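
                  If you ever do want a first pass at automating it, a minimal sketch could look like the following (my own, assuming the ticket IDs already live in the commit summaries and that origin/production points at the last released state; both are assumptions, not your setup):

                    import subprocess

                    def draft_changelog(since_ref="origin/production"):
                        """Draft CHANGELOG.md bullet lines from the commit summaries since `since_ref`.

                        Assumes summaries already start with a ticket ID ("PROJ-123: Do a thing");
                        the output still needs a human pass before it goes into the file.
                        """
                        summaries = subprocess.run(
                            ["git", "log", "--no-merges", "--pretty=%s", f"{since_ref}..HEAD"],
                            check=True, capture_output=True, text=True,
                        ).stdout.splitlines()
                        return [f"- {line}" for line in summaries]

                    print("\n".join(draft_changelog()))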

                  1. 1

                    I have also used the PR approach. (I like to have my branches include my ticket numbers, as it helps me hunt them down if I get distracted.)

                    However, this script has the benefit of telling you “each commit’s whereabouts” (staging, prod, development) without you needing to manually ask Github or enforce PR naming behavior. So it’s similar to the PR approach, but leverages APIs to determine the head commit of each environment and then checks whether your commit is an ancestor of that commit.
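
                    To make that concrete, here is a rough local-only sketch of the ancestry check (the environment branch names here are made up; the real script presumably asks each environment’s API or version endpoint for its head commit instead):

                      import subprocess

                      # Hypothetical mapping; substitute however you learn each environment's head.
                      ENVIRONMENT_HEADS = {
                          "production": "origin/production",
                          "staging": "origin/staging",
                          "development": "origin/develop",
                      }

                      def whereabouts(commit):
                          """Return the environments whose deployed head already contains `commit`."""
                          deployed = []
                          for name, head in ENVIRONMENT_HEADS.items():
                              # exit code 0 means `commit` is an ancestor of `head`
                              result = subprocess.run(["git", "merge-base", "--is-ancestor", commit, head])
                              if result.returncode == 0:
                                  deployed.append(name)
                          return deployed

                      print(whereabouts("abc1234"))  # placeholder sha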

                    I think your changelog idea is great, but would be wary of the extra manual labor. Have you been able to automate the gathering of the messages (even if they require a bit of editing)?

                  1. 6

                    Some quotable stuff in here:

                    • « Hospital staff are like the Internet of Bacteria. »
                    • « AWS is bananas, and AWS permission bug exploits are banana fungus. »
                    1. 42

                      This is fear and speculation not based on facts.

                      Cloudflare’s DoH is compliant with GDPR, because there’s no PII sent or stored, apart from the technically-necessary IP of the TCP connection, and Cloudflare doesn’t even retain the IP address. It’s clearly stated in the privacy policy, which is very strict, and borderline paranoid. And compliance with the policy is audited externally by KPMG.

                      The author has written the entire article, including a cutesy comic, and hasn’t even checked the one fact it is about?

                      Because the resolver doesn’t store personal info, and doesn’t store any non-aggregated logs beyond 24h, it’s pretty safe from being subpoenaed to hand the (non)data over.

                      The fear of U.S. government going as far as mandating implementation of a secret backdoor is a real one, but if it comes to this, we’re all fucked, because Firefox itself is under U.S-based Mozilla org/corp., and so is Google and Apple.

                      It would be better if the alternative was system-level DoH that uses a variety of trusted providers, but currently there’s no such thing. The actual alternative is sending unencrypted DNS packets, which we know are commonly logged and manipulated. The alternative is giving your DNS traffic to your ISP, who knows your real identity. You’ve probably clicked “Agree” on your ISP’s privacy policy that includes “sharing information with selected partners and affiliates”.

                      1. 14

                        The fear of U.S. government going as far as mandating implementation of a secret backdoor is a real one, but if it comes to this, we’re all fucked, because Firefox itself is under U.S-based Mozilla org/corp., and so is Google and Apple.

                        I don’t download Firefox from Mozilla; I get it from Debian, which is not a US-based org/corp. They have been good about stripping out the privacy-hostile gunk in browsers so far, and hopefully they will continue with this when DoH hits the versions they ship.

                        1. 2

                          I get it from Debian, which is not a US-based org/corp.

                          Software in the Public Interest is a US-based 501c3. They own the Debian trademark, domain name, and other infrastructure. They are as much “Debian” as MozFo is Mozilla.

                          1. 13

                            SPI has no power over Debian Developers to force us to insert backdoors into Debian or anything similar. Also, the packaging process makes it very difficult for a DD to do so without other people noticing.

                            1. 1

                              They’re much more akin to MoFo.

                              1. 1

                                You’re right. Edited it.

                          2. 17

                            There is also the alternative of using DNSCrypt v2, which everyone seems to be ignoring

                            1. 13

                              I wonder why the planet wants to put everything over TCP, and then, put everything over HTTP, and then, put everything over JSON, and then, put HTTP over TLS, and then, put TLS over UDP into a merged QUIC, and then, put TLS over QUIC, and then …

                              DNSCrypt sounds a much simpler approach.

                              1. 7

                                Can whoever it was who voted incorrect please tell me why? Thanks

                              2. 16

                                Not really, I do not trust cloudflare or Google so I do not want to have DoH by default. This change literally makes your DNS requests dependent on one company.

                                On a different note, I do not believe my ISP in the Netherlands is allowed to share DNS data with third parties.

                                1. 9

                                  I don’t understand. How does this make your DNS requests dependent on one company? Even with the defaults, the standard TRR mode has failover to the system resolver. Conceptually, you can easily switch your DoH provider or even run and use your own (which is easy to do w/ dnsdist-1.4.0 for example). The choices are even in the Firefox preferences and don’t require tinkering w/ about:config.

                                  That being said, should Mozilla/Firefox prompt the user about these choices before enabling them quietly? Absolutely. But it should do these things for many other things too, such as your default search engine. Instead of disabling DoH, we should work on a better UX with these things.

                                  1. 1

                                    That’s a pretty selfish stance. Your Netherlands ISP doesn’t serve the entire world. But they do serve you, so fuck the actual billions of people using DNS outside of the EU?

                                    Every time this comes up I see people complaining about how US-centric these arguments for DoH are. But insecure DNS isn’t a US problem, it’s a whole world problem. EU people bitching about DoH / Cloudflare come off like billionaires wanting another tax cut to me. There are people in these comments that live with DNS domain blocking for country-wide censorship.

                                    1. 2

                                      People whose government is using such an ineffective censorship measure should feel lucky and protect the status quo. If your government is willing to deploy censorship, it’s a sure sign you cannot reason with that government. Better keep them unaware of their incompetence.

                                  2. 5

                                    This is fear and speculation not based on facts.

                                    No, the author is going on facts, plus the fact that they are a Swiss citizen. Swiss citizens, in fact, have way more protection against, and democratic control over, their own government than many Americans can ever dream of. It’s the country of secrecy and bank vaults after all.

                                    Cloudflare’s DoH is compliant with GDPR, because there’s no PII sent or stored, apart from the technically-necessary IP of the TCP connection, and Cloudflare doesn’t even retain the IP address. It’s clearly stated in the privacy policy, which is very strict, and borderline paranoid. And compliance with the policy is audited externally by KPMG.

                                    The GDPR is just the “lowest common denominator”. There is quite literally nothing preventing European countries (like Switzerland) from adding additional requirements on top of it, as many European countries have already done. The fact that CloudFlare is GDPR-compliant does not mean they are in compliance with all local laws as well.

                                    Because the resolver doesn’t store personal info, and doesn’t store any non-aggregated logs beyond 24h, it’s pretty safe from being subpoenaed to hand the (non)data over.

                                    This is something we have to blindly trust CloudFlare on. Your argument flows along similar lines as the one the co-founder of AirVPN made a couple of weeks ago. I debunked that thoroughly here back then. Besides: Cloudflare falls under US-jurisdiction, which means that the traffic might be intercepted even before it reaches their servers.

                                    The fear of U.S. government going as far as mandating implementation of a secret backdoor is a real one, but if it comes to this, we’re all fucked, because Firefox itself is under U.S-based Mozilla org/corp., and so is Google and Apple.

                                    Are we now? If such backdoors exist, they will probably not end up in the open-source versions of the browser, but they will show up in the binary distributions. For example, Slackware ships the entire source code including all the tools to build firefox from scratch, on their DVD-distribution. Debian has a system in which many builds are reproducible bit-for-bit. I think this greatly improves options and trust in case such a backdoor is found and we need to remove it. Although you would not have that much luck with any Apple, Google or Microsoft products.

                                    It would be better if the alternative was system-level DoH that uses a variety of trusted providers, but currently there’s no such thing. The actual alternative is sending unencrypted DNS packets, which we know are commonly logged and manipulated.

                                    Ever heard of DNSSEC? This makes the DNS-packets tamper-resistant to a very high degree.
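
                                    For anyone who wants to check this for themselves, here is a minimal sketch using the dnspython library: ask a validating resolver for a signed name and look at the AD (Authenticated Data) flag it sets. 9.9.9.9 is merely one example of a public validating resolver, and of course this trusts that resolver to do the validation unless you validate locally.

                                      import dns.flags
                                      import dns.message
                                      import dns.query

                                      # Send a query with the DNSSEC-OK bit set and inspect the AD flag,
                                      # which a validating resolver sets when the answer verified.
                                      query = dns.message.make_query("example.com", "A", want_dnssec=True)
                                      response = dns.query.udp(query, "9.9.9.9", timeout=5)

                                      print("DNSSEC-validated:", bool(response.flags & dns.flags.AD))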

                                    The alternative is giving your DNS traffic to your ISP, who knows your real identity. You’ve probably clicked “Agree” on your ISP’s privacy policy that includes “sharing information with selected partners and affiliates”.

                                    Well, I’ve read, and clicked “Agree” on my ISP’s privacy policy. However it did not contain a line similar to what you’ve just mentioned. It did however contain a line along the lines: “We will not share information with selected partners and affiliates beyond the minimum of what we need to provide you with your service, or when there exists a legal requirement to do so.” A surprisingly short list of what they share with whom and for which purposes follows shortly after that statement.

                                    Granted: this also means I’m paying about €5 more per month than average, just like more than a million customers that deliberately chose the same ISP.

                                    Which brings me to one of your other statements:

                                    The author has written the entire article, including cutesy comic, and hasn’t even checked the one fact it is about?

                                    The original author appears to be right, and I am sorry to say this, but it appears that the author is pretty well informed…

                                    I also do not like how you simply gloss over the subject of DNS caching: one ISP resolver requests the DNS records of a domain only once per TTL for all of its customers, and every “home gateway” requests them once per TTL for every user behind it, while DoH would have each and every single client request the records from Cloudflare individually per TTL. This makes individual clients way more identifiable than they would have been with traditional DNS, especially once combined with DNSSEC.

                                    However it certainly is not:

                                    fear and speculation not based on facts

                                    1. 2

                                      Instead of worrying about whether Cloudflare might violate the GDPR, a Swiss citizen can simply request they show they do comply, and refer it to the national data protection agency if they don’t.

                                    2. 4

                                      The fear of U.S. government going as far as mandating implementation of a secret backdoor is a real one, but if it comes to this, we’re all fucked, because Firefox itself is under U.S-based Mozilla org/corp., and so is Google and Apple.

                                      The US government can’t mandate Mozilla include a secret backdoor because Mozilla provides Open Source software, not a service. Mozilla could try, but anyone who noticed something unusual about a change to the code could undo the whole thing. It isn’t even necessary that the auditor understand precisely what’s been done: All the auditor needs to do is notice that Mozilla suddenly dropped some unusual code into the program and it’s all over for the secret backdoor.

                                      1. 3

                                        It’s not always easy to tell the difference between malicious code and honest mistakes.

                                        https://flak.tedunangst.com/post/warning-implicit-backdoor

                                        1. 1

                                          Both malice and incompetence would be interesting to anyone watching the Firefox codebase.

                                          Also, sudden, uncharacteristic incompetence would likely be taken as a sign of malice.

                                          1. 1

                                            I think tedu’s point is that it’s nuanced. It’s easy to make mistakes that can be abused. And things that weren’t bugs/exploitable can become bugs/exploitable through good intent (cleaning up compiler warnings).

                                        2. 1

                                          How do you know that the binary they ship matches the source code? I don’t think Mozilla is doing reproducible builds yet. https://bugzilla.mozilla.org/show_bug.cgi?id=885777

                                          Someone apparently did manage to get a reproducible build for Firefox on Linux, though: https://glandium.org/blog/?p=3923

                                        3. 6

                                          Well said. Centralizing all this data isn’t ideal, but it’s incremental improvement of an internet standard. And that’s the only way they improve.

                                          1. 2

                                            Cloudflare could easily start a foundation, give it one up to date copy of each type of server they use, all the source code, two well rounded engineers, a bunch of money, etc, then kick them loose. Make their mission statement “to protect the world from technological monopolies”. Let’s call it Groundflare. They could offer every service cloudflare does, but in a non-profit manner.

                                            Is that too much? Ok.. Thinking smaller: the EFF and my ISP could operate DoH services.

                                            Then, Mozilla could put a “round robin” feature in there that lets me tell Firefox to rotate between each service..

                                            1. 2

                                              Cloudflare has nearly 200 datacenters all over the world, with BGP-level routing to the nearest one. This requires a lot of deals for peering and colocation. It’s hard to do even if you can afford it — some of the networks will not even talk to you unless you’re the size of Cloudflare.

                                              Nothing stops you from setting up your own DoH for yourself, but handling traffic for all Firefox users is not that easy, especially if you want it to be competitive with the speed of DNS of their local ISP.

                                              1. 4

                                                Of course CloudFlare has enough redundancy to ensure it is “never down, ever”. But human mistakes happen, precisely at the BGP level, and precisely because they manage it at global scale.

                                                DNS is a distributed database system that is the source of security (everything runs over TLS when it needs security as a public service, with certs from the usual CAs). Using a “single, distributed vendor” breaks the last component of the internet that was still distributed.

                                                The race toward centralization never ends, and we still have plenty left to go: we can still boot our OS without connecting to an OAuth provider.

                                        1. 16

                                          I found the “Enable DNS over HTTPS” setting in Firefox Developer Edition 69.0b16 (64-bit). It’s not (yet) checked automatically, but if I do check it, I have the option select “Cloudflare (default)” or specify a custom provider.

                                          1. 4

                                            It would be nice if you could specify a few to use in round-robin fashion.

                                            1. 3

                                              That’s called loadbalancing and you can usually solve that either via DNS or by any other HTTPS loadbalancing method.

                                              1. 7

                                                via DNS, you say… :-)

                                            2. 2

                                              I wrote my own DNS-over-HTTP/2 implementation for use at home (RFC 8484 isn’t that hard to implement) when it became apparent that Mozilla was shoving this down our throats. The recent version of Firefox wasn’t using it; this possibly explains why.
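
                                              For the curious, the client side of RFC 8484 really is tiny; a rough Python sketch (my own, using dnspython and requests; the Cloudflare URL is just a well-known public endpoint, substitute whatever resolver you run):

                                                import dns.message
                                                import requests

                                                # Build an ordinary DNS query in wire format and POST it to a DoH
                                                # endpoint (RFC 8484: application/dns-message over HTTPS).
                                                query = dns.message.make_query("example.com", "AAAA")
                                                resp = requests.post(
                                                    "https://cloudflare-dns.com/dns-query",  # or your own resolver's URL
                                                    data=query.to_wire(),
                                                    headers={"Content-Type": "application/dns-message",
                                                             "Accept": "application/dns-message"},
                                                    timeout=5,
                                                )
                                                resp.raise_for_status()
                                                for rrset in dns.message.from_wire(resp.content).answer:
                                                    print(rrset)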

                                            1. 4

                                              Be right back, making shouldiemail.com, which will just return “no” for everything except plaintext.

                                              On a serious note, I actually find myself in the position of making a new protocol that has some similarity with email in this regard, including all the problems of rich text. I’m planning on making a new HTML-like language that’s almost purely semantic, and encouraging useragents to render it however they like, but I fear it could devolve straight into HTML5 garbage eventually.

                                              1. 5

                                                I’ve been participating on the Gemini mailing list (it’s a protocol somewhat between gopher and HTTP with mandatory TLS) and I swear, over half the messages on the mailing list have been about formatting text.

                                                1. 2

                                                  There are basically three things I want to be able to do that plaintext does not support:

                                                  1. Preformatted sections, e.g. for code, including inline snippets (simply switching the font won’t handle the latter)
                                                  2. Image embedding, including inline images such as emoji (which is how they ought to be handled, IMO)
                                                  3. Cut tags, for spoilers and content warnings and digressions and whatnot (and in HTML, this even requires javascript)

                                                  Maybe tabular data too? But those are the big ones.

                                                  1. 3

                                                    Cut tags, for spoilers and content warnings and digressions and whatnot (and in HTML, this even requires javascript)

                                                    If you can target Safari 6, Firefox 49, and Chrome 12, and you’re not worried about IE/Edge, the <details> element will give you this without any JavaScript.

                                                    1. 2

                                                      Ah, nice!

                                                    2. 3
                                                      For 1, I use an extra indentation.
                                                      For 2, I use references [1].
                                                      For 3, I'd use a custom markup.
                                                      
                                                      Note:
                                                      | This is an example of custom markup that does
                                                      | not clash with the '>' of quoting someone and
                                                      | looks like what it is: a note.
                                                      
                                                      But it looks like what you want is sending a
                                                      formatted document.
                                                      
                                                      [PDF]
                                                        lacks semantics and extracting data out of them
                                                        is a large pain.
                                                      
                                                      [HTML]
                                                        is a never-ending growing tower that makes
                                                        communication every day more complex (I don't
                                                        imagine HTML-based SMS!).
                                                      
                                                      [Markdown]
                                                        is a sweet spot between HTML and plain text, but
                                                        will not get formatted by any mail client I know
                                                        (without extension / forking the code and doing it
                                                        yourself).
                                                      
                                                      [Plain text]
                                                        can present everything with some extra creativity
                                                        and/or manual typesetting, or text typesetters can
                                                        help making it easier.
                                                      
                                                      [Other]
                                                        there might be many more formats that works just
                                                        fine, but might really not be supported by mail
                                                        clients in use.
                                                      
                                                      _____
                                                      
                                                      [1]: Like this, or maybe [attachment:1]
                                                      
                                                      1. 2

                                                        Extra indentation assumes everything is monospace and hard-wrapped, which won’t be the case. :-/

                                                        I just remembered something for cut tags, though: On Usenet, people would use rot13 to that effect! It doesn’t help with collapsing particularly long pieces, but that’s something the client can help with anyhow.
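
                                                        (Python even ships a codec for it, if you ever want to rot13 something yourself; a tiny sketch:)

                                                          import codecs

                                                          spoiler = codecs.encode("the butler did it", "rot13")
                                                          print(spoiler)                          # gur ohgyre qvq vg
                                                          print(codecs.encode(spoiler, "rot13"))  # rot13 again decodes it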

                                                        1. 1

                                                          I did not know about rot13. Fun ad-hoc implementation of spoilers!

                                                          About hard wrapping, I remember sending a plain-text calendar to my boss to tell him when I would be available (easier that way than a list of ~20 date ranges), and he received it on an iPad with a variable-width font and auto-wrapping.

                                                          Unreadable!

                                                          I thought I’d never be hired after that…

                                                        2. 1

                                                          This was an example of what plain text email can look like, and it looks not so terrible to me.

                                                          Of course, I am biased as I prefer plain email text over HTML.

                                                  1. 5

                                                    No substantive comment, but what a terrible name for a vulnerability.

                                                    1. 7

                                                      Still waiting for someone to explain what “security” this provides. They can still see the IPs you connect to. Just look for the next SYN packet after a response comes back from a known DoH endpoint…

                                                      The one thing this standard does is create a backdoor to make it harder for you to filter content on your network (as required by law in some situations) and makes it harder for your security team to detect bots/malware/intrusions by triggering on lookups to known malware C&C servers. TLS 1.3 plus this means it’s extremely difficult especially for critical infrastructure (e.g., power generation companies) to filter egress traffic effectively.

                                                      If you want to stay out of prison for dissenting, you need a VPN*. If you want privacy, use a VPN*. This doesn’t solve either; it only makes it possible to avoid naughty DNS servers that modify your responses. But we already had solutions for that.

                                                      * and make sure the VPN is trustworthy or it’s an endpoint you control.

                                                      1. 7

                                                        No need to put scare-quotes on security. It hides DNS traffic. Along with eSNI it hides the domains you’re visiting. And if the domain uses a popular CDN, this makes the traffic very hard to spy on, which is a measurable improvement in privacy.

                                                        you need a VPN

                                                        Oh no, aren’t VPNs evil, because, as you said yourself, they make “it harder for you to filter content on your network (as required by law in some situations)”?

                                                        The false-sense-of-security traffic inspection middleboxes that were always easy to bypass with a VPN or even a SOCKS proxy were needlessly weakening TLS for decades. Fortunately, they’re dead now.

                                                        1. 1

                                                          VPNs are much easier to block. You can do it at the protocol level for most types (you’re whitelisting outbound ports and protocols, right?), and then you have lists of the public VPN providers to block as well.

                                                          If you’re only allowing outbound TCP 443 and a few others someone could do TCP OpenVPN over it, but performance is terrible and it’s unreliable so most people don’t try.

                                                          Regardless, there are DPI devices which can fingerprint OpenVPN traffic and tell it apart from HTTPS traffic because it behaves differently (different send/receive patterns), and then you inject RST packets to break the session.

                                                        2. 4

                                                          Seeing the IPs that you connect to isn’t always useful, e.g. an attacker wouldn’t realistically gain anything if a website you connect to is served through Cloudflare, which serves enough different websites that it provides little information to the attacker.

                                                          1. 4

                                                            You can easily connect to the IP and grab the list of domains on the SAN certificate that CloudFlare is using on that IP address to figure out where they’re connecting. There’s only like 25 per certificate. It’s not hard to figure out if you are targeting someone.

                                                            e.g., it would not be difficult to map 104.18.43.206 to the CloudFlare endpoint of sni229201.cloudflaressl.com and once you have that IP to CloudFlare node mapping sorted out you can craft a valid request …

                                                            Subject Alternative Names: sni229201.cloudflaressl.com, *.carryingcoder.com, *.carscoloringpages101.com, *.caudleandballatopc.com, *.coloringpages101.com, *.cybre.space, *.emilypenley.com, *.indya101.com, *.nelight.co, *.scriptthe.net, *.shipmanbildelar.se, *.teensporn.name, *.thereaping.us, *.totallytemberton.net, *.voewoda.ru, *.whatisorgone.com, carryingcoder.com, carscoloringpages101.com, caudleandballatopc.com, coloringpages101.com, cybre.space, emilypenley.com, indya101.com, nelight.co, scriptthe.net, shipmanbildelar.se, teensporn.name, thereaping.us, totallytemberton.net, voewoda.ru, whatisorgone.com
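
                                                            Concretely, that active probe is only a few lines of Python (a rough sketch of my own; the IP is just the example above, and whether Cloudflare still presents a shared multi-SAN certificate when you connect without SNI is not guaranteed):

                                                              import socket
                                                              import ssl

                                                              def san_names(ip, port=443, server_name=None):
                                                                  """Connect to `ip`, finish a TLS handshake, and return the DNS
                                                                  names listed in the served certificate's subjectAltName."""
                                                                  ctx = ssl.create_default_context()
                                                                  ctx.check_hostname = False  # we're deliberately connecting by IP
                                                                  with socket.create_connection((ip, port), timeout=5) as sock:
                                                                      with ctx.wrap_socket(sock, server_hostname=server_name) as tls:
                                                                          cert = tls.getpeercert()  # parsed dict; chain still verified
                                                                  return [v for kind, v in cert.get("subjectAltName", ()) if kind == "DNS"]

                                                              print(san_names("104.18.43.206"))  # the example IP from above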
                                                            
                                                            1. 2

                                                              This list is encrypted in TLS 1.3, so you can’t easily grab it anymore (Firefox and Cloudflare also support eSNI, which plugs another hole).

                                                              1. 1

                                                                You misunderstand. I would create a database mapping of all CloudFlare nodes in existence: sniXXXXXX.cloudflaressl.com <—> IP addresses.

                                                                When I see traffic to one of these IPs, I simply make a new TLS handshake to sniXXXXXX.cloudflaressl.com, grab the certificate, read all of the domain names in the certificate. I don’t need a plaintext SNI request to see where they’re going; I can just infer it by asking the same server myself.

                                                                1. 2

                                                                  You’ll only learn that all Cloudflare customers share a handful of IP addresses, and there are millions of sites per IP.

                                                                  The certificate bundles aren’t tied to an IP, and AFAIK even the bundles aren’t constant.

                                                                  1. 1

                                                                    The server publishes a public key on a well-known DNS record, which can be fetched by the client before connecting (as it already does for A, AAAA and other records). The client then replaces the SNI extension in the ClientHello with an “encrypted SNI” extension, which is none other than the original SNI extension, but encrypted using a symmetric encryption key derived using the server’s public key, as described below. The server, which owns the private key and can derive the symmetric encryption key as well, can then decrypt the extension and therefore terminate the connection (or forward it to a backend server). Since only the client, and the server it’s connecting to, can derive the encryption key, the encrypted SNI cannot be decrypted and accessed by third parties.

                                                                    That’s fine, then someone will just excise the encrypted SNI part to use it in a crafted packet that’s almost like a replay attack. That will still get you the list of 25ish domains they could have accessed.

                                                                    Hell, it looks like you could eventually build rainbow tables out of your captured SNI packets once you have sorted through the available metadata to see where the user went (assuming CF doesn’t rotate these keys regularly). Just analyze all the sites on that cert, see all the 3rd-party domains you need to load, and you can figure it out.

                                                                    This is a small hurdle for a state actor

                                                                    edit: I’m pretty sure you can just do a replay of the SYN to CloudFlare and not worry about trying to rip out the SNI part to get the correct certificate (TCP Fast Open)

                                                                    edit2:

                                                                    7.5.1.  Mitigate against replay attacks
                                                                    
                                                                       Since the SNI encryption key is derived from a (EC)DH operation
                                                                       between the client's ephemeral and server's semi-static ESNI key, the
                                                                       ESNI encryption is bound to the Client Hello.  It is not possible for
                                                                       an attacker to "cut and paste" the ESNI value in a different Client
                                                                       Hello, with a different ephemeral key share, as the terminating
                                                                       server will fail to decrypt and verify the ESNI value.
                                                                    

                                                                    Yeah you can’t replay the ESNI value, but if you replay the entire Client Hello I think it should work. The server won’t know the client’s “ephemeral” ESNI key was re-used.

                                                                    https://datatracker.ietf.org/doc/draft-ietf-tls-esni/?include_text=1

                                                                    1. 4

                                                                      The ClientHello ideally only contains the client’s public key material, so you can’t decrypt the ESNI even if you replay the ClientHello, unless you use a symmetric DH operation (which is rare and not included in TLS 1.3) or break ECDH/EdDH/ECDHE.
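
                                                                      A toy illustration, using the cryptography library’s X25519 as a stand-in for whatever group the real handshake negotiates: deriving the shared secret requires one of the private keys, and replaying public key material gives you neither.

                                                                        from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

                                                                        client_priv = X25519PrivateKey.generate()  # never leaves the client
                                                                        server_priv = X25519PrivateKey.generate()  # never leaves the server

                                                                        # Each side combines its own private key with the other's public key.
                                                                        assert client_priv.exchange(server_priv.public_key()) == \
                                                                               server_priv.exchange(client_priv.public_key())

                                                                        # An eavesdropper replaying the ClientHello only has the two *public*
                                                                        # keys; without a private key the shared secret cannot be recomputed.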

                                                                      1. 2

                                                                        You are correct. I was going to post this after some coffee this morning. The response is encrypted with the client’s ephemeral ECDHE key.

                                                                        So this breaks this type of inspection.

                                                                        However, if you’re connecting to an endpoint that’s not on a CDN and is unique, the observer can still figure out where you’re going. Is the solution we’re going to be promoting over the next few years to increase reliance on these CDN providers? I really don’t like what CloudFlare has become, for many reasons, including the well-known fact that nothing is free. They might have started with intentions of making a better web, but wait until their IPO. Once they go public, all bets are off. All your data will be harvested and monetized. Privacy will effectively be gone.

                                                                        In America it’s illegal to make ethical choices if they don’t maximize shareholder value. (eBay v. Newmark, 2010)

                                                            2. 3

                                                              yes, and that protects exactly.. no one who needs it.

                                                              if you live somewhere where you need the security to hide your DNS requests, cloudflare will be the first thing to get blocked. the only really secure thing to do is onion routing of the whole traffic. centralizing the internet makes it more brittle.

                                                              additionally: ease of use is no argument if it means trading-off security. these tradeoffs put people in danger.

                                                              1. 3

                                                                As someone who barely knows his TCPs from his UDPs, I had to read up on DoH, and I must say that a technology must be doing something right if it elicits both your reaction and the following from the Wikipedia article:

                                                                The Internet Watch Foundation and the Internet Service Providers Association (ISPA)—a trade association representing UK ISPs, criticised Google and Mozilla for supporting DoH, as they believe that it will undermine web blocking programs in the country, including ISP default filtering of adult content, and mandatory court-ordered filtering of copyright violations.

                                                                1. 3

                                                                  i think DoH is the wrong solution for this problem, stuffing name resolution into an unrelated protocol. it may be true that it has the side-effect of removing the ISP-DNS-filters, but those can already be circumvented by using another name server.

                                                                  a better solution would be to have a better UI to change the nameservers, possibly in connection with DNS over TLS, which isn’t perfect, but at least it isn’t a mixture of protocols which DoH is.

                                                                  it could be an argument that the ISP could block port 53, and DoH would fix that. then we have another problem, namely that the internet connection isn’t worth its name. the problem with these “solutions” is that they will become the norm, and then the norm will be to have a blocked port 53. it’s a bit like the broken window theory, only with piling complexity and bad solutions.

                                                                  maybe that’s my problem with it: DoH feels like a weird kludge like IP-over-ICMP or IP-over-DNS to use a paid wifi without paying.

                                                                  1. 2

                                                                    maybe that’s my problem with it: DoH feels like a weird kludge like IP-over-ICMP or IP-over-DNS to use a paid wifi without paying.

                                                                    I agree with you that it feels like a kludge, it feels icky to me too.

                                                                    But it’s something that could lead to a better internet - at the moment DNS traffic is both unencrypted and, more importantly, unauthenticated. If a solution can be found that improves this, even if it’s a horrible hack, I think it’s a net win.

                                                                    Internet networking, like politics, is the art of the possible. We can all dream of a perfect world not beholden to vast corporate interests at every level of the protocol stack, but in the meantime the best we can hope for is to leverage some vast corporate interests against others.

                                                                    1. 2

                                                                      But it’s something that could lead to a better internet - at the moment DNS traffic is both unencrypted and, more importantly, unauthenticated. If a solution can be found that improves this, even if it’s a horrible hack, I think it’s a net win.

                                                                      It may be a short term win, but in the end we are stuck forever with another bad protocol because nobody took the time and effort to build a better one, or just had an agenda.

                                                                      Internet networking, like politics, is the art of the possible. We can all dream of a perfect world not beholden to vast corporate interests at every level of the protocol stack, but in the meantime the best we can hope for is to leverage some vast corporate interests against others.

                                                                      DoH is just another way of centralizing the net. sure you can set another resolver in the settings, but for how long? you’d have to do that on every device. or use the syncing functionality which is.. centralized. and even, who does that?

                                                                      i don’t think that “big players” in politics or in tech, do things out of altruistic reasoning, but, in the best case, good old dollar. that paired with most of the things being awful hacks (again in both, politics and tech) paints a bright future.

                                                                      1. 2

                                                                        I mean, the reality we live in now, where a company like Cloudflare has a de-facto veto on Internet content, just grew organically. It’s an inevitable consequence of technical progress: as stuff (like hosting, and DDoS protection) gets commoditized, economies of scale mean large companies are the only ones who have a hope of making a profit.

                                                                        To their credit, Cloudflare seem aware and uncomfortable about their role in all this, but that’s scant consolation as they’re under the same profitability requirements as the rest of the free world. They can be sold, or move to “evil” to save their profits.

                                                                        1. 3

                                                                          Yep - even prior to DoH, Cloudflare have BGP announce privileges and can issue certificates which are trusted by browsers, two powers which should never have been combined in the same entity (being able to funnel a site’s traffic to your servers and also generate valid certs for those requests).

                                                                          1. 2

                                                                            I mean, the reality we live in now, where a company like Cloudflare has a de-facto veto on Internet content, just grew organically.

                                                                            … and with their resolver being the default one they even have control over the rest, amazing!

                                                                            It’s an inevitable consequence of technical progress: as stuff (like hosting, and DDoS protection) gets commoditized, economies of scale mean large companies are the only ones who have a hope of making a profit.

                                                                            the need for something like DDoS protection is more a consequence of full-throttle capitalism ;)

                                                                            1. 1

                                                                              with their resolver being the default one

                                                                              For the fraction of internet users running Firefox, sure. Google will handle the rest. No doubt MSFT will hop on board too.

                                                                              the need for something like DDoS protection is more a consequence of full-throttle capitalism

                                                                              Or technical debt inherited from a more trusting vision of the internet…

                                                                              (edit: addressed Cloudflare’s role as default DoH provider for Firefox)

                                                                    2. 2

                                                                      UK ISPs have to block child porn or the CEO will be held accountable and go to prison. They do DNS filtering, because IP filtering is impossible. Now they can’t even do that.

                                                                      1. 5

                                                                        I’m aware of the legal requirements of UK ISPs (although why they feel they need to celebrate this requirement by awarding (then withdrawing) the “Internet Villain of the Year” to Mozilla is beyond me).

                                                                        I guess the “responsibility” for filtering/blocking will move up to Cloudflare.

                                                                        1. 1

                                                                          We’ve had a lengthy political discussion in Germany about this topic (where “filtering” was for a long time the preferred political solution); the policy now is to ask the respective hoster to delete the material. I have no good English source for this, so here is the translated German Wikipedia article (original).

                                                                          1. 3

                                                                            You can push the ISP to DNS block it (though it’s harder and usually leads to years-long court cases as in Vodafone’s case).

                                                                            Telekom also loves to push their own search engine with advertisements for NXDOMAIN responses.

                                                                  2. 3

                                                                    Still waiting for someone to explain what “security” this provides. They can still see the IPs you connect to. Just look for the next SYN packet after a response comes back from a known DoH endpoint…

                                                                    It does one useful thing: It prevents them from MITMing these packets and changing them.

                                                                    I’d like encrypted DNS, but I’m very strongly against Firefox selecting my DNS resolver for me for reasons that have already been stated in threads here. I also strongly prefer keeping the web stack out of my relatively simple client-side DNS resolver. Diverse ecosystems are important, and the only way to maintain them is to keep software simple enough that it is cheap to implement.
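
                                                                    If you want encrypted DNS without the browser doing it, the OS stub resolver can speak DNS-over-TLS instead. A rough sketch assuming systemd-resolved (DNSOverTLS= is a real option on recent versions; the chosen upstream is just an example):

                                                                        sudo tee /etc/systemd/resolved.conf.d/dot.conf >/dev/null <<'EOF'
                                                                        [Resolve]
                                                                        # strict DoT; the #hostname part is used for certificate validation
                                                                        DNS=9.9.9.9#dns.quad9.net
                                                                        DNSOverTLS=yes
                                                                        EOF
                                                                        sudo systemctl restart systemd-resolved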

                                                                    1. 1

                                                                      It does one useful thing: It prevents them from MITMing these packets and changing them.

                                                                      Sure, but that’s rare. It would require a targeted attack or a naughty ISP to be altering results.

                                                                      What it most certainly does is prevent me from forcing clients to use my on-premises DNS resolver. Now you have zero control over the client devices on your network when it comes to DNS, and additionally we’re about to lose HTTPS inspection in the near future. This is the wrong approach to solving the problem. Admins need controls and visibility to secure their networks.

                                                                      Mark my words, as soon as this is supported by a few different language libraries you’ll see malware and all sorts of evil things using it to hide exfiltration and C&C because it will be hidden in the noise of normal user traffic.

                                                                      It will be almost impossible now to stop users or bad guys from accessing Dropbox, for example. “Secure the endpoints” is not the answer. You can secure them, deny BYOD, etc, but you have to assume they’re compromised and/or rooted. Only the network is your source of truth about what’s really happening and now we’re losing that.
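
                                                                      For plain DNS the usual controls are a couple of firewall rules; a rough sketch below (the resolver address and the DoH endpoint list are placeholders, which is exactly the whack-a-mole problem). Mozilla has also said resolvers can answer NXDOMAIN for a canary domain to keep Firefox’s DoH default off, though that obviously doesn’t constrain malware:

                                                                          # Force all plain port-53 traffic through the on-prem resolver (10.0.0.53 is a placeholder).
                                                                          iptables -t nat -A PREROUTING -p udp --dport 53 ! -d 10.0.0.53 -j DNAT --to-destination 10.0.0.53
                                                                          iptables -t nat -A PREROUTING -p tcp --dport 53 ! -d 10.0.0.53 -j DNAT --to-destination 10.0.0.53
                                                                          # Blocking DoH itself means maintaining an IP list of public resolvers (examples only).
                                                                          for ip in 1.1.1.1 1.0.0.1 8.8.8.8 8.8.4.4 9.9.9.9; do
                                                                              iptables -A FORWARD -d "$ip" -p tcp --dport 443 -j REJECT
                                                                          done
                                                                          # The canary domain is use-application-dns.net; NXDOMAIN from the local resolver
                                                                          # signals Firefox not to enable DoH by default.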

                                                                      1. 4

                                                                        I guess I don’t have much sympathy for the argument that network administrators will lose insight into the traffic on their networks. That seems like a bonus to me, despite the frustration for blue teams.

                                                                        1. 3

                                                                          Same. I understand that in some places there are legal auditing requirements, but practically everywhere else it’s just reflexive hostility towards workers that makes us use networks that are pervasively censored and surveilled.

                                                                        2. 4

                                                                          Sure, but that’s rare. It would require a targeted attack or a naughty ISP to be altering results.

                                                                          Except that it’s not rare. You will find this in many hotel wifis. It hits you particularly hard if you have a DNSSEC-validating resolver, which doesn’t take kindly to these manipulations. Having a trusted recursor is generally important if you want to be sure that you talk to a resolver you can actually trust, which is in turn important if you want to delegate validation to it.

                                                                          What it most certainly does is prevent me from forcing clients to use my on-premises DNS resolver.

                                                                          Just as HTTPS prevents you from forcing your clients to talk to an on-premises cache or whatever. The solution is the same in both cases: you need to intercept TLS, if this is a hard requirement for you. DoH and DoT aren’t making anything more complicated; they’re just bringing DNS on par with the protection level we have had for other protocols for a while.

                                                                          1. 3

                                                                            You hit the nail on the head here. Far from being rare, in the US it’s ubiquitous, whether it’s your hotel, your employer, or your residential ISP.

                                                                          2. 3

                                                                            Only the network is your source of truth about what’s really happening and now we’re losing that.

                                                                            Good. Corporate networks must die. “Secure the endpoints” is THE ONLY answer.

                                                                            https://beyondcorp.com

                                                                            If Google can pull it off at Google scale, so can you. Small teams with lots of remote people have always been Just Using The Internet with authentication. It’s the “Enterprise”™ sector that’s been suckered into buying “Security Products”™ (more like “Spying Products”) to keep trying to use this outdated model.

                                                                            1. -1

                                                                              You clearly know nothing about running critical infrastructure networks, so please refrain from making these types of comments.

                                                                            2. 2

                                                                              What it most certainly does is prevent me from forcing clients to use my on-premises DNS resolver.

                                                                              Could you please elaborate? Is this about a “non-canonical” local resolver or do you think it also has repercussions for locally hosted zones? For example *.internal.example.org locally versus *.example.org on the official internet. Or did I misunderstand you and you just meant a local forwarding resolver?

                                                                              I honestly didn’t read up enough on DoH yet, just wondering.

                                                                              1. 1

                                                                                Mark my words, as soon as this is supported by a few different language libraries you’ll see malware and all sorts of evil things using it to hide exfiltration and C&C because it will be hidden in the noise of normal user traffic.

                                                                                Set up your own DoH server and you can once again inspect it. Ideally you use a capable, modern TLS-intercepting box to inspect all traffic going in and out (as well as caching it).

                                                                                1. 1

                                                                                  Mark my words, as soon as this is supported by a few different language libraries you’ll see malware and all sorts of evil things using it to hide exfiltration and C&C because it will be hidden in the noise of normal user traffic.

                                                                                  How? The IP or the URL of the DoH server you are talking to will stand out like a signal flare… I think that dumping the file to a cloud-service is way more efficient, easier and effective.

                                                                                  1. 1

                                                                                    The US Gov often gives security teams at Critical Infrastructure networks early reports detailing all sorts of potential attacks, including early heads-up on malware that may or may not be targeted. This includes a list of C&C domains that may be accessed. If the software can hide its DNS requests by making them look like normal HTTPS traffic to CloudFlare, that makes it even harder to identify the malware’s existence on your network.

                                                                                    If you want the Russians or Chinese to hack our grid, this is a great tool for them along with TLS 1.3. The power generation utility that I worked at did HTTPS interception and logging of ALL HTTPS and DNS requests from every device everywhere for analysis (and there was a program coming online to stream it to the government for early detection) and now this is becoming impossible.

                                                                                    1. 1

                                                                                      This pertains only to Firefox… so why would an installation of Firefox be on one of those networks?

                                                                                      Furthermore: you know the IP of Cloudflare’s DoH server. You could just block that and be done with it, right? If the malware uses some other server, that will show up as well.

                                                                                      1. 2

                                                                                        Firefox won’t be on that network, but HTTPS certainly will be. Likely not on (hopefully still airgapped) SCADA, but on other sensitive networks that give some level of access into SCADA through various means.

                                                                                        The point is that as DoH thrives and becomes commonplace and someone like CloudFlare runs this service, it’s easy to hide DNS requests mixed in with normal looking HTTPS traffic. The client can be a python script with DoH capability.
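
                                                                                        To illustrate how little it takes, here’s a lookup over Cloudflare’s JSON flavour of DoH with nothing but curl (the endpoint is just the easiest one to demo; an RFC 8484 POST to any cooperating HTTPS server works the same way):

                                                                                            # One DNS query disguised as an ordinary HTTPS request.
                                                                                            curl -s -H 'accept: application/dns-json' \
                                                                                                'https://cloudflare-dns.com/dns-query?name=example.com&type=TXT'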

                                                                                        As for CloudFlare’s DoH service – it appears to be running on separate IPs at the moment, but there’s no reason why they couldn’t put this on their normal endpoints. DoH is HTTPS, so why not share it with their normal CDN endpoints? This would not be difficult to do in Nginx. In fact this would be far simpler than running HTTPS and SSH on the same port, which is also possible.

                                                                                        Basically any normal-looking HTTPS endpoint could become a DoH provider. Hack some inconspicuous server, reconfigure their webserver to accept DoH too, and now you’ve got the backdoor you need for your malware.

                                                                                        CloudFlare and Firefox are not my concern; DoH as a whole is.

                                                                                        1. 1

                                                                                          As for CloudFlare’s DoH service – it appears to be running on separate IPs at the moment, but there’s no reason why they couldn’t put this on their normal endpoints. DoH is HTTPS, so why not share it with their normal CDN endpoints? This would not be difficult to do in Nginx. In fact this would be far simpler than running HTTPS and SSH on the same port, which is also possible.

                                                                                          Fair point…

                                                                                          But now I’m wondering why you would have access to Cloudflare on such a network… or why there wouldn’t be a root certificate on all the machines (and Firefoxes) in the network so that the organization can MITM all outgoing traffic?

                                                                                          1. 1

                                                                                            There are going to be some networks running servers that need outbound HTTPS for various reasons, but a lot of that can be locked down. But what about the network that the sysadmins are on? They need full outbound HTTPS, and a collaborating piece of malware on one of their machines gives them access to the internet and to other internal sensitive networks. These types of attacks are always complex and targeted. Think of the incredible work we did with Stuxnet.

                                                                                            As for MITMing the traffic… look at this thread where it’s discussed further: https://lobste.rs/s/pechdy/turn_off_doh_firefox_now#c_inbnse

                                                                                            1. 1

                                                                                              There are going to be some networks running servers that need outbound HTTPS for various reasons, but a lot of that can be locked down.

                                                                                              So why Cloudflare? I doubt you’d need any high-volume sites that use Cloudflare for those setups.

                                                                                              But what about the network that the sysadmins are on? They need full outbound HTTPS, and a collaborating piece of malware on one of their machines gives them access to the internet and to other internal sensitive networks. These types of attacks are always complex and targeted. Think of the incredible work we did with Stuxnet.

                                                                                              If the networks really are that sensitive, just separate them physically, give the sysadmins two machines, and never transport data in digital form from one to the other…

                                                                                              If you are not willing to take these kinds of steps, your internal networks simply aren’t that critical.

                                                                                              1. 2

                                                                                                That is not how the networks at our power utilities work. And it’s not how the employees operate either.

                                                                                                1. Many power companies refuse to implement new technologies or network topologies unless another utility does it first. Which sadly means that in certain regions like MISO you can expect most of the utilities to be using the same firewalls, etc etc. Very dumb. Can’t wait for Russia to abuse this and take down half the country.

                                                                                                2. The people that work there aren’t the brightest. “Why are user accounts being managed with a Perl script that overwrites /etc/passwd, /etc/shadow, and /etc/group?” Well, because that’s the way they’ve always done it, so if your team needs to install a webserver you also need to tell them to add the www user to their database so the account doesn’t get removed. “Why are the admins ssh-ing as root everywhere with a DSA key that has no passphrase protection?” Because the admins (with 20 years of experience) refuse to learn ssh-agent and use basic security practices. I had meetings with developers who needed their application to be accessible across security domains, and the developer couldn’t tell me what TCP port their application used. “What’s a port?” These are people making six figures and doing about 30 minutes of work a day. It’s crazy.

                                                                                                3. These are highly regulated companies with slim margins. You want these kinds of drastic changes to their infrastructure? You better start convincing people to nationalize the grid because they don’t have the money to do it. Remember, it takes about 3 years to get a utility rate change approved. It’s a long process of auditing and paperwork and more auditing and paperwork to prove to the government that they really do need to increase utility rates to be able to afford X Y and Z in the future. They’re slow moving. Very slow.

                                                                                                4. Do you think customers will want their power bills to go up just so they can hire competent IT staff? Not a chance. (What we really need to do is stop subsidizing bulk power customers and making normal residential customers pay more than their fair share, but that’s a different discussion)

                                                                                                tl;dr we can all wish, hope, and pray that companies around the world will do the right thing, but it’s not going to happen anytime soon, especially in Critical Infrastructure environments, because they’re so entrenched in their old ways and don’t have the budgets to do it the right way regardless.

                                                                                                1.  
                                                                                                  1. In utility companies, the production networks running the power plants should simply not come into contact with the internet. There should always be a human in between the network and the internet. If this is not the case, they deserve what’s coming.

                                                                                                  2. Believe it or not, I can actually understand why they dump straight into /etc/group, /etc/passwd and /etc/shadow. There is no chance of any machine having outdated users by accident or through partial configuration this way, and if your network has only a few hundred users, who are all more or less trained to deal with complex technological systems on a basic level, why not? It’s not like they are running a regular common office workplace.

                                                                                                  However, what you are telling me about SSH and TCP is quite shocking. That is just plain incompetence.

                                                                                                  3. I’m not living in the US. In fact, the last time I was there I was at an age from which I can barely remember anything other than that the twin towers still stood. I am often told that it’s a different country now, so I can’t say anything useful about this.

                                                                                                  4. Depends… If outages stay below about two short power outages per year on average, then no, I wouldn’t.

                                                                                                  If it escalates to one outage per month and 25% of them can be blamed on incompetent IT staff? Then we’ve reached the point where I’m going to install my own diesel generators, as those will quickly become profitable.

                                                                                      2. 1

                                                                                        I don’t quite understand. Regardless of the TLS version, if you want to inspect HTTPS you need to intercept and decrypt outgoing HTTPS traffic via a middlebox. This applies to regular HTTPS just as it applies to DoH. If you are required to secure your network by inspecting encrypted traffic, you will continue to do so just like you’ve always done. In this sense, DoH is even less intrusive than, say, DoT, because your standard HTTPS intercept proxy can be adapted to deal with it.

                                                                                        1. 1

                                                                                          Wasn’t the goal of TLS 1.3 to make interception impossible? I am certain that was one of the major goals, but I didn’t follow the RFC’s development all the way through.

                                                                                          How would interception work? With ESNI in TLS 1.3, the client does a DNS lookup to retrieve the key to encrypt the ESNI request with. The middlebox couldn’t decrypt the ESNI and generate a certificate from the local trusted CA, because it doesn’t know the hostname the client wants to access. So now… a middlebox will also have to be a DNS server so it can capture the lookup for the ESNI key, generate a fake key on demand, and have it ready when the TLS connection comes through and is intercepted?

                                                                                          This is getting quite complex, and there may be additional middlebox-defeat features I’m not aware of.

                                                                                          1. 1

                                                                                            No, the basic handshake can still be intercepted similarly to TLS 1.2, so that’s not a problem with 1.3.

                                                                                            ESNI might be a slightly different issue. But you could just take a hardline stance and drop TLS handshakes which use ESNI and filter the ESNI-records (with a REFUSED error?) in your resolver. If you need to enforce TLS intercept, you will need to enforce interceptability of that traffic and that might mean refusing TLS handshakes which use ESNI. But I haven’t read the RFC drafts yet, so there might be easier/better ways to achieve this. In any case, none of this should be a deal breaker. TLS intercept proxies have always been disruptive (e.g. client certificates cannot be forwarded past an intercept proxy) and this will apply to ESNI just as it has done to past aspects of TLS.

                                                                                            What I feel should be clear is that none of this will suddenly make existing practices impossible. Restrictive environments will continue to be able to be restrictive, just as they have in the past. The major difference will hopefully be that we will be safer by default even in open networks, such as public wifis, where a large number of users are currently exposed to unnecessary risks.

                                                                                            1. 1

                                                                                              ESNI might be a slightly different issue. But you could just take a hardline stance and drop TLS handshakes which use ESNI and filter the ESNI-records (with a REFUSED error?) in your resolver. If you need to enforce TLS intercept, you will need to enforce interceptability of that traffic and that might mean refusing TLS handshakes which use ESNI.

                                                                                              I don’t think this is possible. TLS 1.3 means ESNI is a given. If half the internet uses TLS 1.3-only, you have no choice but to support it. AIUI they’ve gone to great lengths to prevent downgrade attacks which will stop the interception.

                                                                                              I have a contact at BlueCoat and am reaching out to see what the current state is because their speciality is exactly this.

                                                                                              1. 1

                                                                                                TLS 1.3 means ESNI is a given.

                                                                                                Right now, ESNI is not mandatory for TLS 1.3. TLS 1.3 is a complete and published RFC standard. ESNI is only a draft and is certainly not mandated by TLS 1.3. You don’t need to run downgrade attacks to “intercept” TLS 1.3. Intercept proxies simply complete the TLS handshake by returning a certificate for a given domain issued by a custom CA that’s (hopefully) in the client’s trust store. This works just the same for 1.3 as it does for any earlier method.
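
                                                                                              The mechanics are mundane; a rough openssl sketch (names and lifetimes are placeholders, and a real intercept proxy mints the leaf on the fly for whatever hostname the client asked for):

                                                                                                  # Private CA that gets pushed into the clients' trust stores.
                                                                                                  openssl req -x509 -newkey rsa:2048 -nodes -days 825 \
                                                                                                          -subj "/CN=Corp Intercept CA" -keyout ca.key -out ca.crt
                                                                                                  # Leaf certificate for the requested hostname, signed by that CA.
                                                                                                  openssl req -newkey rsa:2048 -nodes -subj "/CN=www.example.com" \
                                                                                                          -keyout leaf.key -out leaf.csr
                                                                                                  openssl x509 -req -in leaf.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
                                                                                                          -days 30 -out leaf.crt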

                                                                                                1. 1

                                                                                                  Do we know what the failure mode is if ESNI is rejected? Everyone wants ESNI for their privacy and browsers will certainly implement it, so I suspect it will be more common than not.

                                                                                                  edit: and thanks, I was still operating under the impression that ESNI was part of the final TLS 1.3 draft. I haven’t taken the time to read through it all and there’s a lot of misinformation out there. I’ve been too busy to dig in deeper, and security is not my day job right now.

                                                                              1. 18

                                                                                Applications don’t manage the network

                                                                                This entire paragraph reeks of “King of the Castle” attitude. 5 years ago I interacted with a sysadmin who had the same arguments for why every network device should go through his local HTTP proxy.

                                                                                Just imagine the pure mess if you get different DNS results in different applications.

                                                                                Well, what does the author imagine will happen? I don’t know. I don’t think it matters if two apps resolve to different IPs. What would happen if two devices resolved some hostnames to different IPs? Probably the same thing: Nobody cares.

                                                                                By the way, Firefox caches DNS results independently of the OS, and I believe Chrome does too. These out-of-sync issues already happen and they are not a big deal.

                                                                                The chaos will be a perfect Trump made Internet.

                                                                                Oh yeah, that’s how you connect Mozilla to Trump. Nicely done. You just had to taint your best argument (US govt can’t be trusted) with that line.

                                                                                I expected nothing based on the title and was still disappointed.

                                                                                1. 9

                                                                                  I don’t think it matters if two apps resolve to different IPs. What would happen if two devices resolved some hostnames to different IPs? Probably the same thing: Nobody cares.

                                                                                  As someone who semi-regularly has to debug connectivity issues people have with various services, it’s rather annoying not to be able to get the same DNS results when manually resolving addresses using the OS settings, especially if one doesn’t know about the fact that Firefox has started doing this.

                                                                                  1. 4

                                                                                    Yes, it has annoyed me too. DoH is not introducing this problem though.

                                                                                    1. 7

                                                                                      It does introduce the problem though. Problems related to application level DNS caching are easily bypassed by just having them restart the browser. But if your browser claims to not be able to resolve a name, or resolves it to something different due to split-horizon DNS, and everything else on the machine is able to resolve the name properly, how would you debug this?

                                                                                      (That said, according to https://blog.mozilla.org/futurereleases/2019/09/06/whats-next-in-making-dns-over-https-the-default/ it should fall back to the OS resolver on failures, so at least names that don’t exist externally should work, though that doesn’t really help when the public addresses resolve differently.)

                                                                                      1. 7

                                                                                        Pertinent example:

                                                                                        (11:42:26) om:~% dig +short @8.8.8.8 archive.is
                                                                                        62.192.168.106
                                                                                        (11:42:35) om:~% dig +short @9.9.9.9 archive.is
                                                                                        51.15.97.128
                                                                                        (11:42:43) om:~% dig +short @1.1.1.1 archive.is
                                                                                        127.0.0.3
                                                                                        

                                                                                        uh oh!

                                                                                        1. 3

                                                                                          Apparently archive.is is refusing to respond to 1.1.1.1: https://twitter.com/archiveis/status/1018691421182791680

                                                                                  2. 3

                                                                                    Oh yeah, that’s how you connect Mozilla to Trump. Nicely done. You just had to taint your best argument (US govt can’t be trusted) with that line.

                                                                                    Exactly! This article felt like nothing more than a scare tactic from an author with unmoving opinions on the matter, resulting in him using points that aren’t even accurate. Authors need to learn to have more perspective, to present their own perspective as just that, and to back up their claims with sources.

                                                                                  1. 45

                                                                                    Most of this is “what can a human do”, with a brief note that software developers can often free up additional time for the important political work.

                                                                                    Here’s a thing that software developers in particular can do: Support older hardware.

                                                                                    • Code for older versions of browsers. Yeah, you won’t get the latest convenient features. That’s OK, people made great websites 15 years ago too. If you only ever code for the most recent version of browsers, you’re forcing people to buy newer hardware that supports those browsers.
                                                                                    • Make your code efficient in disk space, memory, and CPU.
                                                                                    • Support Linux, which can make better use of old laptops than Mac or Windows can.

                                                                                    If people can use older hardware, they’re buying less new electronics. This is not going to make a massive difference in terms of climate change, but it’s part of a larger pattern of putting a stop to growthism.

                                                                                    1. 14

                                                                                      Perhaps somewhat paradoxically, older hardware can be substantially less power-efficient than newer hardware for the same tasks. It’s not always a good trade-off, particularly if you’re focusing on reducing your dependence on fossil-fuel-based power generation.

                                                                                      1. 29

                                                                                        It takes a lot for the power-efficiency benefit to outweigh the cost of manufacturing and transporting new hardware. I suspect nothing in the consumer electronics realm comes close to break-even; datacenters are a different matter.

                                                                                        1. 2

                                                                                          I tend to think of that more as an issue with washing machines and refrigerators. How true is that of laptops?

                                                                                          1. 7

                                                                                            It’s an anecdotal and unfair comparison, but my 9-year-old Dell Vostro laptop consumes 25+ watts at idle, whereas a Dell XPS from 2018 can be tuned to 4+ watts of idle consumption (the same Linux distro, both with a Core i7 from different generations; the differences: HDD vs SSD, and 17”/dual graphics versus 13” Intel graphics).

                                                                                            1. 6

                                                                                              Very. Modern CPUs are much faster and more power-efficient than older ones, and they can clock down much more, just making better use of each clock cycle.

                                                                                          2. 2

                                                                                            I can see where you are coming from and I really agree with the sentiment. Still, it’s probably a lot more impactful to advocate locally for right-to-repair legislation. People could then replace the dead batteries in old phones so they can run your software in the first place.

                                                                                            It’s so tempting to keep to yourself and just work on the small stuff but systemic problems need systemic remedies.

                                                                                            1. 1

                                                                                              True, although that’s something non-developers can do too!

                                                                                          1. 3

                                                                                            I still feel like I don’t understand the Stellar model. The anchors can revoke your money, right? And who are the anchors?

                                                                                            It all seems much more complicated than Bitcoin (although at least it isn’t predicated on burning resources.)

                                                                                            1. 4

                                                                                              I clean my work keyboard a few keys at a time during long conference calls. This method is not particularly effective. It’s also not entirely ineffective.

                                                                                              1. 2

                                                                                                I occasionally do sewing or other repairs during conference calls. I’ve even patched jeans in an in-person meeting. I find I can listen better if I have something to do with my hands.

                                                                                                1. 1

                                                                                                  This is genius, I need to do that :)

                                                                                                1. 8

                                                                                                  Ad company complains about adblockers and wants you to use their adware; more at 11.

                                                                                                  1. 29

                                                                                                    Relevant links: duckduckgo.com, firefox.com, github.com/gorhill/uBlock, eff.org/privacybadger.

                                                                                                    1. 9

                                                                                                      Very apropos, I think, is Privacy Badger’s recent upgrade to detect and block the first-party cookie sharing that Google Analytics does: https://www.eff.org/deeplinks/2019/07/sharpening-our-claws-teaching-privacy-badger-fight-more-third-party-trackers

                                                                                                      1. 5
                                                                                                      1. -7

                                                                                                        It is very sad that this whole story started with a so-called “security researcher” who invented this method, which any script kiddie can use, wrote 75 pages of a bachelor thesis about making his method as efficient as possible, and bragged about his discovery so that every criminal would know about it: https://incolumitas.com/2016/06/08/typosquatting-package-managers/ (Yes, between the publication of his bachelor thesis on March 17th, 2016 and his blog post in June 2016, nobody noticed that he had created a close-to-deadly weapon.) There are so many ways to analyse existing criminal behavior that I have no idea why one would think it necessary to invent new criminal behavior.

                                                                                                        1. 3

                                                                                                          The idea here is to get package repositories (and the larger community) to take the problem seriously by demonstrating how easy it is. I guess it didn’t work in this case, but you can look back at many examples of things that did help, despite also having malicious potential. For instance, Firesheep was a Firefox extension that allowed you to log in to the Facebook account of anyone on the same Wifi as you who was browsing Facebook; it got a huge amount of press, and I would argue it was the final push that made Facebook (and then other websites) finally pick up SSL for their entire site, not just the login page. Upside-down-ternet was a wifi router mod that would screw with the web traffic of any unauthorized users (e.g. turning graphics on websites upside-down), and made individuals understand the danger of using random wifi access points they stumbled across.

                                                                                                          In this case, I guess it didn’t work so well; package repositories have been very slow to address these issues, and almost entirely reactive rather than proactive. 🤷

                                                                                                        1. 1

                                                                                                          Any time your API has load() and safe_load(), you’ve got a dangerous API. It needs to be unsafe_load() and load().
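
                                                                                                          PyYAML is the classic example of this naming problem (a sketch; the exact behaviour of a bare load() depends on the version, and newer releases warn or require an explicit Loader):

                                                                                                              # The friendly-sounding name will happily construct arbitrary Python objects,
                                                                                                              # here running the `id` command via os.system:
                                                                                                              python3 -c 'import yaml; yaml.load("!!python/object/apply:os.system [id]", Loader=yaml.UnsafeLoader)'
                                                                                                              # The safe variant refuses the tag outright (raises ConstructorError):
                                                                                                              python3 -c 'import yaml; yaml.safe_load("!!python/object/apply:os.system [id]")'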