I’ve been using Matrix as a glorified IRC bouncer for over a year. It’s pretty good, but Synapse still occasionally chokes on “forward extremities” and becomes completely unresponsive, so you have to run a SQL query to clean things up and then wait a while for it to become responsive again :(
The worst offenders seem to be IRC-bridged rooms with high join/part turnover, such as #mozilla_#rust:matrix.org, #mozilla_#rust-offtopic:matrix.org, and #haskell:matrix.org.
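For anyone curious what that kind of cleanup starts with, here is a hedged sketch of the diagnostic step, assuming Synapse’s usual Postgres schema (the event_forward_extremities table) and psycopg2; the connection string is illustrative. It only lists the rooms accumulating the most forward extremities, i.e. the ones worth cleaning up:

```python
import psycopg2

# Connect to the Synapse database (DSN is a placeholder for your own setup)
with psycopg2.connect("dbname=synapse user=synapse") as conn:
    with conn.cursor() as cur:
        # Count forward extremities per room; rooms with very high counts are
        # the ones that tend to make Synapse grind to a halt.
        cur.execute("""
            SELECT room_id, count(*) AS extremities
            FROM event_forward_extremities
            GROUP BY room_id
            ORDER BY extremities DESC
            LIMIT 20;
        """)
        for room_id, extremities in cur.fetchall():
            print(extremities, room_id)
```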
Riot-web has been fast enough for me, but I prefer Fractal, because GTK :)
It’s one of the big areas where no real alternative to IRC exists yet.
Riot also starts choking once rooms grow beyond a few thousand members that join and part constantly — while even the simplest IRC clients handle that fine.
It’ll be interesting to see how this develops in the next few years, but for now it looks like the time for Matrix to replace IRC just isn’t quite here yet.
From the client/user point of view, riot is certainly as optimal as it is suboptimal. It is fairly usable and nice, but also incredibly resource-hungry and slow at times. I would like to see more native clients (in particular console clients), but this would certainly increase friction in terms of client support for features and changes.
This also extends to the operational point of view: it’s not just that matrix/synapse is simply slow at times, it’s that the design is by default far more resource-intensive than IRC. An ircd requires basically nothing in terms of resources to serve quite a sizable number of users. synapse, on the other hand, requires quite a lot of CPU power in addition to a metric ton of space in its database (especially if your users join large rooms). Joining the main matrix channel is almost certain to cause hours of full CPU usage and increase the db size by a few hundred MB.
Of course matrix and irc provide different feature sets, but right now I feel that matrix may never be ideal for large group chats simply by design. I can’t quite see how rooms like the matrix main channel will ever be “ok” for a matrix server.
All this being said, matrix works nicely for one-on-one and small group chats, which is what most of my users do.
The actual design of the Matrix spec doesn’t have any issues that I have seen, but the current software seems more like a prototype in production. Hopefully dendrite and some updates to riot can speed everything up, because that’s one of the main issues I see with it now.
Yeah, that’s what I’ve seen so far, too. The spec is great, but the implementation is rather meh. Which means that at least it should be easy to fix later on.
The spec does require a lot more resources than IRC, though, specifically in the form of maintaining logs and allowing searching of them. I wouldn’t be surprised if other implementations/settings come out that auto-kill logs after a month or something (I don’t think that necessarily violates the spec, and it would be pretty handy for GDPR).
Anybody have ideas why IPv6 adoption is consistently higher on the weekends?
It’s almost certainly personal use. This is due to many company networks not having DS-Lite, but also due to people being on mobile data plans on phones and tablets, which are a big driver of IPv6 usage as well.
Exactly. It’s the time when folks have the leisure time to take advantage of Comcrap offering a fully native IPv6 experience to consumers (like I did).
To the person who marked this comment a troll: Was it my use of “Comcrap”, which I’ll admit could be seen as inflammatory? Because I was dead serious. Most of us working stiffs don’t have time to embark on a project like IPv6-ifying our home network except on the weekends, when we can afford to screw it up and be down for an hour or two while we google solutions on our phones :)
Yes.
Thank you for your candor.
I of all people should be more sensitive about tarring employees of large corps with one brush, working for AWS myself and cringing on a regular basis when every Tom, Dick & Harry in the known universe blames me for every bad experience they ever had when the UPS guy spat on their package.
Similarly, you (whoever you are, and assuming you’re a Comcast employee) aren’t responsible for the numerous install and provisioning experiences that left my wife and me ready to climb the walls.
I will sincerely try to be better about this.
For what it’s worth, I really appreciate Comcast’s consumer IPv6 rollout. I was able to flip a switch and convert my entire LAN, and everything worked like a charm.
A simple guess based on my observations in Germany right now: Many people have IPv6 connectivity at home, but not at work. Even many (most?) universities don’t provide IPv6 in their networks, especially on wireless. However, my private ISP does, and so do many others. The situation is similar in Austria and (to my knowledge) in France, so I’d expect it to hold in other places too and probably account for much of this difference.
I can confirm that the situation is similar in France. IPv4 at work, IPv6 at home and on my phone over LTE.
There’s more IPv6 at home than at work.
Why, I hear you ask. I don’t really know. My theory is that home users tend to get v6 when an ISP decides to upgrade everyone’s DSL/fiber middleboxes, while companies often have more hardware. If any of the hops from my desk to the network is v4-only, then so am I. Home users wait for an ISP to enable v6; business users wait for all of n hops to enable v6, where n≥1. Even if n is only slightly greater than 1 on average, that will introduce some random lag compared to home users.
In the US, at least one major cell carrier (T-Mobile) uses IPv6 for their mobile data network, so all handsets connect with IPv6.
Oddly enough, I’ve heard an anecdote involving that selfsame name. As you may know, the IPv4 allocation rules were tightened several times at predefined points in time as the End grew near. One of the 2-3 relevant Deutsche Telekom companies was a bit late with requesting more address space from RIPE a while ago, and was one of the first ISPs to have a big address space application rejected.
All of its competitors had enough address space to grow for a while without any v6 deployment hurry, having gotten allocations in just before the rule change, but not Deutsche Telekom.
Maybe personal ISP plans are more likely to use DS-Lite or similar technologies? They can adapt faster, while company plans still “need” fixed IPv4 addresses for VPN and other services.
My cable provider is dual stack, but by default the IPv4 is carrier-grade NAT and the v6 is native. Modern OSes prefer v6 over v4, so there is more v6 in customer networks than in offices.
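As a quick illustration of that preference, here is a small sketch using only the Python standard library (the hostname is just an example of a dual-stacked site, and the output naturally depends on your own connectivity): getaddrinfo hands back addresses in the order the OS policy table prefers them, so on a working dual-stack connection the IPv6 entries come first.

```python
import socket

# Print the address family ordering the OS resolver returns for a
# dual-stacked name; with working IPv6, AF_INET6 entries normally come
# first (RFC 6724 ordering).
for family, _, _, _, sockaddr in socket.getaddrinfo("www.google.com", 443,
                                                    proto=socket.IPPROTO_TCP):
    print("IPv6" if family == socket.AF_INET6 else "IPv4", sockaddr[0])
```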
I do still agree with Thomas Ptacek: DNSSEC is not necessary.
https://sockpuppet.org/blog/2015/01/15/against-dnssec/
From that article:
Is that true, though?
He would have been in a position to take over the public key, if he were willing to do so visibly. The DNS isn’t like a CA — a CA can issue n certificates for the same domain, but the DNS makes it difficult to give me one set of answers and you quite another, particularly if either of us is suspicious, as a monitoring service might be.
Bit.ly controlled its own private key. Gaddafi’s option was to take over control of the domain and publish other RRs, in full view of everyone. A concealed or targeted attack… I don’t think so.
Read the article; the quote comes from the part about DANE, which extends DNSSEC and is about putting public keys for TLS into TLSA resource records.
Certificate Transparency has pretty much solved this for the CA system, where it is directly visible if unauthorized certificates are being used.
I read it when it was new… I suppose things have changed a bit. The trend towards JSON-over-HTTPS has been very strong and gone very far, so securing only application protocols like HTTP isn’t as much of a problem as it was.
DNSSEC and DANE provide assurance that a given IP address is what I asked for. But if IP addresses aren’t relevant, assurances about them aren’t either…
So what do you think about DNS-over-HTTPS, which AIUI is also motivated by much the same thing, but only secures the path from the endpoint to the caching DNS server?
I once saw advertising for some $%⚠雷𝀲☠⏣☡☢☣☧⍾♏♣⚑⚒⏁ game on my own website while giving a presentation. The venue’s WLAN “enhanced” my site. Both DNS-over-HTTPS and DNSSEC would have prevented that attack, at least if I had used Google’s or Cloudflare’s resolvers instead of the presentation venue’s.
I do like that, although I would prefer that all authoritative DNS servers implemented TLS, so that my own recursor could do secure look-ups instead of only having a few centralized DoH resolvers.
Oh, in that case you’d still have much the same bottleneck: You’d need to do DoH/DoT to the root/tld name servers, of which there aren’t many.
Correct. But I’d like to see that development, which would be far better than DNSSEC.
I feel like many arguments in this article are misleading and/or omit important details.
Except.. it’s not. You can use ECDSA keys just fine for signing. Sure, you can use insecure keys. Just like you can use insecure keys or methods in TLS or pretty much anywhere else. We’ve come to distrust insecure configurations in TLS and we will probably have to move in that direction in DNSSEC. But first we should at least halfway get there.
That seems to depend a lot on your point of view. A client trusting a validating recursor only needs to check a single flag in the DNS response to know if a record was signed correctly. Insecure results are therefore clearly visible, and incorrectly signed results won’t be returned by the resolver. For clients, very little seems to need changing, but this is also the place where the least adoption has happened up until now.
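For illustration, a minimal sketch of that single-flag check, assuming dnspython 2.x is installed; the recursor address is a placeholder for whatever validating resolver you trust. The recursor does the DNSSEC validation and signals the outcome via the AD bit.

```python
import dns.flags
import dns.resolver

res = dns.resolver.Resolver()
res.nameservers = ["192.0.2.53"]      # placeholder: your validating recursor
res.use_edns(0, dns.flags.DO, 1232)   # set the DO bit so DNSSEC is in play

answer = res.resolve("example.com", "A")
# The AD (Authenticated Data) flag is the "single flag" mentioned above.
print("validated by the recursor:", bool(answer.response.flags & dns.flags.AD))
```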
Two or three lines of configuration in knot-dns w/ automatic zone signing. No extra configuration on any of my nsd secondary servers. Not sure if I’d call that expensive to deploy. For a small zone, getting basic signing going is easier than configuring a Letsencrypt acme client. The biggest pain point is finding a registrar that allows you to set DS records for your zone.
Securing the “last mile” is not what DNSSEC tries to do. We’ve got DoT and DoH for this, so that’s a different issue from a DNSSEC point of view.
This is the only truly interesting point, and it’s a difficult and interesting one for sure. Not sure if I’d open that can of worms right away, because the TLS CA system is also far from ideal. But I suppose it is true that DNSSEC has one central anchor for trust, which would usually be the keys for the root zone. It is of course also true that any local registrar might be influenced by a local government. But all of this is true today. The implications this has for DANE should probably be discussed in the context of DANE and not of DNSSEC, but that’s just my 2 cents on this.
This is not an apology for Comcast, but my gut tells me that wrapping yet another protocol in HTTPS is maybe not the best idea. To be more technical, TCP overhead and SNI loopholes make DoH seem like a half-solution, which could be worse than no solution at all.
Also, I think DoH is yet another Google power play, just like AMP, to build yet another moat around the castle.
Yea .. I mean, the slides aren’t wrong. And once Firefox is DoH->Cloudflare and Chrome is DoH->Google, who is to say either one wouldn’t just decide to delist a DNS entry they don’t like, claiming it’s hate speech? Keep in mind, both companies have already done this to varying extents, and it should be deeply troubling.
I run a local DNS server on my router that I control. Still, it queries root servers plain-text and my ISP could see that (even though I don’t use my ISPs DNS .. not sure if they’re set to monitor raw DNS traffic or not). I could also pump that through one of my hosting providers (Vultr or DigitalOcean) and it’s less likely they’d be monitoring and selling DNS data (but they still could if they wanted).
Ultimately, the right legal argument to lobby for is banning ISPs from collecting DNS data or altering DNS requests at all (no more redirects to a Comcast search page for non-existent domains!). That feels like the more correct solution than centralizing control in Google/CloudFlare’s DNS.
I also run a local resolver (a Pi-hole, for DNS-based ad filtering), but also use DoT (DNS over TLS) between my resolver and an upstream resolver.
It seems like host OS resolvers natively (and opportunistically) supporting DoT would solve a lot of problems, vs. this weird Frankenstein per-app DoH thing we seem to be moving towards.
They most certainly do, and a few less scrupulous ISPs have been shown to be MITM’ing DNS responses for various reasons but usually $$$.
Isn’t the real problem here the user’s choice of ISP? Or has so much of the internet become extremely monopolized around the world?
In the USA, there is basically zero choice in who your ISP can be; many even big urban areas have only one ISP. Perhaps if SpaceX can get their Starlink stuff commercialized next year, the effective number will grow to 2…. maybe. I can’t speak for other countries, but in my experience they aren’t generally better in terms of options, though they do tend to be better in price. US ISPs know they are the only game in town and charge accordingly.
That’s understandable, but DoH is not the answer here. Addressing the lack of choice is the answer. If Google and Firefox/CF get a free pass in the US, it affects the rest of the world.
I totally agree with you.
I am considering myself lucky then. I can choose between anything that can run over POTS, cable, and fiber, with the POTS and fiber networks being required to open up their networks to other ISPs as well.
This is not strictly true at all. Most urban areas in the US of A are a duopoly insofar as the internet goes: you usually have a choice between the cableco or the telco. In addition, telcos are often required to provide CLECs with some sort of access to the copper lines as well, so there’s some potential for additional choices like Sonic DSL, although those are becoming more rare because the telco often charges CLECs more for access to this copper than the price of its internet service sold directly to the consumer. Sonic is one of the few remaining independent CLECs out there.
Some areas do have extra third choices like PAXIO, Webpass, Google Fiber, as well as local municipal networks in some areas.
Only 10% of the US has more than 2 providers at any speed. When you get into slower speeds, there are 2 choices (the telco and the cable company).
“At the FCC’s 25Mbps download/3Mbps upload broadband standard, there are no ISPs at all in 30 percent of developed census blocks and only one offering service that fast in 48 percent of the blocks. About 55 percent of census blocks have no 100Mbps/10Mbps providers, and only about 10 percent have multiple options at that speed.” - https://arstechnica.com/information-technology/2016/08/us-broadband-still-no-isp-choice-for-many-especially-at-higher-speeds/
Figure 5 in the linked article above pretty much sums it up. So we are both correct, depending on perspective. :) The FCC thinks all is fine and dandy in the world of US internet providers. Something tells me the Cable companies are encouraging that behaviour :)
Once the standards are in place for DHCP (et al.) to report a default DoH endpoint to use, and OSes can propagate their own idea, informed by DHCP or user configuration, to clients (or do the resolving for them via DoH), there’s little reason for Firefox or Chrome not to use that data.
That issue is regularly mentioned in the draft RFCs, so there will be some solution to it. But given that there’s hijacking going on, browser vendors seem to be looking for a solution now instead of waiting for this part of the puzzle to propagate through systems they don’t control.
Also, web browsers have a culture of “implement first, standardize once you’ve experienced the constraints”, so this is well within their regular modus operandi - just outside their regular field of work.
Lobbying work isn’t as effective as just starting to use DoH because you have to do it in each of the nearly 200 jurisdictions around the globe.
Not holding my breath on a legal solution. US gov has not been a friend of privacy, and other governments are far worse.
Only thing coming to mind here is some sort of privacy-oriented low-profit/non-profit organization to pool and anonymize queries over many different clients. Even that’s not so great when most setups are 8.8.8.8, admin/password, and absolutely DNGAF.
Like Cloudflare not supporting edns.. :/
The TCP/TLS overhead can be minimized with keep-alive, which DoT clients like stubby already do. You can simply reuse an established connection for multiple queries. This has worked very well for me in my own setups.
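As a rough sketch of that reuse (assuming dnspython is installed and that 1.1.1.1:853 with the TLS name cloudflare-dns.com is a reachable DoT resolver from your network), several queries here share one TCP/TLS session instead of paying the handshake cost each time:

```python
import socket
import ssl
import struct

import dns.message

def recv_exact(sock, n):
    """Read exactly n bytes from the TLS socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed mid-response")
        buf += chunk
    return buf

ctx = ssl.create_default_context()
with socket.create_connection(("1.1.1.1", 853), timeout=5) as tcp:
    with ctx.wrap_socket(tcp, server_hostname="cloudflare-dns.com") as tls:
        # All three queries ride over the same TCP/TLS session.
        for name in ("example.com", "example.org", "example.net"):
            wire = dns.message.make_query(name, "A").to_wire()
            # RFC 7858 reuses the DNS-over-TCP framing: a 2-byte length prefix.
            tls.sendall(struct.pack("!H", len(wire)) + wire)
            (length,) = struct.unpack("!H", recv_exact(tls, 2))
            reply = dns.message.from_wire(recv_exact(tls, length))
            print(name, [str(rrset) for rrset in reply.answer])
```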
As others have probably pointed out, the SNI loophole can be closed with eSNI. How soon and whether this is going to take hold is anyone’s guess at this point. But I personally see privacy as more of a side effect, as I simply care that my queries are not manipulated by weird networks.
I would love to agree with you here (and I do so in principle), but from my own experience with DoT and DoH I can tell you that many networks simply don’t allow a direct DoT port, leaving you with either DoH or plain DNS to an untrusted (and probably non-validating) resolver. The shift to “X over HTTPS” is but a reaction to real world limitations, where almost everything but HTTP(s) is likely to be unreachable in many networks. I’d love to use DoT and do so whenever I can. But I need to disable it more often than I’d like to. :(
A minor fun fact regarding DoH: Since an http(s) server can redirect to different endpoints, it’s in principle possible for clients to choose different “offers” - a DoH server may offer standard resolution on /query and filter out ad networks on /pihole or whatever. And using dnsdist, this is easy to set up and operate yourself. DoH doesn’t really mean DNS centralization but the opportunity for quite the opposite: You could now take your own resolver with you wherever you go.
I’m fine with DoH as a configurable system-level feature, but application-level resolvers are bad news, and that seems to be where all of this is headed.
If that’s where it goes, many applications will default to their own favorite DoH provider for some kind of kickback. The prospect of having to find the “use system resolver” check box for every single application after every other update does not bring joy.
HTTPS is upgrading to QUIC, so we’ll eventually have DNS back on UDP, but with proper encryption this time.
The arguments in this article focus around the same premise, for the most part: this isn’t a complete fix, and it adds extra centralisation. So let’s deal with the centralisation because I feel this is the main issue with Mozilla’s implementation.
As somebody who lives in the UK, it’s no secret that my ISP is already actively engaged in reading and modifying my plaintext DNS queries.
In fact, Mozilla have agreed not to turn on DoH by default in the UK after the IWF publicly claimed that encrypting DNS would facilitate spreading child pornography. Think of the children!
While I’d prefer not to send all this data to Cloudflare, the fact is that unless I run my own DNS server, ultimately I am always going to be sending my DNS requests to an untrusted third party.
For me and other users in the UK, DNS-over-HTTPS is an improvement, even with the faults of this implementation.
In the US, am I correct in thinking that this is the other way around - there’s no (tin foil hat moment - no public) program forcing ISPs to filter or report DNS queries? Meaning that if you must trust someone, your ISP is as good a bet as Cloudflare, or better? Genuine question - IIRC this was shot down along with SOPA, but I’m not 100% sure of the state of it.
All of this by way of saying that I wish we could change the focus of this conversation. How do we keep the benefits of DNS-over-HTTPS and avoid unnecessary centralisation? What steps can Mozilla take, or can we take as private individuals, to fix this implementation?
I believe that the main problem is that Mozilla/Firefox has a long history of avoiding presenting a user with necessary choices. It doesn’t ask you for a preferred search engine on profile creation. Neither does it ask you if everything you type in the address bar should be sent to that search engine (usually google). And, quite obviously, it doesn’t ask you for a preferred DoH resolver.
All of this means that the vast majority of people will feed data to these preselected companies because Firefox intentionally avoids asking its users. You can of course change all of these things afterwards in the settings. But many people - quite rightfully in my opinion - worry that most users will not touch these settings at all. What’s odd is that this discussion now arises as an issue around DoH rather than as a general problem that should be addressed by itself and that extends way beyond Mozilla’s preferred DoH resolver.
Excellent point. At least for search, Mozilla has a bad incentive, in that their primary revenue stream is Google paying for the privilege of being a default provider.
Perhaps I’m missing something obvious, but… why can’t we have DoH without Cloudflare? What’s to stop me from running my own DoH server?
Absolutely nothing. Cloudflare is a red herring.
It’s true that the DoH configuration interface in the Nightly and Beta releases of Firefox already presents the option to use another provider.
But it’s … nearly disingenuous to consider this issue in the abstract…
May I rephrase the question?
Now for this one:
The D in DNS stands for Distributed. You may operate a DNS server. By design, your DNS server will respond to queries when it knows the answer, it will speak to peer DNS servers to learn new answers, and it will speak to superior DNS servers when peers don’t know the answer. (Or something like that! I’m no DNS expert.)
There are many gratis, libre, proprietary, and commercial DNS server software options available for every OS and hardware combination that could conceivably become connected to a network.
But Firefox DoH is operated by Cloudflare. Full stop. It was designed to be operated by a single large entity from day one.
The authors’ presentation of their arguments may be flawed, but I believe that their position is correct in principle: A Firefox default-on DoH via Cloudflare will be bad for the Internet.
No, it stands for Domain. Domain Name System.
I agree with you, but please don’t promote my side of the argument using bad information.
Hm, that’s weird. Uh… Hm. ?:)
There are several: https://github.com/curl/curl/wiki/DNS-over-HTTPS#publicly-available-servers
Your understanding of DNS seems flawed here (aside from it standing for Domain Name System instead of Distributed… Name System?). I’d do more reading. DoH itself isn’t evil, and I think that’s one of the unfortunate side effects of this whole controversy – the technology being blamed for the defaults Firefox is setting.
You can have a resolver in your network that speaks plain DNS internally and does DoH to the outside world if you wanted. Due to the distributed nature of DNS, DoH servers themselves can be recursive resolvers and ask other DNS servers for answers they don’t know about.
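To make that concrete, here is a minimal sketch of such a forwarder, assuming the requests library is installed; the upstream DoH URL and the listening port are illustrative. It accepts plain DNS over UDP locally and forwards the wire-format query in an RFC 8484 POST body, reusing one HTTPS connection upstream.

```python
import socket

import requests

UPSTREAM = "https://cloudflare-dns.com/dns-query"   # any DoH endpoint would do

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 5353))                      # point local clients here
session = requests.Session()                        # reuse one TLS connection upstream

while True:
    wire, client = sock.recvfrom(4096)              # raw DNS query from a local client
    resp = session.post(
        UPSTREAM,
        data=wire,                                   # RFC 8484: the query goes as-is in the body
        headers={"Content-Type": "application/dns-message"},
        timeout=5,
    )
    sock.sendto(resp.content, client)                # the DoH reply is also wire-format DNS
```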
The distributed nature of DNS doesn’t end with DoH. The problem (as I see it, at least) is that it centralizes client DNS comms to Cloudflare. If Firefox shipped a list of DoH servers that weren’t connected to Cloudflare, and randomly distributed customers across them, then I think this would be a very different conversation.
I even agree with not wanting the default to be cloudflare, but this whole answer seems pretty ill-informed.
That’s not true. You can configure Firefox to use any DoH server out there. It “just” uses Cloudflare’s DoH servers by default. You can change this in the settings and you can modify the behaviour even more by going the about:config route.
And you can run your own DoH server. I’m doing this right now. You still have the freedom to use your own services for this, just like you did with plain DNS. For some odd reason the whole DoH discussion has become tied up with the myth that it is somehow forcibly connected to Cloudflare. It isn’t. It’s just a bad default. We should discuss the bad default and DoH separately, but this has become a tangled-up and overly emotional mess for most people.
I believe Google is also running a DoH service, presumably for the future benefit of Chrome.
Not gonna configure my Firefox to point there, though.
Still waiting for someone to explain what “security” this provides. They can still see the IPs you connect to. Just look for the next SYN packet after a response comes back from a known DoH endpoint…
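A rough sketch of that correlation (assuming scapy is installed and the script runs with capture privileges; the resolver IP and the 0.5 s window are arbitrary illustrative choices): whenever a reply comes back from a known DoH endpoint, the destination of the next outgoing SYN is a reasonable guess at what was just looked up.

```python
from scapy.all import IP, TCP, sniff

DOH_RESOLVER = "104.16.248.249"     # illustrative: a known DoH endpoint's IP
last_doh_reply = 0.0

def watch(pkt):
    global last_doh_reply
    if IP not in pkt or TCP not in pkt:
        return
    if pkt[IP].src == DOH_RESOLVER:
        last_doh_reply = float(pkt.time)             # a DoH answer just arrived
        return
    flags = int(pkt[TCP].flags)
    # SYN without ACK shortly after a DoH reply: likely the freshly resolved host.
    if flags & 0x02 and not flags & 0x10 and float(pkt.time) - last_doh_reply < 0.5:
        print("likely freshly resolved destination:", pkt[IP].dst)

sniff(filter="tcp", prn=watch, store=False)
```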
The one thing this standard does is create a backdoor to make it harder for you to filter content on your network (as required by law in some situations) and makes it harder for your security team to detect bots/malware/intrusions by triggering on lookups to known malware C&C servers. TLS 1.3 plus this means it’s extremely difficult especially for critical infrastructure (e.g., power generation companies) to filter egress traffic effectively.
If you want to stay out of prison for dissenting, you need a VPN*. If you want privacy, use a VPN*. This doesn’t solve either; it only makes it possible to avoid naughty DNS servers that modify your responses. But we already had solutions for that.
* and make sure the VPN is trustworthy or it’s an endpoint you control.
No need to put scare-quotes on security. It hides DNS traffic. Along with eSNI it hides the domains you’re visiting. And if the domain uses a popular CDN, this makes the traffic very hard to spy on, which is a measurable improvement in privacy.
Oh no, aren’t VPNs evil, because, as you said yourself, they make “it harder for you to filter content on your network (as required by law in some situations)”?
The false-sense-of-security traffic inspection middleboxes that were always easy to bypass with a VPN or even a SOCKS proxy, were needlessly weakening TLS for decades. Fortunately, they’re dead now.
VPNs are much easier to block. You can do it at the protocol level for most types (you’re whitelisting outbound ports and protocols, right?) then you have lists of the public VPN providers to block as well.
If you’re only allowing outbound TCP 443 and a few others someone could do TCP OpenVPN over it, but performance is terrible and it’s unreliable so most people don’t try.
Regardless, there are DPI devices which can fingerprint OpenVPN traffic and tell it apart from HTTPS traffic because it behaves differently (different send/receive patterns), and then you inject RST packets to break the session.
Seeing the IPs that you connect to isn’t always useful; e.g., an attacker wouldn’t realistically gain anything if a website you connect to is served through Cloudflare, which serves enough different websites that the IP provides little information to the attacker.
You can easily connect to the IP and grab the list of domains on the SAN certificate that CloudFlare is using on that IP address to figure out where they’re connecting. There’s only like 25 per certificate. It’s not hard to figure out if you are targeting someone.
e.g., it would not be difficult to map 104.18.43.206 to the CloudFlare endpoint of sni229201.cloudflaressl.com and once you have that IP to CloudFlare node mapping sorted out you can craft a valid request …
This list is encrypted in TLS 1.3, so you can’t easily grab it anymore (Firefox and Cloudflare also support eSNI, which plugs another hole).
You misunderstand. I would create a database mapping of all CloudFlare nodes in existence: sniXXXXXX.cloudflaressl.com <—> IP addresses.
When I see traffic to one of these IPs, I simply make a new TLS handshake to sniXXXXXX.cloudflaressl.com, grab the certificate, read all of the domain names in the certificate. I don’t need a plaintext SNI request to see where they’re going; I can just infer it by asking the same server myself.
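A hedged sketch of that active probe, assuming the cryptography package is installed; the IP and SNI value are the illustrative ones from the comment above. Verification is deliberately disabled because the point is only to read the certificate served for that name, not to trust it:

```python
import socket
import ssl

from cryptography import x509

ip, sni = "104.18.43.206", "sni229201.cloudflaressl.com"  # values from the comment above

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE          # we only want to read the cert, not trust it

with socket.create_connection((ip, 443), timeout=5) as tcp:
    with ctx.wrap_socket(tcp, server_hostname=sni) as tls:
        der = tls.getpeercert(binary_form=True)   # raw DER, available even unverified

cert = x509.load_der_x509_certificate(der)
san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
print(san.value.get_values_for_type(x509.DNSName))   # the ~25 domains on the bundle
```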
You’ll only learn that all Cloudflare customers share a handful of IP addresses, and there are millions of sites per IP.
The certificate bundles aren’t tied to an IP, and AFAIK even the bundles aren’t constant.
That’s fine, then someone will just excise the encrypted SNI part to use it in a crafted packet that’s almost like a replay attack. That will still get you the list of 25ish domains they could have accessed.
Hell, it looks like you could eventually build rainbow tables out of your captured SNI packets once you have sorted through the available metadata to see where the user went. (Assuming CF doesn’t rotate these keys regularly.) Just analyze all sites on that cert, see all the 3rd-party domains you need to load, and you can figure it out.
This is a small hurdle for a state actor.
edit: I’m pretty sure you can just do a replay of the SYN to CloudFlare and not worry about trying to rip out the SNI part to get the correct certificate (TCP Fast Open)
edit2:
Yeah you can’t replay the ESNI value, but if you replay the entire Client Hello I think it should work. The server won’t know the client’s “ephemeral” ESNI key was re-used.
https://datatracker.ietf.org/doc/draft-ietf-tls-esni/?include_text=1
The Client Hello ideally only contains the client’s public key material, so you can’t decrypt the ESNI even if you replay the Client Hello, unless you use a symmetric DH operation (which is rare and not included in TLS 1.3) or break ECDH/EdDH/ECDHE.
You are correct. I was going to post this after some coffee this morning. The response is encrypted with the client’s ephemeral ECDHE key.
So this breaks this type of inspection.
However, if you’re connecting to an endpoint that’s not on a CDN and is unique, the observer can still figure out where you’re going. Is the solution we’re going to be promoting over the next few years to increase reliance on these CDN providers? I really don’t like what CloudFlare has become for many reasons, including the well-known fact that nothing is free. They might have started with intentions of making a better web, but wait until their IPO. Once they go public, all bets are off. All your data will be harvested and monetized. Privacy will ostensibly be gone.
In America it’s illegal to make ethical choices if it doesn’t maximize shareholder value. (eBay v. Newmark, 2010)
yes, and that protects exactly.. no one who needs it.
if you live somewhere where you need the security to hide your DNS requests, cloudflare will be the first thing to get blocked. the only really secure thing to do is onion routing of the whole traffic. centralizing the internet makes it more brittle.
additionally: ease of use is no argument if it means trading-off security. these tradeoffs put people in danger.
As someone who barely knows his TCPs from his UDPs, I had to read up on DoH, and I must say that a technology must be doing something right if it elicits both your reaction and the following from the Wikipedia article:
i think DoH is the wrong solution for this problem, stuffing name resolution into an unrelated protocol. it may be true that it has the side-effect of removing the ISP-DNS-filters, but those can already be circumvented by using another name server.
a better solution would be to have a better UI to change the nameservers, possibly in connection with DNS over TLS, which isn’t perfect, but at least it isn’t a mixture of protocols which DoH is.
it could be an argument that the ISP could block port 53, and DoH would fix that. then we have another problem, namely that the internet connection isn’t worth its name. the problem with these “solutions” is that they will become the norm, and then the norm will be to have a blocked port 53. it’s a bit like the broken window theory, only with piling complexity and bad solutions.
maybe that’s my problem with it: DoH feels like a weird kludge like IP-over-ICMP or IP-over-DNS to use a paid wifi without paying.
I agree with you that it feels like a kludge, it feels icky to me too.
But it’s something that could lead to a better internet - at the moment DNS traffic is both unencrypted, but more importantly, unauthenticated. If a solution can be found that improves this, even if it’s a horrible hack, I think it’s a net win.
Internet networking, like politics, is the art of the possible. We can all dream of a perfect world not beholden to vast corporate interests at every level of the protocol stack, but in the meantime the best we can hope for is to leverage some vast corporate interests against others.
It may be a short term win, but in the end we are stuck forever with another bad protocol because nobody took the time and effort to build a better one, or just had an agenda.
DoH is just another way of centralizing the net. sure you can set another resolver in the settings, but for how long? you’d have to do that on every device. or use the syncing functionality which is.. centralized. and even, who does that?
i don’t think that “big players” in politics or in tech, do things out of altruistic reasoning, but, in the best case, good old dollar. that paired with most of the things being awful hacks (again in both, politics and tech) paints a bright future.
I mean, the reality we live in now, where a company like Cloudflare has a de-facto veto on Internet content, just grew organically. It’s an inevitable consequence of technical progress: as stuff (like hosting and DDoS protection) gets commoditized, efficiencies of scale mean large companies are the only ones who have a hope of making a profit.
To their credit, Cloudflare seem aware and uncomfortable about their role in all this, but that’s scant consolation as they’re under the same profitability requirements as the rest of the free world. They can be sold, or move to “evil” to save their profits.
Yep - even prior to DoH, Cloudflare have BGP announce privileges and can issue certificates which are trusted by browsers, which are two powers that should never have been combined in the same entity (being able to funnel a site’s traffic to your servers and also generate valid certs for those requests).
… and with their resolver being the default one they even have control over the rest, amazing!
the need for something like DDoS protection is more a consequence of full-throttle capitalism ;)
For the fraction of internet users running Firefox, sure. Google will handle the rest. No doubt MSFT will hop on board too.
Or technical debt inherited from a more trusting vision of the internet…
(edit addressed Cloudflare’s role as default DoH provider for Firefox)
UK ISPs have to block child porn or the CEO will be held accountable and go to prison. They do DNS filtering, because IP filtering is impossible. Now they can’t even do that.
I’m aware of the legal requirements of UK ISPs (although why they feel they need to celebrate this requirement by awarding (then withdrawing) the “Internet Villain of the Year” to Mozilla is beyond me).
I guess the “responsibility” for filtering/blocking will move up to Cloudflare.
we’ve had a lengthy political discussion in germany about this topic (where “filtering” was for a long time the preferred political solution). now the policy is to ask the respective hoster to delete these things. i have no good english source for this, so here is the translated german wikipedia article (original)
You can push the ISP to DNS block it (though it’s harder and usually leads to years-long court cases as in Vodafone’s case).
Telekom also loves to push their own search engine with advertisements for NXDOMAIN responses.
It does one useful thing: It prevents them from MITMing these packets and changing them.
I’d like encrypted DNS, but I’m very strongly against Firefox selecting my DNS resolver for me for reasons that have already been stated in threads here. I also strongly prefer keeping the web stack out of my relatively simple client-side DNS resolver. Diverse ecosystems are important, and the only way to maintain them is to keep software simple enough that it is cheap to implement.
Sure, but that’s rare. It would require a targeted attack or a naughty ISP to be altering results.
What it most certainly does is prevent me from forcing clients to use my on-premises DNS resolver. Now you have zero control over the client devices on your network when it comes to DNS, and additionally we’re about to lose HTTPS inspection in the near future. This is the wrong approach to solve the problem. Admins need controls and visibility to secure their networks.
Mark my words, as soon as this is supported by a few different language libraries you’ll see malware and all sorts of evil things using it to hide exfiltration and C&C because it will be hidden in the noise of normal user traffic.
It will be almost impossible now to stop users or bad guys from accessing Dropbox, for example. “Secure the endpoints” is not the answer. You can secure them, deny BYOD, etc, but you have to assume they’re compromised and/or rooted. Only the network is your source of truth about what’s really happening and now we’re losing that.
I guess I don’t have much sympathy for the argument that network administrators will lose insight into the traffic on their networks. That seems like a bonus to me, despite the frustration for blue teams.
Same. I understand that in some places there are legal auditing requirements, but practically everywhere else it’s just reflexive hostility towards workers that makes us use networks that are pervasively censored and surveilled.
Except that it’s not rare. You will find this in many hotel wifis. It hits you particularly hard if you have a DNSSEC-validating resolver, which doesn’t take kindly to these manipulations. Having a trusted recursor is generally important if you want to be sure that you talk to a resolver you can actually trust, which is in turn important if you want to delegate validation.
Just as HTTPS prevents you from forcing your clients to talk to an on-premises cache or whatever. The solution is the same in both cases: you need to intercept TLS if this is a hard requirement for you. DoH and DoT aren’t making anything more complicated, they’re just bringing DNS on par with the protection level we have had for other protocols for a while.
You hit the nail on the head here. Far from being rare, in the US it’s ubiquitous, whether it’s your hotel, your employer, or your residential ISP.
Good. Corporate networks must die. “Secure the endpoints” is THE ONLY answer.
https://beyondcorp.com
If Google can pull it off at Google scale, so can you. Small teams with lots of remote people have always been Just Using The Internet with authentication. It’s the “Enterprise”™ sector that’s been suckered into buying “Security Products”™ (more like “Spying Products”) to keep trying to use this outdated model.
Could you please elaborate? Is this about a “non-canonical” local resolver or do you think it also has repercussions for locally hosted zones? For example, *.internal.example.org locally versus *.example.org on the official internet. Or did I misunderstand you and you just meant a local forwarding resolver? I honestly didn’t read up enough on DoH yet, just wondering.
Set up your own DoH server and you can once again inspect it. Ideally you use a capable and modern TLS-intercepting box to inspect all traffic going in and out (as well as caching it).
How? The IP or the URL of the DoH server you are talking to will stand out like a signal flare… I think that dumping the file to a cloud-service is way more efficient, easier and effective.
The US Gov often gives security teams at Critical Infrastructure networks early reports detailing all sorts of potential attacks, including early heads-up on malware that may or may not be targeted. This includes a list of C&C domains that may be accessed. If the software can hide its DNS requests by making them look like normal HTTPS traffic to CloudFlare, that makes it even harder to identify the malware’s existence on your network.
If you want the Russians or Chinese to hack our grid, this is a great tool for them along with TLS 1.3. The power generation utility that I worked at did HTTPS interception and logging of ALL HTTPS and DNS requests from every device everywhere for analysis (and there was a program coming online to stream it to the government for early detection) and now this is becoming impossible.
This pertains only to firefox… So why would an installation of firefox be on one of those networks?
Furthermore: You know the ip of cloudflare’s DoH server. You could just block that and be done with it right? If the malware uses some other server, that will show up as well.
Firefox won’t be on that network, but HTTPS certainly will be. Likely not on (hopefully still airgapped) SCADA, but on other sensitive networks that give some level of access into SCADA through various means.
The point is that as DoH thrives and becomes commonplace and someone like CloudFlare runs this service, it’s easy to hide DNS requests mixed in with normal looking HTTPS traffic. The client can be a python script with DoH capability.
As for CloudFlare’s DoH service – it appears to be running on separate IPs at the moment, but there’s no reason why they couldn’t put this on their normal endpoints. DoH is HTTPS, so why not share it with their normal CDN endpoints? This would not be difficult to do in Nginx. In fact this would be far simpler than running HTTPS and SSH on the same port, which is also possible.
Basically any normal-looking HTTPS endpoint could become a DoH provider. Hack some inconspicuous server, reconfigure their webserver to accept DoH too, and now you’ve got the backdoor you need for your malware.
CloudFlare and Firefox are not my concern; DoH as a whole is.
Fair point…
But now I’m wondering why you would have access to cloudflare on such a network… Or why there won’t be a root certificate on all the machines (and firefoxes) in the network so that the organization can MITM all outgoing traffic?
There are going to be some networks running servers that need outbound HTTPS for various reasons, but a lot of that can be locked down. But what about the network that the sysadmins are on? They need full outbound HTTPS, and a collaborating piece of malware on one of their machines gives them access to the internet and to other internal sensitive networks. These types of attacks are always complex and targeted. Think of the incredible work we did with Stuxnet.
As for MITM the traffic… look at this thread where it’s being discussed further https://lobste.rs/s/pechdy/turn_off_doh_firefox_now#c_inbnse
So why cloudflare? I doubt you’d need any high-volume sites that use cloudflare for those setups.
If the networks really are that sensitive, just separate them physically, give the sysadmins two machines and never transport data in digital form from the one to the other….
If you are not willing to take these kinds of steps, your internal networks simply aren’t that critical.
That is not how the networks at our power utilities work. And it’s not how the employees operate either.
Many power companies refuse to implement new technologies or network topologies unless another utility does it first. Which sadly means that in certain regions like MISO you can expect most of the utilities to be using the same firewalls, etc etc. Very dumb. Can’t wait for Russia to abuse this and take down half the country.
The people that work there aren’t the brightest. “Why are user accounts being managed with a perl script that overwrites /etc/passwd, /etc/shadow, and /etc/groups?” Well because that’s the way they’ve always done it, so if your team needs to install a webserver you also need to tell them to add the www user to their database so the user account doesn’t get removed. “Why are the admins ssh-ing as root everywhere with a DSA key that has no passphrase protection?” because the admins (of 20 years experience) refuse to learn ssh-agent and use basic security practices. I had meetings with developers who needed their application to be accessible across security domains and the developer couldn’t tell me what TCP port their application used. “What’s a port?”. These are people making 6 figures and doing about 30 minutes of work a day. It’s crazy.
These are highly regulated companies with slim margins. You want these kinds of drastic changes to their infrastructure? You better start convincing people to nationalize the grid because they don’t have the money to do it. Remember, it takes about 3 years to get a utility rate change approved. It’s a long process of auditing and paperwork and more auditing and paperwork to prove to the government that they really do need to increase utility rates to be able to afford X Y and Z in the future. They’re slow moving. Very slow.
Do you think customers will want their power bills to go up just so they can hire competent IT staff? Not a chance. (What we really need to do is stop subsidizing bulk power customers and making normal residential customers pay more than their fair share, but that’s a different discussion)
tl;dr we can all wish hope and pray that companies around the world will do the right thing, but it’s not going to happen anytime soon, especially in Critical Infrastructure environments because they’re so entrenched in their old ways and don’t have the budgets to do it the right way regardless.
In utility companies, the production networks running the power plants should simply not come into contact with the internet. There should always be a human in between the network and the internet. If this is not the case, they deserve what’s coming.
Believe it or not, I can actually understand why they dump into /etc/groups, /etc/passwd and /etc/shadow. There is no chance of any machine having outdated users by accident or by partial configuration this way, and if your network has only a few hundred users, who are all more or less trained to deal with complex technological systems on a basic level, why not? It’s not like they are running a regular common office workplace.
However, what you are telling me about SSH and TCP is quite shocking. That is just plain incompetence.
I’m not living in the US. In fact; the last time I’ve been there I was at an age from which I can barely remember anything other than that the twin towers still stood. I am often told that it’s a different country now, so I can’t say anything useful about this.
Depends…. If the outages are below about 2 short power outages per year on average, then no I wouldn’t.
If it starts to escalate to one outage per month and 25% of them can be blamed on incompetent IT-staff? You’ve reached the point where I am going to install my own diesel generators as those will quickly become profitable.
I don’t quite understand. Regardless of the TLS version, if you want to inspect https you need to intercept and decrypt outgoing https traffic via a middlebox. This applies to regular https just as it applies to DoH. If you are required to secure your network inspecting encrypted traffic, you will continue to do so just like you’ve always done. In this sense, DoH is even less intrusive than, say, DoT because your standard https intercept proxy can be adapted to deal with it.
Wasn’t the goal of TLS 1.3 to make interception impossible? I am certain that was one of the major goals, but I didn’t follow through the RFC’s development.
How would interception work? With ESNI in TLS 1.3, the client does a DNS lookup to retrieve the key to encrypt the ESNI request with. The middlebox couldn’t decrypt the ESNI and generate a certificate by the local trusted CA because it doesn’t know the hostname the client wants to access. So now… a middlebox will also have to be a DNS server so it can capture the lookup for the ESNI key, generate a fake key on demand, and have it ready when the TLS connection comes through and is intercepted?
This is getting quite complex, and there may be additional middlebox defeat features I’m not aware of
No, the basic handshake can still be intercepted similarly to TLS 1.2, so that’s not a problem with 1.3.
ESNI might be a slightly different issue. But you could just take a hardline stance and drop TLS handshakes which use ESNI and filter the ESNI records (with a REFUSED error?) in your resolver. If you need to enforce TLS intercept, you will need to enforce interceptability of that traffic, and that might mean refusing TLS handshakes which use ESNI. But I haven’t read the RFC drafts yet, so there might be easier/better ways to achieve this. In any case, none of this should be a deal breaker. TLS intercept proxies have always been disruptive (e.g. client certificates cannot be forwarded past an intercept proxy) and this will apply to ESNI just as it has to past aspects of TLS.
What I feel should be clear is that none of this will suddenly make existing practices impossible. Restrictive environments will continue to be able to be restrictive, just as they have in the past. The major difference will hopefully be that we will be safer by default even in open networks, such as public wifis, where a large number of users are currently exposed to unnecessary risks.
I don’t think this is possible. TLS 1.3 means ESNI is a given. If half the internet uses TLS 1.3-only, you have no choice but to support it. AIUI they’ve gone to great lengths to prevent downgrade attacks which will stop the interception.
I have a contact at BlueCoat and am reaching out to see what the current state is because their speciality is exactly this.
Right now, ESNI is not mandatory for TLS 1.3. TLS 1.3 is a complete and published RFC standard. ESNI is only a draft and is certainly not mandated by TLS 1.3. You don’t need to run downgrade attacks to “intercept” TLS 1.3. Intercept proxies simply complete the TLS handshake by returning a certificate for a given domain issued by a custom CA that’s (hopefully) in the client’s trust store. This works just the same for 1.3 as it does for any earlier method.
Do we know what the failure mode is if ESNI is rejected? Everyone wants ESNI for their privacy and browsers will certainly implement it, so I suspect it will be more common than not.
edit: and thanks, I was still operating under the impression that ESNI was part of the final TLS 1.3 draft. I haven’t taken the time to read through it all and there’s a lot of misinformation out there. I’ve been too busy to dig in deeper, and security is not my day job right now.
This is fear and speculation not based on facts.
Cloudflare’s DoH is compliant with GDPR, because there’s no PII sent or stored, apart from the technically-necessary IP of the TCP connection, and Cloudflare doesn’t even retain the IP address. It’s clearly stated in the privacy policy, which is very strict, and borderline paranoid. And compliance with the policy is audited externally by KPMG.
The author has written the entire article, including cutesy comic, and hasn’t even checked the one fact it is about?
Because the resolver doesn’t store personal info, and doesn’t store any non-aggregated logs beyond 24h, it’s pretty safe from being subpoenaed to hand the (non)data over.
The fear of the U.S. government going as far as mandating implementation of a secret backdoor is a real one, but if it comes to this, we’re all fucked, because Firefox itself is under the U.S.-based Mozilla org/corp., and so are Google and Apple.
It would be better if the alternative was system-level DoH that uses a variety of trusted providers, but currently there’s no such thing. The actual alternative is sending unencrypted DNS packets, which we know are commonly logged and manipulated. The alternative is giving your DNS traffic to your ISP, who knows your real identity. You’ve probably clicked “Agree” on your ISP’s privacy policy that includes “sharing information with selected partners and affiliates”.
I don’t download Firefox from Mozilla; I get it from Debian, which is not a US-based org/corp. They have been good about stripping out the privacy-hostile gunk in browsers so far, and hopefully they will continue with this when DoH hits the versions they ship.
Software in the Public Interest is a US-based 501c3. They own the Debian trademark, domain name, and other infrastructure. They are as much “Debian” as MozFo is Mozilla.
SPI has no power over Debian Developers to force us to insert backdoors into Debian or anything similar. Also, the packaging process makes it very difficult for a DD to do so without other people noticing.
They’re much more akin to MoFo.
You’re right. Edited it.
There is also the alternative of using DNSCrypt v2, which everyone seems to be ignoring.
I wonder why the planet wants to put everything over TCP, and then, put everything over HTTP, and then, put everything over JSON, and then, put HTTP over TLS, and then, put TLS over UDP into a merged QUIC, and then, put TLS over QUIC, and then …
DNSCrypt sounds a much simpler approach.
Can whoever it was who voted incorrect please tell me why? Thanks
Not really, I do not trust cloudflare or Google so I do not want to have DoH by default. This change literally makes your DNS requests dependent on one company.
On a different note, I do not believe my ISP in the Netherlands is allowed to share DNS data with third parties.
I don’t understand. How does this make your DNS requests dependent on one company? Even with the defaults, the standard TRR mode has failover to the system resolver. Conceptually, you can easily switch your DoH provider or even run and use your own (which is easy to do w/ dnsdist-1.4.0 for example). The choices are even in the Firefox preferences and don’t require tinkering w/ about:config.
That being said, should Mozilla/Firefox prompt the user about these choices before enabling them quietly? Absolutely. But it should do these things for many other things too, such as your default search engine. Instead of disabling DoH, we should work on a better UX with these things.
That’s a pretty selfish stance. Your Netherlands ISP doesn’t serve the entire world. But they do serve you, so fuck the actual billions of people using DNS outside of the EU?
Every time this comes up I see people complaining about how US-centric these arguments for DoH are. But insecure DNS isn’t a US problem, it’s a whole world problem. EU people bitching about DoH / Cloudflare come off like billionaires wanting another tax cut to me. There are people in these comments that live with DNS domain blocking for country-wide censorship.
People whose government is using such an ineffective censorship measure should feel lucky and protect the status quo. If your government is willing to deploy censorship, it’s a sure sign you cannot reason with that government. Better keep them unaware of their incompetence.
No, the author is arguing from facts, plus the fact that he/she is a Swiss citizen - and Swiss citizens, in fact, have way more protection against, and democratic control over, their own government than many Americans can ever dream of. It’s the country of secrecy and bank vaults after all.
The GDPR is just the “lowest common denominator”. There is quite literally nothing preventing European countries (like Switzerland) from adding additional requirements on top of it, as many European countries have already done. The fact that CloudFlare is GDPR-compliant does not mean they are in compliance with all local laws as well.
This is something we have to blindly trust CloudFlare on. Your argument flows along similar lines as the one the co-founder of AirVPN made a couple of weeks ago. I debunked that thoroughly here back then. Besides: Cloudflare falls under US-jurisdiction, which means that the traffic might be intercepted even before it reaches their servers.
Are we now? If such backdoors exist, they will probably not end up in the open-source versions of the browser, but they will show up in the binary distributions. For example, Slackware ships the entire source code including all the tools to build firefox from scratch, on their DVD-distribution. Debian has a system in which many builds are reproducible bit-for-bit. I think this greatly improves options and trust in case such a backdoor is found and we need to remove it. Although you would not have that much luck with any Apple, Google or Microsoft products.
Ever heard of DNSSEC? This makes the DNS-packets tamper-resistant to a very high degree.
Well, I’ve read, and clicked “Agree” on, my ISP’s privacy policy. It did not contain a line similar to what you’ve just mentioned. It did, however, contain a line along the lines of: “We will not share information with selected partners and affiliates beyond the minimum of what we need to provide you with your service, or when there exists a legal requirement to do so.” A surprisingly short list of what they share, with whom, and for which purposes follows shortly after that statement.
Granted: this also means I’m paying about €5 more per month than average, just like the more than a million customers who deliberately chose the same ISP.
Which brings me to one of your other statements:
The original author appears to be right, and I am sorry to say this, but it appears that the author is pretty well informed…
I also do not like how you simply gloss over the subject of DNS caching: an ISP resolver requests the DNS records of a domain only once per TTL for all of its customers, and every “home gateway” requests them once per TTL for all the users behind it, while with DoH every single client requests the records from Cloudflare individually, once per TTL. This makes individual clients far more identifiable than they would be with traditional DNS, especially once combined with DNSSEC.
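To make the caching point concrete (addresses and TTLs here are made up; 192.168.1.1 stands in for whatever shared cache sits between you and the authoritative servers):

    # two queries against a shared caching resolver: the TTL counts down, the upstream was only asked once
    $ dig +noall +answer example.org @192.168.1.1
    example.org.  3600  IN  A  93.184.216.34
    $ dig +noall +answer example.org @192.168.1.1
    example.org.  3540  IN  A  93.184.216.34

    # with DoH, each client asks Cloudflare directly, e.g. via their JSON API
    $ curl -s -H 'accept: application/dns-json' 'https://cloudflare-dns.com/dns-query?name=example.org&type=A'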
However it certainly is not:
Instead of worrying about whether Cloudflare might violate the GDPR, a Swiss citizen can simply request they show they do comply, and refer it to the national data protection agency if they don’t.
The US government can’t mandate Mozilla include a secret backdoor because Mozilla provides Open Source software, not a service. Mozilla could try, but anyone who noticed something unusual about a change to the code could undo the whole thing. It isn’t even necessary that the auditor understand precisely what’s been done: All the auditor needs to do is notice that Mozilla suddenly dropped some unusual code into the program and it’s all over for the secret backdoor.
It’s not always easy to tell the difference between malicious code and honest mistakes.
https://flak.tedunangst.com/post/warning-implicit-backdoor
Both malice and incompetence would be interesting to anyone watching the Firefox codebase.
Also, sudden, uncharacteristic incompetence would likely be taken as a sign of malice.
I think tedu’s point is that it’s nuanced. It’s easy to make mistakes that can be abused. And things that weren’t bugs/exploitable can become bugs/exploitable through good intent (cleaning up compiler warnings).
How do you know that the binary they ship matches the source code? I don’t think Mozilla is doing reproducible builds yet. https://bugzilla.mozilla.org/show_bug.cgi?id=885777
Someone apparently did manage to get a reproducible build for Firefox on Linux, though: https://glandium.org/blog/?p=3923
Well said. Centralizing all this data isn’t ideal, but it’s an incremental improvement of an internet standard. And that’s the only way standards improve.
Cloudflare could easily start a foundation, give it one up-to-date copy of each type of server they use, all the source code, two well-rounded engineers, a bunch of money, etc., then kick them loose. Make its mission statement “to protect the world from technological monopolies”. Let’s call it Groundflare. It could offer every service Cloudflare does, but on a non-profit basis.
Is that too much? OK, thinking smaller: the EFF and my ISP could operate DoH services.
Then Mozilla could put a “round robin” feature in there that lets me tell Firefox to rotate between the services.
Cloudflare has nearly 200 datacenters all over the world, with BGP-level routing to the nearest one. This requires a lot of deals for peering and colocation. It’s hard to do even if you can afford it — some of the networks will not even talk to you unless you’re the size of Cloudflare.
Nothing stops you from setting up your own DoH for yourself, but handling traffic for all Firefox users is not that easy, especially if you want it to be competitive with the speed of each user’s local ISP DNS.
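For the “for yourself” part, at least testing a self-hosted endpoint is trivial these days; curl (7.62 and later) can resolve through an arbitrary DoH URL, with dns.example.net below standing in for your own server:

    $ curl --doh-url https://dns.example.net/dns-query https://example.com/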
Of course CloudFlare has enough redundancy to ensure it is “never down, ever”. But human mistakes happen, precisely at the BGP level, and precisely because they manage it at global scale.
DNS is a distributed database that the rest of the internet’s security builds on (public services run over TLS when they need security, with certs from the usual CAs). Using a “single, distributed vendor” breaks the last component of the internet that was still distributed.
The race toward centralization never ends, and there’s still a long way to go: we can at least still boot our OS without connecting to an OAuth provider.
Am I the only one who thinks these tools are really toxic? Folks will just copy-paste all of this without realising that they’re locking some of their users out of their sites. There’s a good reason most real companies (Google included) are still happy to serve you over TLSv1.0. Mozilla markets such a configuration as “Old”, with a note that it “should be used only as a last resort”. I guess Google is using a last resort. ¯\_(ツ)_/¯
But it defaults to “Intermediate” and there are short explanations of each on the radio box. “Modern” does say “[…] and don’t need backward compatibility”.
Which up-to-date browsers do not support TLS v1.3? Sure, you could run IE7 or FF 3.0, etc, but I’d want to do everything in my power to discourage folks who are running outdated browsers from using them to browse the web, including denying them access to any website(s) I am running.
Google has different motives: show ads to and collect info from everyone.
It seems to be a common misconception that the internet’s sole reason for existence is now to deliver content to Firefox and Chrome. While this is perhaps true for some people - and may be true for you - it’s certainly not a base assumption you should operate on. There are still TLS libraries out there that don’t support TLSv1.3 (such as LibreSSL), and thus there are tools which can’t yet use TLSv1.3. There is - as far as I’m aware - little need from a security POV to prefer TLSv1.3 over v1.2 if the server provides a secure configuration. If you want to discourage people from using old browsers, display a dialogue box on your website based on their user agent string or whatever.
Removing support for TLS versions prior to 1.2 is most certainly a good idea, but removing support for TLSv1.2 is just jumping the gun, especially if you look at the Postfix configuration. If you want to enforce TLSv1.3 for your users, fine. But enforcing it when other mail servers try to deliver email is just asking for them to fall back to unencrypted traffic, effectively making the situation even worse.
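If the worry is inbound mail falling back to plaintext, a rough sketch of the less aggressive approach in Postfix’s main.cf (parameter names from the Postfix docs; adjust to your setup) keeps TLS opportunistic for other MTAs and only drops the ancient protocol versions:

    smtpd_tls_security_level = may
    smtpd_tls_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
    # stricter, TLSv1.3-only settings are better reserved for your own submission service (port 587)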
On a completely unrelated note: it’s funny that server-side cipher ordering is now seemingly discouraged in the intermediate/modern configurations. I guess that’s probably because every supported cipher is deemed “sufficiently secure”, but it’s still a funny detail considering all the tools that will berate you for not forcing server cipher order.
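If I’m reading the generated nginx output right, the intermediate profile now leaves the ordering to the client, i.e. it turns off exactly the directive the older hardening guides told you to force on:

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers off;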
Thanks for the reminder that some libraries (e.g. libressl) still do not support TLS v1.3. Since practically every browser I use (which extends beyond the chrome/FF combo) supports it, I hadn’t considered libraries like that.
I was also surprised when I noticed this. I’d used this site before, but back then “Modern” meant only supporting TLS 1.2+, which I think is fitting.
This discussion is a false-choice between “should Google/CloudFlare violate my privacy” and “should ISPs violate my privacy”.
Then please enlighten us, what are the other options?
I’m sorry, but… whatever happened to simply running your own local resolver? You can easily set up a local unbound to resolve names for you. You can just as easily rent a VM somewhere, set up unbound as a DNS-over-TLS/DNS-over-HTTPS resolver, and use that as your own private DNS server.
There seems to be no need to turn to something obscure when the answer can be this simple. This doesn’t even require innovation. It just requires you to care enough to take matters into your own hands, or to come together as collectives and run DoT/DoH resolvers yourselves.
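A minimal sketch of the rented-VM variant, serving DNS-over-TLS with unbound (the certificate paths and the client subnet you allow are placeholders; newer unbound releases built with HTTP/2 support can also answer DoH via https-port):

    server:
        interface: 0.0.0.0@853
        tls-port: 853
        tls-service-pem: "/etc/unbound/fullchain.pem"
        tls-service-key: "/etc/unbound/privkey.pem"
        access-control: 203.0.113.0/24 allow    # only your own clients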
This is exactly how it works today. My understanding is that the DoH stuff Firefox wants to do will undermine this by ignoring the DNS server handed out via DHCP.
The “DoH stuff” can indeed be configured to “undermine” the DNS server provided by DHCP. But that’s not a bad thing. You have the choice of setting up your own DoT/DoH-capable DNS resolver and configuring your system and/or Firefox to use it. You can also tell Firefox not to use DoH at all. “Disrespecting” the settings acquired via DHCP is, in general, a feature, not a bug. I don’t want to trust DNS resolvers provided by e.g. hotels or other public wifi networks. I want to use my own resolver over a secure connection. DNS-over-TLS and DNS-over-HTTPS allow me to do just that.
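On the system side, assuming your distro uses systemd-resolved, strict DoT to your own resolver looks roughly like this in /etc/systemd/resolved.conf (203.0.113.53 is a placeholder for your resolver’s address; depending on the systemd version, only opportunistic mode may be available):

    [Resolve]
    DNS=203.0.113.53
    DNSOverTLS=yes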
You might do that, you nerd, but nobody else, including your grandma and most of your friends and family, will.
We need to design better systems for them, or there will be a revolution.
No one has forgotten that innovation is a thing; everyone is concerned about how to actually launch a DNS replacement that gets widespread adoption among average users, with minimal breakage.
Some people are. (Like the ones behind the projects I mentioned.)
Others seem more focused on discussing (and justifying) whether it’s better to send everyone’s DNS to CloudFlare or to Comcast.
We’re having this discussion because DoH / DoT is the first solution that actually seems to have any meaningful chance of getting traction, and privacy is the major concern people have with it.
I’m sorry, I refuse to participate in disingenuous discussions. If you genuinely see zero advantages for end users, you should reread the original post.
The advantage is that DoH lets me and more importantly, my friends, evade South Korean censorship of North Korean websites.
Cloudflare already has (lots of) my data, so I guess that is an advantage. More than my ISP has, really, since Cloudflare terminates SSL on a lot of the sites I use.