Wait, Windows has had automatic root updates since XP? And nobody else does? Why is everything worse than Windows XP?!
I’m not sure what you mean by automatic updates, but ~every Linux distro has a ca-certificates package which gets updated all the time. (And even when it’s EOL, you can use it from newer versions since it doesn’t have dependencies)
Except for Android, I guess, which doesn’t get them updated until you get a whole new image from the manufacturer.
But as stated in the official LE post, you can (and probably should) add the root cert to your app-local trust storage (and probably even use cert pinning). That won’t help with browsers, though; I’d guess you’d just install Firefox for Android (or something like that), since they ship their own complete TLS stack (and certs). Because you can’t run any relevant TLS on Android 4.4, which you still want to support for many apps.
FYI: you can still order stuff from Amazon and use Google Search on Android 4.4. Mozilla’s own website works, too, even though they suggest yours shouldn’t.
If your own website, and your employer’s website, don’t work on Android 4.4 because TLSv1.0 iNsEcUrE — it sounds like you’ve been sold the snake oil!
4 years… really doesn’t seem like a long time. It’s a bit worrying to me that the group which controls whether people can access the web or not really think that “people whose devices haven’t been updated in 4 years can’t access the web” is acceptable.
This has been the path all along. It’s the same people who deprecate TLSv1.0 because “not secure”, and who earlier deprecated HTTP, also because “not secure”.
It’s especially hypocritical given how many perfectly capable hardware devices (with gigabytes of memory) are being deprecated by these policies. Note how Amazon.com, Google Search and Mozilla.org all still work themselves in old browsers, yet your blog with pictures of your cat is not allowed to be served over HTTP or even TLSv1.0, because “not secure”.
There’s only one way forward here: don’t participate in these planned-obsolescence experiments. If you own any personal websites, make sure they don’t support HTTPS. If they do have to support HTTPS, make sure TLSv1.0 is still enabled and HTTP doesn’t redirect to HTTPS. The best option is simply not supporting HTTPS at all, because otherwise anyone who clicks a link in an “unsupported” browser will simply get an error message, and likely won’t know that HTTP is still available.
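For what it’s worth, that setup can be sketched in a few lines, assuming nginx (the hostname, docroot and certificate paths here are hypothetical, and old TLS versions additionally require support in the linked OpenSSL build):

```nginx
# Plain-HTTP listener: no redirect to HTTPS.
server {
    listen 80;
    server_name example.org;            # hypothetical hostname
    root /var/www/example;              # hypothetical docroot
}

# Optional HTTPS listener with old protocol versions still enabled
# for clients such as Android 4.4.
server {
    listen 443 ssl;
    server_name example.org;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_certificate     /etc/ssl/example.pem;   # hypothetical paths
    ssl_certificate_key /etc/ssl/example.key;
    root /var/www/example;
}
```

The key points are the absence of a `return 301 https://…` rule on port 80 and the explicit `ssl_protocols` list.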
Note the way we got here is that the Snowden leaks effectively showed that if an adversary can control content that your browser receives, the adversary owns your device. The lesson the industry took from this is that encryption of every cat picture is needed to ensure adversaries can’t tamper with it. The lesson I took from it is that browsers today are too complex to be a robust sandbox. Somehow the industry view is that clients are so unfixable that the only pragmatic solution is to encrypt in transit, which neatly ignores that if an adversary controls any endpoint, the problem is still there. Encryption can only secure your client if you trust the server.
It also seems odd that the whole reason for a browser to exist is to implement a sandbox. If it can’t do that, why not download binary code and execute it from every site that you visit?
So yeah, my site is http-only, even though I know that makes it easier for adversaries to tamper with content and take over clients. But was adding encryption to my site really going to prevent that?
So yeah, my site is http-only, even though I know that makes it easier for adversaries to tamper with content and take over clients. But was adding encryption to my site really going to prevent that?
https://grugq.github.io/presentations/COMSEC%20beyond%20encryption.pdf might help you mentally model things. Regardless, encrypting HTTP helps both ends reduce the set of adversaries, even if the certificate is self-signed. SSL MitM is not something the kids are taught in school, and mitmproxy is quirky.
Not encrypting HTTP opens the door to the uglier tiers of ad-tech, and everyone is a target for those. You need to be somewhat more “interesting” before national interests will spend analyst or tailored-access budget on you or your site’s visitors.
This is a very wrong threat analysis, and suggesting the use of self-signed certificates is naive at best. What we were supposed to have is opportunistic encryption, but because of politics, the whole thing was shelved.
HTTPS and SSL also open up a whole extra can of worms beyond the compatibility issues galore: a huge extra attack surface. Without HTTPS, the only way to intercept the traffic is to control the connection between the server and the client. With HTTPS, it’s possible to leak traffic without that requirement (e.g., Heartbleed let any remote client read server memory). How’s that more secure for your cat blog, if anyone across the world can see what anyone else is reading on it?
The improvements in Firefox for Android don’t just stop here: they even go way beyond the surface as Firefox for Android is now based on GeckoView, Mozilla’s own mobile browser engine. What does that mean for users?
Wait, Firefox for Android wasn’t using Gecko all along?
This must be some sort of mistake, with someone thinking that anyone cares about their Chromium-based stint under the misleading name of Firefox Focus, right?
Firefox for Android was using Gecko all along, but previously (I think) like Firefox for Desktop the UI was all rendered in XUL/HTML, with the performance and battery-life consequences that implies. GeckoView is a new, different port of Gecko to Android that allows more modern features like process separation and hardware acceleration, and also works as an ordinary widget that can be used with the standard Android UI framework.
I still remember those times in the early noughties when MSIE6, MSIE5 and MSIE4 were always a concern when doing JavaScript tricks.
However, as I’m reading this in 2020, I cannot stop thinking of Google (incl. YouTube.com) doing the exact same things to all the browser vendors that Microsoft and MSN did to Opera and Mozilla back in the day.
Not to mention the inconvenient fact that Chromium pretty much has a monopoly nowadays; Vivaldi itself is based on Chromium, so they can’t quite go around badmouthing it, either. The bigger issue today is with User-Agent strings: neither Vivaldi nor Brave have their own, and even SeaMonkey is forced to pretend to be Firefox, else none of the major websites would work.
IE 5.5 on Mac was the worst. Not from a technical standpoint, but because it was a) the most widespread one on Macs and b) it simply rendered stuff differently than everything else.
Also this was before virtualization so a good chunk of my time was spent manually testing first on 2 browsers on Windows and then grabbing a Mac (OS 9) and starting all over.
So, uhh…. what now? Shut down the Internet until this is fixed? Disconnect your wifi router? Never log on to another web site again?
It doesn’t matter at all unless you trust that certificate, or whoever published it. It’s just a self-signed certificate that is valid for any domain. If you don’t trust it, then you don’t trust it, and it will be invalid for any use where you come across it.
Gotcha; I missed the critical detail that it’s self-signed. So to use this in an attack you’d have to trick someone into trusting the cert for some trivial site first.
Exactly. And then they would have to serve some content with that cert that the target would access. There’s essentially no practical way this could be used in an attack except for a man-in-the-middle attack, but you would still need to get the target to trust the certificate first.
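For context, a self-signed certificate with broad wildcard names takes only one openssl invocation. A hedged sketch (requires OpenSSL 1.1.1+ for `-addext`; the SAN list here is illustrative, not the exact one from the repo being discussed):

```shell
# Sketch: mint a self-signed cert whose SANs are broad wildcards.
# No browser will accept it unless the user manually trusts it first.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout key.pem -out cert.pem -days 365 \
  -subj "/CN=*" \
  -addext "subjectAltName=DNS:*,DNS:*.com,DNS:*.net,DNS:*.org"

# Inspect the resulting SANs:
openssl x509 -in cert.pem -noout -ext subjectAltName
```

This is exactly why trust has to come from somewhere else: the crypto is valid, but nothing here proves who generated it.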
Trusting the cert is easy with technical people. I link you guys to my site with a self-signed cert like this; you accept it because you want to see my tech content.
This is a huge issue.
Here’s what I think @indirection is getting at:
However, I would hope SSL overrides are hostname-specific, to prevent this type of attack…
I missed the critical detail that it’s self-signed
You didn’t quite miss it, it’s been misleadingly described by the submitter — they never explicitly mention that this is merely a self-signed certificate, neither in the title here, nor in the GitHub repository. To the contrary, “tested working in Chrome, Firefox” is a false statement, because this self-signed certificate won’t work in either (because, self-signed, duh).
I never say that it’s signed by a CA either 😅 I wasn’t trying to mislead folks, but some seem to have interpreted “SSL certificate” as meaning “CA-issued SSL certificate”. It does work in Chrome and Firefox insofar as it is correctly matched against domain names and is valid for all of them.
This isn’t signed by a trusted CA, so this specific cert can’t intercept all your traffic. However, all it takes is one bad CA to issue a cert like this and… yeah, shut down the Internet.
Or any CA operating under a hostile government, or any CA that’s been hacked. See DigiNotar for just one example of a CA that has issued malicious wildcard certs.
And as you can see, it was removed from all browsers’ trust stores and was soon declared bankrupt (hence, a death wish). And that wasn’t even deliberate; I can’t see a CA willfully destroying its own business. Yes, it’s a huge problem if this happens and isn’t announced to the public, as in the case in the article.
Normally, certificates are doing three separate things here: encrypting the traffic so third parties can’t read it, guaranteeing its integrity so third parties can’t tamper with it in transit, and authenticating that you’re actually communicating with the entity you think you are.
Most people who are against HTTPS ignore the second point by banging on about how nobody’s reading your webpages and nobody cares, when ISPs have, historically, been quite happy to inject ads into webpages, which HTTPS prevents. This strikes at the third point… except that it doesn’t. It’s self-signed, which defeats the whole mechanism by which you use a certificate to ensure you’re communicating with the entity you think you are. The weird wildcard stuff doesn’t make it any less secure on that front, since anyone can make their own self-signed certificate without wildcards and it would be just as insecure.
If you could get a CA to sign this, it would be dangerous indeed, and CAs have signed bad certificates before. Again, a certificate can be bad and get signed by an incompetent or corrupt CA without any wildcards.
So this is a neat trick. I’m not sure it demonstrates any weakness which didn’t exist already.
I think a lot of people are outraged about the privacy implications, but my personal outrage would be that every vendor doing this exact thing means that my browsing slows to a crawl on all of these integration-heavy websites.
Why is no one thinking about the environment? How many processing cycles are wasted on all this tracking that hardly adds any value to the user experience? What is all this tracking even for? I don’t think anyone can really explain it with a straight face. They’re doing the tracking merely because it’s technologically possible, and because it might be useful for something in the remote future.
Sounds like a great opportunity for someone else to fill in the soon-to-be market gap.
EDIT: Apparently much of the base data is public.
There’s really nothing unique about all these weather apps. It’s a perfect lifestyle project for anyone looking for one: no network effect required, no user data to moderate, very little front-end work; mostly just backend optimisation, plus a good UI, to make a successful pivot.
I wonder if I should monetize https://github.com/dmbaturin/pytaf/ as “most precise weather forecast you will ever find”. ;)
I think a problem is that this is always being shown in the media from the perspective of a surveillance state, but what about legitimate use to find actual criminals?
This has been a problem since long ago. If you’ve ever had anything easily trackable stolen in California, then unless tracking is something you can do on your own, you’re basically out of luck. AT&T won’t give you or the police the triangulated location without a warrant, and the police aren’t going to get a warrant because it’s out of budget. I’ve heard it’s better outside of California, though, which perhaps explains why they do still try to solve crimes back in Florida.
I mean, 1000:1 is well past the point where the collateral damage is acceptable to most of society.
Calling for 10:1 (in favor of protecting innocents) was famously controversial long, long ago.
download and self-host whatever font you want to use. Here’s a hassle-free way to self-host Google fonts.
This is so ridiculous! Please don’t host your own fonts on your own website!
Can anyone explain to me why a website needs its own fonts, in place of the system ones, in the first place? Does anyone with a custom /etc/hosts NOT block all of these useless fonts?
::2 fonts.googleapis.com fonts.gstatic.com
::2 use.fontawesome.com
::2 hello.myfonts.net fast.fonts.net
The system fonts might not have all the required characters for the text/language in question. The website author might want to have the website have a certain look.
There are many valid reasons. But feel free to tell your browser not to load them (and get a possibly degraded experience), it shouldn’t make any difference whether they are self-hosted or not.
The system fonts might not have all the required characters for the text/language in question.
I think this is a valid concern. However, my feeling is that remote font loading is mostly used for aesthetic reasons.
I agree with you about not using third-party fonts at all; I don’t use them myself, and I block them with uBlock Origin when browsing. The worst are those sites that use third-party fonts to display icons in their menus and the like, since blocking, say, Google Fonts then breaks their site! The idea with that section was to mention a slightly better alternative for those who insist on using Google Fonts (self-hosting them does speed things up, and perhaps has a privacy benefit too). My main recommendation is to use web-safe fonts, and that should be the way to go for all sites.
Can anyone explain to me why a website needs their own fonts, in place of the system ones, in the first place?
IMO most default system fonts are harder to read than something like e.g. Merriweather.
Is there some trustworthy entity to provide DoH until it is more commonplace at ISPs and elsewhere?
With trustworthy I mean preferably a non-profit, privacy focused, with know-how, fund, resources, etc. I am thinking about maybe Mozilla themselves, the Chaos Computer Club, EFF or something like Let’s Encrypt where institutions come together. In a best case scenario it also wouldn’t be yet another case of centralization in the US.
This is a list of public providers: https://github.com/curl/curl/wiki/DNS-over-HTTPS
Is there some trustworthy entity to provide DoH until it is more commonplace at ISPs and elsewhere?
I really like your question, because it shows the profound issue with the whole idea of DoH.
If you trust your ISP — and there’s no good reason you should trust the centralised too-big-to-fail NSA-dream Cloudflare more than you’d trust your local ISP subject to the oversight of your local community — then you basically don’t gain much from DoH, because the likelihood that someone can tap into your traffic between your secure WPA2 WiFi at home and your ISP is rather small.
The alternative, of course, is using a national provider, which will then be capable of tracking your activities across all of your upstreams at home, work and coffee shops, and quietly delivering all said content to the intelligence agencies, through the secret court orders and such.
I think folks get too tied up with the idea of encrypting everything at all costs, and ignore the long-term opportunity costs associated with all these actions:
HTTPS-Everywhere eliminates a whole class of Internet firewalls and malware scanners capable of filtering out ads and malware without having to enlist the help of your browser (and of verifying that your browser isn’t doing any funky stuff). With ubiquitous HTTPS, you can no longer easily see what sort of traffic is going out of your network, which page is making a request to which other page (by examining the Referer headers in tcpdump), or which headers and what metadata are being sent back to the mothership.
DoH likewise acts in the same way, leaving you with less choice to filter out and examine your own traffic, especially if DoH is implemented not in the operating system or home router but at the application layer in your browser. Does this mean that with a new Firefox, I’ll be back to seeing all those useless GDPR notices from third-party megabyte-sized JavaScript that is blocked in my /etc/hosts, as well as all the experience trackers and megabyte-sized A/B testing scripts from Optimizely that have likewise been blocked in my /etc/hosts? What’s so great about that? Why is eliminating my choice to block these things in /etc/hosts a good thing?
Keep in mind that even if you’re using both HTTPS-Everywhere and DoH, where all your traffic is encrypted, it’s still possible to figure out that you’ve visited Wikipedia (due to IP address correlations that are impossible to hide without centralising the web behind someone like Cloudflare (gosh, I wonder why they’re pushing for all these things!)) and viewed a page named Censorship in the United States (due to the unique sizing of the content, as well as timing-based attacks, where the timing-based attacks are likewise near-impossible to fully mitigate, if the continued emergence of the various Meltdown upon Meltdown bugs and research is to teach us anything).
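To make the size-correlation point concrete, here’s a toy sketch. The page names and byte counts are made-up illustrative numbers, not real measurements; the idea is that TLS hides content but not (much of) its length, so an observer can match observed transfer sizes against a precomputed catalogue:

```shell
# Toy size-based page fingerprinting: compare an observed encrypted
# transfer size against a catalogue of known page sizes.
guess_page() {
  observed=$1
  for entry in "Main_Page:120345" "Censorship_in_the_United_States:489812" "Cat:233107"; do
    page=${entry%%:*}
    size=${entry##*:}
    diff=$((size - observed))
    [ "$diff" -lt 0 ] && diff=$((0 - diff))
    # Accept matches within 1 KiB to allow for header/padding jitter.
    if [ "$diff" -le 1024 ]; then echo "$page"; fi
  done
  return 0
}

guess_page 489500   # prints: Censorship_in_the_United_States
```

Real attacks are fancier (they also use resource counts and timing), but even this crude matching shows why encrypting the payload alone doesn’t hide which page you read.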
no good reason you should trust […] more than you’d trust your local ISP subject to the oversight of your local community
How about when “local community” means “relatively authoritarian government”. (Really in any situation the word “community” feels very dishonest here lol)
I trust any U.S. company way more, because the U.S. does not have power over me.
HTTPS-Everywhere eliminates a whole class of Internet firewalls and malware scanners
Yeah, and prevents ISPs from injecting their damn ads and prevents e.g. your employer from reading all the content you see in plaintext.
Any filtering should happen in the browser because of the end-to-end principle. Any kind of tampering in between the servers and the browser is fundamentally broken and stupid.
Should be folded into https://lobste.rs/s/towcaw/firefox_turns_encrypted_dns_on_by_default @alynpost.
I’ve merged story towcaw in to story h2t3qa, the opposite direction of what you requested gerikson. cnst observed story h2t3qa is the primary source with story towcaw responding to it. The stories were submitted so close in time (1-2 hours) to each other that I’m persuaded by the primary source claim.
The opposite — the other article has a title that’s very one-sided and misleading, plus, this one is the original source.
A very bad day for privacy and internet freedom everywhere, great victory for the NSA. All your DNS traffic will now go to a single monopoly under US jurisdiction — Cloudflare.
Here’s a useful comment showing the way to block this malignant traffic from leaving your network:
Here’s the prior discussions for the issues with DoH in general and with Cloudflare in particular:
It’s especially ironic that Mozilla is turning it on first in the US of A — literally a country comprised of a collection of independent states, now all tracked under a single monopoly DoH provider. The only hope is that someone in the government will eventually wake up and see the issue where a single entity controls more consumer and business traffic than AT&T could ever dream of; in an ideal world, Cloudflare should be the prime target of antitrust legislation in the next decade.
Mozilla advertises “privacy” in literally the second sentence of the Firefox download page. And yet they continue to depend on proprietary Google tracking bits in order to generate a UUID (lol), and now this. Mozilla needs a major change in direction if they’re going to actually provide a product that respects user privacy.
This feels like the typical response from the geek world where if Firefox only gets 99.99% of things right instead of exactly 100%, they will be portrayed and talked about as if they actually managed 0%.
The threats involved in using your ISP’s DNS are pretty clear, and pretty clearly are attacks on your privacy. DoH is a significant upgrade over that, and the provider they chose to go with has taken steps to try to make it verifiable that they will not present the same kind of threat as your ISP.
But because it only gets part of the way to where certain people would like to be, we get threads like this one, where the perfect is not just the enemy of the good, but is actively seeking to hinder and impair the good by any means available.
The threats involved in using your ISP’s DNS are pretty clear, and pretty clearly are attacks on your privacy.
The threats involving the world’s largest ad company and the threats involving a leading collector of internet traffic are pretty clear, and clearly are attacks on your privacy. So by your standards, ‘good’ is choosing one bad actor over another bad actor, when ‘good’ should really be avoiding all bad actors. ‘Perfect’ would be something like seamlessly integrating Tor, etc. (which no one here is asking for).
I don’t much like Cloudflare, but Mozilla seems to have used their leverage to enforce terms which are far more favorable to your privacy than anything a widely-available consumer ISP is going to offer. So, again, this seems to be a “they only got to 99.99% of what I want, not 100%”, and from there is being spun as complete failure.
If you have actual demonstrable proof that Cloudflare is not abiding by those terms, feel free to bring it up.
A company made some mistake, pissed off user assumes malice and posts their rant to ‘hacker’ ‘news’, and the company ‘makes it right’. Why does this belong on lobste.rs?
A company made some mistake, pissed off user assumes malice and posts their rant to ‘hacker’ ‘news’, and the company ‘makes it right’. Why does this belong on lobste.rs?
Did you not read past the erroneous tone disclaimer, or are you just trolling here? Where did Cloudflare make it right? What’s this whole “‘hacker’ ‘news’” in individual quotation marks that you’re referring to? Where did the user assume “malice”? What “mistake” did the company make, when it’s clearly written by the OP that all Cloudflare did was follow its own known policy of not notifying about the nuking of the site of a paying customer, where a simple Google Search query reveals it’s an issue known to the public at large since like two years ago?
Most importantly, why do you assume malice on part of the victim in this story, and believe Cloudflare the perpetrator, and why do you scorn the victim for doing exactly what Cloudflare told them to do — post the question in public forums, because they’re no longer a priority customer after having had the product they bought removed from their profile?
The OP has had their whole website and email nuked, potentially lost a lucrative contract with a client (10k USD+?), at the very least potentially lost several days worth of billable time (at kilobuck per day?), and here the tone-police are telling him he’s too quick to assume bad faith (???) on Cloudflare’s part when Cloudflare’s CTO chimed in for damage control, empty-saying they’re “investigating”? (As their CTO always does on HN, BTW.)
Note that Cloudflare’s CTO still never disputed nor apologised for Cloudflare’s blackbox policy of nuking your whole DNS without any notification (email or otherwise). This is probably the biggest complaint by the user, that Cloudflare didn’t even bother to tell him about this intentional takedown on Cloudflare’s part. It’s been almost a day now, with no updates; will Cloudflare be making it right by reimbursing the OP for the lost opportunities? Or is the victim supposed to issue a full official apology for being a victim of this awesome registrar with such a great CTO that’s “investigating” all issues that hit the media?
How do you mean?
Obviously the use of your email for crimes is out of the picture as a subpoena solves that. And global passive adversaries are always going to be watching.
But for the average person my-pseudonymous-address@emailprovider.com should be sufficient for communicating to other people who lack subpoena or NSA powers, no?
An awful lot of people have subpoena powers.
For instance, if you have ever used your personal email address for any communications with your work colleagues, a case involving your workplace could subpoena your emails. You might even be so lucky as to have them semi-publicly accessible afterwards.
So at this point we assume that there are more nasty bugs in OpenSMTPD and that people wearing various colours of hat are looking for them.
I mean, I assume that about everything. From the machines that make my shoes to the laptop I’m typing on now. ;-P
Vain attempts at comedy aside, I really do think it’s safe to assume there are many vulnerabilities in all complex systems (and I would classify MTAs as complex). And if there truly is no vulnerability in <insert doohickey here>, there’s likely a vulnerability in <this other doohickey> deployed on the same server.
I’m a pessimistic realist who realizes we’re all human and prone to mistakes.
Well this is one that’s getting some attention right now :)
What’s most disappointing is that OpenSMTPD doesn’t seem to do much in the way of privilege separation. There’s no reason for the MTA to be running as root or having world-writable directories or any of that mess, unless you’re trying to preserve the 90s UNIX desktop experience of your mbox in /var/spool/mail and procmail “cleverness”. I’m sure there’s an audience for that, but why is that in OpenBSD’s default MTA?
Are they running fingerd and ytalk too? If we’re going for the retro experience over security let’s just use telnet! :)
It is privsep’d to some degree:
$ ps axu | grep smtpd
2083 root 0:00 /usr/sbin/smtpd -F
2085 smtpd 0:00 smtpd: klondike
2086 smtpd 0:00 smtpd: control
2087 smtpd 0:15 smtpd: lookup
2088 smtpd 0:03 smtpd: pony expres
2089 smtpq 0:00 smtpd: queue
2090 smtpd 0:00 smtpd: scheduler
I’m not familiar enough with OpenSMTPD to tell you why this specific code isn’t in one of the privsep’d parts.
Does anyone actually use it outside of OpenBSD? I’d imagine no one really does, so not that many people would be looking for these; OTOH, finding a bug in OpenBSD software always adds extra points to one’s rep, doesn’t it? (I guess it might not anymore, if these reports are to continue.)
On Linux, yes. There was a thread on a forum recently, and many reported moving, or having already moved, to OpenSMTPD from exim/postfix, as they found it easy to work with, and the security responses are impressively quick.
I guess quite a few security holes will be uncovered, now that OpenBSD and its sibling projects are getting more attention from security people (probably because they are an easy win, not utilizing as many mitigations/defense-in-depth methods as other operating systems, and having been neglected due to their relatively small user base).
I’m also using it on a few machines, though only for mail forwarding (Linux and OpenBSD), but I plan to set up a complete mail infra based on it in the near future, to evaluate a complex setup.
It’s available on pretty much all Linux distros as a package, so I’d say yes. I’ve been using it for years myself on FreeBSD and Linux.
I’m just a couple weeks away from deploying an OpenSMTPD installation for HardenedBSD’s build infrastructure. It’ll be an internal-only deployment, though, just to pass emails between systems to a centralized internal mbox.
I did use it for a while, but not on my main mail server. It was nice to work with, but I didn’t look at the code, and I’m not really able to audit C code anyway.
Securing an MTA must be a cursed job.
Back in the old days we had near weekly RCEs in sendmail and exim and these days it’s OpenSMTPD with strong ties to the f’ing OpenBSD project. That’s the one project I expect an RCE the least from; much less two in as many months.
Email is hard.
It’s actually 3 — this one has two separate CVEs in a single release, including a full local escalation to root on Fedora, with Fedora-specific bugs adding an extra twist (CVE-2020-8793).
The other bug here (CVE-2020-8794) is a remote one in the default install; although the local user still has to initiate an action to trigger an outgoing connection to an external mail server of the attacker, so, I guess OpenBSD might not count it towards the remote-default count of just two bugs since years ago.
I guess OpenBSD might not count it towards the remote-default count of just two bugs since years ago.
I feel like that would be disingenuous. I realize it’s not enabled by default in a way that’s exploitable, but in the default install there’s literally nothing running that’s even listening (you can enable OpenSSH in a default install, I suppose); this is, of course, the correct way to configure things by default. However, the statement then degenerates to “no remotely exploitable bugs in our TCP/IP stack and OpenSSH”… which is awesome, but…
(Also, it’s easy to criticize: I’ve never written enterprise grade software used by millions.)
Can you explain more about why you think that’s disingenuous? OpenBSD making this claim doesn’t seem different to me than folks saying that this new bug is remotely exploitable. It’s very specific and if something doesn’t meet the specific criteria then it doesn’t apply. Does that make sense?
It is my opinion that the statement should be removed – not because it’s not accurate but because I just think it’s tacky.
IMHO it’s disingenuous because it implies that there are only two remote holes in a heck of a long time on a working server. It’s like saying “this car has a 100% safety record in its default state,” that is, turned off.
(I’m reminded of Microsoft bragging about Windows NT’s C2 security rating, while neglecting to mention that it got that rating only on a system that didn’t have a network card installed and its floppy drive glued shut.)
I’m not sure if they include OpenSSH in their “default state” (I think it is enabled by default), but other than OpenSSH there’s nothing else running that’s remotely reachable. Most people want to use OpenBSD for things other than just an OpenSSH server (databases, mail servers, web servers, etc.), and they might get an inflated sense of security from statements like that.
(Note that OpenBSD is remarkably secure and their httpd and other projects are excellent and more secure than most alternatives, but that’s not quite the point. Again, it’s easy for me to criticize, sitting here having not written software that has been used by millions.)
I appreciate you taking the time to elaborate. I think the claim is tacky as it seems to be more provocative than anything else – whether true or not. I don’t think it’s needed because I think what OpenBSD stands for speaks for itself. I think I understand why the claim was used in the past but this conversation about it comes up every time there’s a bug – whether remote or not. The whole thing is played out.
AFAIK OpenSMTPD is enabled by default, but does local mail delivery only with the default config. This keeps the claim about “only 2 remote holes” standing, though I agree with your reading of the bullshit-o-meter on this slogan. But hey, company slogans are usually even more bullshit-ridden, so I don’t care.
You’re saying a local user has to do something to make it remote? Can you explain how that makes it remote?
One of the exploitation paths is parsing responses from remote SMTP servers, so you need to get OpenSMTPD to connect out to an attacker-controlled server (e.g., by sending email to it).
It looks like on some older versions there’s a remote root without local user action needed…
I reckon I’ll go back and read the details again. However, if exploiting something requires that a local user do a very specific thing under very specific circumstances (attacker-controlled server, etc.), that doesn’t jibe with my definition of remote.
Step zero is don’t run as root and don’t have world-writable directories.
Sorry, was I yelling?
Mail is hard in that the daemon needs to listen on privileged ports, and the delivery agent needs to write into directories readable and writable only by a specific user.
Both of these parts require root rights.
So your step zero is impossible to accomplish for an MTA. You can split it into multiple processes and run only some of them privileged, but you cannot get away with running none of them as root if you want to work within the framework of traditional Unix mail.
Using port redirection and virtual users exposing just IMAP you can work around those issues, but then you’re leaving the traditional Unix setup behind and adding more moving parts to the mix (like a separate IMAP daemon), which might or might not bring additional security concerns.
At least on Linux there’s a capability (CAP_NET_BIND_SERVICE) for binding to privileged ports, and holding that capability is not equivalent to having root.
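As a sketch of how that capability gets granted in practice (the daemon path below is hypothetical; substitute your actual MTA binary):

```shell
# Grant only the low-port-binding capability to a binary, not full root.
# /usr/local/sbin/mydaemon is a hypothetical path used for illustration.
sudo setcap 'cap_net_bind_service=+ep' /usr/local/sbin/mydaemon

# Inspect the file capabilities that were set
getcap /usr/local/sbin/mydaemon
```

Under systemd, the per-service equivalent is `AmbientCapabilities=CAP_NET_BIND_SERVICE` in the unit file, which avoids touching the binary at all.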
yes. or you redirect the port. but that still leaves mail delivery.
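For the port-redirection route on Linux, a minimal sketch (the unprivileged port 2525 is an arbitrary choice):

```shell
# Redirect inbound TCP port 25 to an unprivileged port (2525 here), so the
# daemon can bind 2525 as a normal user and never needs root for the socket.
sudo iptables -t nat -A PREROUTING -p tcp --dport 25 -j REDIRECT --to-ports 2525
```

This only solves the listening-socket half; as noted above, delivering mail into user-owned mailboxes still needs some privileged component.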
As I said in my original comment: email is hard, and that’s ok. I take issue with people reducing these vulnerabilities (or any issue they don’t fully understand) to “just do X - it’s so easy” (which is a strong pointer that they don’t understand the issue).
Which is why I sit on my rant about still using C for (relatively) new projects when safer languages exist, though - oh boy is it tempting to drop a quick “buffer overflows are entirely preventable in as-performant but more modern languages like Rust; why did you have to write OpenSMTPD in C?”, but I’m sure there were good reasons - especially for people as experienced and security-focused as the OpenBSD folks.
It’s hard if you impose the constraint that you need to support the classical UNIX model of email that was prevalent from the late 70s to the mid 90s. I was once very attached to this model, but it’s based on UNIX file-system permissions that are hard to reason about and implement safely and successfully. The OpenSMTPD developers didn’t make these mistakes because they’re stupid; it’s really, really hard. But it’s an unfortunate choice for a security-focused system to implement a hard model for email rather than making POP/IMAP work well, or some other approach to getting email under the control of the recipient without requiring privileges.
Not sure any of these are real requirements; they seem more like self-imposed traditional limitations.
Lower ports being bindable only by root could easily be removed; given that Linux has better security mechanisms to restrict low-port binding, like SELinux, I’m not even sure why the kernel still imposes this moronic concept on people. Mail delivery (maildir, mbox, whatever zany construct) can also be done by giving limited read/write access to the specific user and the MDA. Hell, MAIL on my system just points to /var/spool/mail, which is owned by root anyhow.
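On Linux the root-only low-port restriction is in fact removable these days: since kernel 4.11 there’s a sysctl for it. A sketch (note that lowering the threshold system-wide has its own security implications):

```shell
# The default privileged-port threshold is 1024; setting it to 0 makes
# every port bindable without privileges or capabilities.
sudo sysctl net.ipv4.ip_unprivileged_port_start=0
```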
FWIW, I just noticed the following amusing snippet on openbsd.org:
http://www.openbsd.org/security.html#reporting
If you wish to PGP encode it (but please only do so if privacy is very urgent, since it is inconvenient) use this pgp key.
I cannot say that I disagree with the statement or the sentiment. We’re supposed to be PGP aware in NetBSD, but from the looks of it, most folk do seem to find it as pointless as the author of the above statement.
For context, Max wrote NVMM, so, presumably does know what he’s talking about.
If any criminal interests procure this domain name, the US government will simply confiscate it, like they’ve done with countless other .com domain names over the years.
This is exactly how I feel.
Way too much software is being developed in Resume-Driven Development mode these days.
The Touch Bar is a prime example of this. It solves a nonexistent problem in an imperfect way. It’s a mere copy of the Optimus keyboard, except that in the Optimus each individual key was still a physical key with a small display.
The same goes for the vast majority of Web 2.0 front-end frameworks, which degrade the user experience but add to the resumes of the people who implement all of that nonsense. The new versions of Slashdot and Reddit are prime examples - slower, less usable, and less accessible, but, hey, all the newest frameworks and buzzwords!