1. 7

    So, uhh…. what now? Shut down the Internet until this is fixed? Disconnect your wifi router? Never log on to another web site again?

    1. 30

      It doesn’t matter at all unless you trust that certificate, or whoever published it. It’s just a self-signed certificate that happens to be valid for any domain. If you don’t trust it, then you don’t trust it, and it will be invalid wherever you come across it.

      1. 5

        Gotcha; I missed the critical detail that it’s self-signed. So to use this in an attack you’d have to trick someone into trusting the cert for some trivial site first.

        1. 3

          Exactly. And then they would have to serve some content with that cert that the target would access. There’s essentially no practical way this could be used in an attack except a man-in-the-middle attack, and even then you would still need to get the target to trust the certificate first.

          1. 3

            Trusting the cert is easy with technical people. I link you guys to my site with a self-signed cert like this. You accept it because you want to see my tech content.

            This is a huge issue.

            1. 4

              How is this different from using any other self-signed certificate?

              1. 4

                Here’s what I think @indirection is getting at:

                1. Your connection to the net is MITMed.
                2. You visit sometechgeek.com, which is serving this wildcard certificate.
                3. You think “weird, crazy tech bloggers can never take proper care of their servers” and click through the SSL warning.
                4. Your browser now trusts the wildcard cert. Next, you visit yourbank.com.
                5. Since the wildcard cert is trusted by your browser, the holder of the key for that cert can intercept your communication with yourbank.com.

                However, I would hope SSL overrides are hostname-specific to prevent this type of attack…
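
                (If you’re curious why this cert matches everything, dump its names. A sketch with the openssl CLI; cert.pem stands in for the certificate from the repo:)

                  # The trick is a pile of SAN entries like *.com, *.net, *.co.uk, ...
                  # which together cover effectively every hostname:
                  openssl x509 -in cert.pem -noout -text | grep -A2 'Subject Alternative Name'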

                1. 2

                  Yep that’s exactly it! Thank you.

          2. 2

            I missed the critical detail that it’s self-signed

            You didn’t quite miss it; it’s been misleadingly described by the submitter — they never explicitly mention that this is merely a self-signed certificate, neither in the title here nor in the GitHub repository. To the contrary, “tested working in Chrome, Firefox” is a false statement, because this self-signed certificate won’t work in either (because, self-signed, duh).

            1. 2

              I never say that it’s signed by a CA either 😅 I wasn’t trying to mislead folks, but some seem to have interpreted “SSL certificate” as meaning “CA-issued SSL certificate”. It does work in Chrome and Firefox insofar as it is correctly matched against domain names and is valid for all of them.
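
              (You can check that matching yourself, without any browser involved. A sketch, assuming OpenSSL 1.0.2+ and the cert saved as cert.pem:)

                # -checkhost performs the same hostname matching a TLS client would:
                openssl x509 -in cert.pem -noout -checkhost yourbank.com
                openssl x509 -in cert.pem -noout -checkhost any.other.example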

        2. 11

          This isn’t signed by a trusted CA, so this specific cert can’t be used to intercept all your traffic. However, all it takes is one bad CA to issue a cert like this and… yeah, shut down the Internet.

          1. 4

            For any CA that has a death wish, sure!

            1. 8

              Or any CA operating under a hostile government, or any CA that’s been hacked. See DigiNotar for just one example of a CA that has issued malicious wildcard certs.

              1. 3

                And as you can see, it was removed from all browsers’ trust stores and soon declared bankrupt (hence, death wish). And that wasn’t even deliberate. I can’t see a CA willfully destroying their own business. Yes, it’s a huge problem if this happens and isn’t announced to the public, as is the case in the article.

          2. 2

            Normally, certificates do three separate things here:

            1. Ensuring nobody can read your communications.
            2. Ensuring nobody can modify your communications.
            3. Ensuring you’re communicating with the entity which validly owns the domain.

            Most people who are against HTTPS ignore the second point by banging on about how nobody’s reading your webpages and nobody cares, when ISPs have, historically, been quite happy to inject ads into webpages, which HTTPS prevents. This strikes at the third point… except that it doesn’t. It’s self-signed, which defeats the whole mechanism by which a certificate assures you that you’re communicating with the entity you think you are. The weird wildcard stuff doesn’t make it any less secure on that front, since anyone can make their own self-signed certificate without wildcards and it would be just as insecure.
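
            To make that concrete: minting a self-signed cert for any name you like is a one-liner (a sketch; the filenames and CN are illustrative):

              # Self-signing needs nobody's permission; the result is exactly as
              # untrusted as the wildcard cert under discussion:
              openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
                -keyout key.pem -out cert.pem -subj '/CN=yourbank.com'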

            If you could get a CA to sign this, it would be dangerous indeed, but CAs have signed bad certificates before. Again, a certificate can be bad and can get signed by an incompetent or corrupt CA without any wildcards.

            So this is a neat trick. I’m not sure it demonstrates any weakness which didn’t exist already.

          1. 4

            I think a lot of people are outraged about the privacy implications, but my personal outrage would be that every vendor doing this exact thing means that my browsing slows to a crawl on all of these integration-heavy websites.

            Why is no one thinking about the environment? How many processing cycles are wasted on all this tracking that hardly adds any value to the user experience? What is all this tracking even for? I don’t think anyone can really explain it with a straight face. They’re doing the tracking merely because it’s technologically possible, and might be useful for something in the remote future.

            1. 5

              Sounds like a great opportunity for someone else to fill in the soon-to-be market gap.

              EDIT: Apparently much of the base data is public.

              1. 9

                There’s really nothing unique about all these weather apps. It’s a perfect lifestyle project for anyone looking for one. No network effect required, no user data to moderate, very little front-end work; mostly just backend optimisation to make a successful pivot, plus a good UI.

                1. 3

                  I wonder if I should monetize https://github.com/dmbaturin/pytaf/ as “most precise weather forecast you will ever find”. ;)

              1. 2

                I think a problem is that this is always being shown in the media from the perspective of a surveillance state, but what about legitimate use to find actual criminals?

                This has already been a problem for a long time. If you’ve ever had anything easily trackable stolen in California, you’re basically out of luck unless the tracking is something you can do on your own. AT&T won’t give you or the police the triangulated location without a warrant, and the police aren’t going to get a warrant because it’s out of budget; I’ve heard it’s better outside of California, though, which perhaps explains why they still do try to solve crimes back in Florida.

                1. 1

                  Who cares if they solve a thousand crimes, if they ever wrongly convict an innocent person?

                  1. 2

                    I mean, 1000:1 is well past the point where the collateral damage is acceptable to most of society.

                    Calling for 10:1 (in favor of protecting innocents) was famously controversial long, long ago.

                1. 1

                  download and self-host whatever font you want to use. Here’s a hassle-free way to self-host Google fonts.

                  This is so ridiculous! Please don’t host your own fonts on your own website!

                  Can anyone explain to me why a website needs their own fonts, in place of the system ones, in the first place? Does anyone with a custom /etc/hosts NOT block all of these useless fonts?

                  ::2	fonts.googleapis.com	fonts.gstatic.com
                  ::2	use.fontawesome.com
                  ::2	hello.myfonts.net	fast.fonts.net
                  
                  1. 5

                    The system fonts might not have all the required characters for the text/language in question. The website author might want the website to have a certain look.

                    There are many valid reasons. But feel free to tell your browser not to load them (and get a possibly degraded experience); it shouldn’t make any difference whether they are self-hosted or not.

                    1. 2

                      The system fonts might not have all the required characters for the text/language in question.

                      I think this is a valid concern. However, my feeling is that remote font loading is mostly used for aesthetic reasons.

                    2. 3

                      I agree with you about not using third-party fonts at all; I don’t use them myself, and I block them with uBlock Origin when browsing. The worst are those sites that use third-party fonts to display icons in their menus etc., since blocking, say, Google Fonts in those cases breaks their site! The idea with that section was to mention a slightly better alternative for those who insist on using Google Fonts (self-hosting them does speed things up, and perhaps has a privacy benefit too). My main recommendation is to use web-safe fonts, and that should be the way to go for all sites.
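
                      For those who do insist, self-hosting boils down to fetching the CSS and font files once and serving them yourself. A rough sketch (the URL format and the font choice are just examples, and may change on Google’s side):

                        # Fetch the CSS Google would serve, then pull down the referenced
                        # font files; afterwards, rewrite the url(...) entries to local paths:
                        curl -s 'https://fonts.googleapis.com/css?family=Merriweather' -o merriweather.css
                        grep -o 'https://fonts.gstatic.com/[^)]*' merriweather.css | xargs -n1 curl -sO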

                      1. 1

                        Can anyone explain to me why a website needs their own fonts, in place of the system ones, in the first place?

                        IMO most default system fonts are harder to read than something like e.g. Merriweather.

                      1. 3

                        Is there some trustworthy entity to provide DoH until it is more commonplace at ISPs and others?

                        By trustworthy I mean preferably a non-profit, privacy-focused, with know-how, funds, resources, etc. I am thinking about maybe Mozilla themselves, the Chaos Computer Club, the EFF, or something like Let’s Encrypt, where institutions come together. In a best-case scenario it also wouldn’t be yet another case of centralization in the US.

                        This is a list of public providers: https://github.com/curl/curl/wiki/DNS-over-HTTPS
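
                        If you want to try one of these before committing to it, curl itself can resolve via DoH (7.62+), and most providers also answer ad-hoc queries. A sketch using Cloudflare’s endpoints:

                          # Resolve names over DoH instead of the system resolver:
                          curl --doh-url https://cloudflare-dns.com/dns-query https://example.com/
                          # Ad-hoc lookup against the JSON API:
                          curl -s -H 'accept: application/dns-json' \
                            'https://cloudflare-dns.com/dns-query?name=example.com&type=AAAA'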

                        1. 4

                          Is there some trustworthy entity to provide DoH until it is more commonplace at ISPs and others?

                          I really like your question, because it shows the profound issue with the whole idea of DoH.

                          If you trust your ISP — and there’s no good reason you should trust the centralised too-big-to-fail NSA-dream Cloudflare more than you’d trust your local ISP subject to the oversight of your local community — then you basically don’t gain much from DoH, because the likelihood that someone can tap into your traffic between your secure WPA2 WiFi at home and your ISP is rather small.

                          The alternative, of course, is using a national provider, which will then be capable of tracking your activities across all of your upstreams at home, work, and coffee shops, and quietly delivering all said content to the intelligence agencies through secret court orders and such.


                          I think folks get too tied up with the idea of encrypting everything at all costs, and ignore the long-term opportunity costs associated with all these actions:

                          • HTTPS-Everywhere eliminates a whole class of Internet firewalls and malware scanners that could filter out ads and malware without having to enlist the help of your browser (and without having to trust that your browser isn’t doing any funky stuff). With ubiquitous HTTPS you can no longer easily see what sort of traffic is leaving your network, which page is making a request to which other page (say, by examining the Referer headers in tcpdump), or which headers and what metadata are being sent back to the mothership, at least not without enlisting the support of your browser.

                          • DoH acts in the same way, leaving you with less choice to filter and examine your own traffic, especially if DoH is implemented not in the operating system or home router but at the application layer, in your browser. Does this mean that with a new Firefox I’ll be back to seeing all those useless GDPR notices from third-party megabyte-sized JavaScript, as well as all the experience trackers and megabyte-sized A/B testing scripts from Optimizely, all of which are currently blocked in my /etc/hosts? What’s so great about that? Why is eliminating my choice to block these things in /etc/hosts a good thing?

                          Keep in mind that even if you’re using both HTTPS-Everywhere and DoH, with all your traffic encrypted, it’s still possible to figure out that you’ve visited Wikipedia (due to IP address correlations that are impossible to hide without centralising the web behind someone like Cloudflare (gosh, I wonder why they’re pushing for all these things!)) and viewed a page named Censorship in the United States (due to the unique sizing of the content, as well as timing-based attacks, which are likewise near-impossible to fully mitigate, if the continued emergence of Meltdown upon Meltdown bugs and research teaches us anything).

                          1. 1

                            no good reason you should trust […] more than you’d trust your local ISP subject to the oversight of your local community

                            How about when “local community” means “relatively authoritarian government”? (Really, in any situation, the word “community” feels very dishonest here lol)

                            I trust any U.S. company way more, because the U.S. does not have power over me.

                            HTTPS-Everywhere eliminates a whole class of Internet firewalls and malware scanners

                            Yeah, and it prevents ISPs from injecting their damn ads, and prevents e.g. your employer from reading, in plaintext, all the content you see.

                            Any filtering should happen in the browser because of the end-to-end principle. Any kind of tampering in between the servers and the browser is fundamentally broken and stupid.

                          1. 3

                            Thank you @gerikson, @cnst.

                            I’ve merged story towcaw into story h2t3qa, the opposite direction of what you requested, gerikson. cnst observed that story h2t3qa is the primary source, with story towcaw responding to it. The stories were submitted so close in time (1-2 hours) to each other that I’m persuaded by the primary-source claim.

                            1. 2

                              The opposite — the other article has a title that’s very one-sided and misleading, plus, this one is the original source.

                            1. 27

                              A very bad day for privacy and internet freedom everywhere, great victory for the NSA. All your DNS traffic will now go to a single monopoly under US jurisdiction — Cloudflare.

                              Here’s a useful comment showing the way to block this malignant traffic from leaving your network:

                              Here’s the prior discussions for the issues with DoH in general and with Cloudflare in particular:

                              It’s especially ironic that Mozilla is turning it on first in the US of A — literally a country composed of a collection of independent states, now all tracked under a single monopoly DoH provider. The only hope is that someone in the government will eventually wake up and see the issue of a single entity controlling a share of consumer and business traffic that AT&T could only dream of; in an ideal world, Cloudflare should be the prime target of antitrust legislation in the next decade.
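
                              For the impatient, the gist of the network-level blocking (a sketch; the resolver IPs and firewall syntax will vary with your setup):

                                # Firefox checks a "canary domain" before enabling DoH by default;
                                # answering NXDOMAIN for it opts your network out, e.g. in dnsmasq.conf:
                                #   local=/use-application-dns.net/
                                # Belt and braces: refuse direct HTTPS to the public resolver, too:
                                iptables -I FORWARD -d 1.1.1.1 -p tcp --dport 443 -j REJECT
                                iptables -I FORWARD -d 1.0.0.1 -p tcp --dport 443 -j REJECT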

                              1. 12

                                Mozilla advertises ‘privacy’ in literally the second sentence of the Firefox download page

                                And yet they continue to depend on proprietary Google tracking bits in order to generate a UUID (lol), and now this. Mozilla needs a major change in direction if they’re going to actually provide a product that respects user privacy.

                                1. 7

                                  This feels like the typical response from the geek world where if Firefox only gets 99.99% of things right instead of exactly 100%, they will be portrayed and talked about as if they actually managed 0%.

                                  The threats involved in using your ISP’s DNS are pretty clear, and pretty clearly are attacks on your privacy. DoH is a significant upgrade over that, and the provider they chose to go with has taken steps to try to make it verifiable that they will not present the same kind of threat as your ISP.

                                  But because it only gets part of the way to where certain people would like to be, we get threads like this one, where the perfect is not just the enemy of the good, but is actively seeking to hinder and impair the good by any means available.

                                  1. 3

                                    The threats involved in using your ISP’s DNS are pretty clear, and pretty clearly are attacks on your privacy.

                                    The threats involving the world’s largest ad company and the threats involving a leading collector of internet traffic are pretty clear, and clearly are attacks on your privacy. So by your standards, ‘good’ is choosing one bad actor over another bad actor, when ‘good’ should really be avoiding all bad actors. ‘Perfect’ would be something like seamlessly integrating Tor, etc. (which no one here is asking for).

                                    1. 2

                                      I don’t much like Cloudflare, but Mozilla seems to have used their leverage to enforce terms which are far more favorable to your privacy than anything a widely-available consumer ISP is going to offer. So, again, this seems to be a “they only got to 99.99% of what I want, not 100%”, and from there it’s being spun as complete failure.

                                      If you have actual demonstrable proof that Cloudflare is not abiding by those terms, feel free to bring it up.

                              1. 10

                                A company made some mistake, pissed off user assumes malice and posts their rant to ‘hacker’ ‘news’, and the company ‘makes it right’. Why does this belong on lobste.rs?

                                1. 0

                                  A company made some mistake, pissed off user assumes malice and posts their rant to ‘hacker’ ‘news’, and the company ‘makes it right’. Why does this belong on lobste.rs?

                                  Did you not read past the erroneous tone disclaimer, or are you just trolling here? Where did Cloudflare make it right? What’s this whole “‘hacker’ ‘news’” in individual quotation marks that you’re referring to? Where did the user assume “malice”? What “mistake” did the company make, when the OP clearly writes that all Cloudflare did was follow its own known policy of not notifying anyone about the nuking of a paying customer’s site, and a simple Google search reveals it’s an issue known to the public at large for like two years now?

                                  Most importantly, why do you assume malice on the part of the victim in this story and believe Cloudflare, the perpetrator? And why do you scorn the victim for doing exactly what Cloudflare told them to do — post the question in public forums, because they’re no longer a priority customer after having had the product they bought removed from their profile?

                                  The OP has had their whole website and email nuked, potentially lost a lucrative contract with a client (10k USD+?), and at the very least potentially lost several days’ worth of billable time (at a kilobuck per day?), and here the tone police are telling him he’s too quick to assume bad faith (???) on Cloudflare’s part when Cloudflare’s CTO chimed in for damage control, emptily saying they’re “investigating”? (As their CTO always does on HN, BTW.)

                                  Note that Cloudflare’s CTO has still neither disputed nor apologised for Cloudflare’s black-box policy of nuking your whole DNS without any notification (email or otherwise). This is probably the user’s biggest complaint: Cloudflare didn’t even bother to tell him about this intentional takedown on their part. It’s been almost a day now, with no updates; will Cloudflare be making it right by reimbursing the OP for the lost opportunities? Or is the victim supposed to issue a full official apology for being a victim of this awesome registrar with such a great CTO that’s “investigating” all issues that hit the media?

                                  1. 0

                                    So you can complain about it =)

                                  1. 2

                                    No, it’s not. It just gives you a false sense of privacy.

                                    1. 1

                                      How do you mean?

                                      Obviously the use of your email for crimes is out of the picture as a subpoena solves that. And global passive adversaries are always going to be watching.

                                      But for the average person, my-pseudonymous-address@emailprovider.com should be sufficient for communicating with other people who lack subpoena or NSA powers, no?

                                      1. 4

                                        An awful lot of people have subpoena powers.

                                        For instance, if you have ever used your personal email address for any communications with your work colleagues, a case involving your workplace could subpoena your emails. You might even be so lucky as to have them semi-publicly accessible afterwards.

                                    1. 1

                                      So at this point we assume that there are more nasty bugs in OpenSMTPD and that people wearing various colours of hat are looking for them.

                                      1. 5

                                        I mean, I assume that about everything. From the machines that make my shoes to the laptop I’m typing on now. ;-P

                                        Vain attempts at comedy aside, I really do think it’s safe to assume there are many vulnerabilities in all complex systems (I would classify MTAs as complex). And if there truly is no vulnerability in <insert doohickey here>, there’s likely a vulnerability in <this other doohickey> deployed on the same server.

                                        I’m a pessimistic realist who realizes we’re all human and prone to mistakes.

                                        1. 2

                                          Well this is one that’s getting some attention right now :)

                                          What’s most disappointing is that OpenSMTPD doesn’t seem to do much in the way of privilege separation. There’s no reason for the MTA to run as root or have world-writable directories or any of that mess, unless you’re trying to preserve the 90s UNIX desktop experience of your mbox in /var/spool/mail and procmail “cleverness”. I’m sure there’s an audience for that, but why is that in OpenBSD’s default MTA?

                                          Are they running fingerd and ytalk too? If we’re going for the retro experience over security let’s just use telnet! :)

                                          1. 1

                                            It is privsep’d to some degree:

                                            $ ps axu | grep smtpd
                                             2083 root      0:00 /usr/sbin/smtpd -F
                                             2085 smtpd     0:00 smtpd: klondike
                                             2086 smtpd     0:00 smtpd: control
                                             2087 smtpd     0:15 smtpd: lookup
                                             2088 smtpd     0:03 smtpd: pony expres
                                             2089 smtpq     0:00 smtpd: queue
                                             2090 smtpd     0:00 smtpd: scheduler
                                            

                                            I’m not familiar enough with OpenSMTPD to tell you why this specific code isn’t in one of the privsep’d parts.

                                        2. 0

                                          Does anyone actually use it outside of OpenBSD? I’d imagine no one really does, so not that many people would be looking for these; OTOH, finding a bug in OpenBSD software always adds extra points to the rep, doesn’t it? (I guess it might not anymore, if these reports are to continue.)

                                          1. 3

                                            On Linux. There was a thread on a forum recently, and many reported moving (or having already moved) to OpenSMTPD from Exim/Postfix, as they found it easy to work with and the security responses impressively quick.

                                            I guess quite a few secholes will be uncovered now that OpenBSD and its sibling projects are getting more attention from security people (probably because they’re an easy win, as they don’t utilize as many mitigations/defense-in-depth methods as other operating systems, and have been neglected due to their relatively small user base).

                                            I’m also using it on a few machines (Linux and OpenBSD), though only for mail forwarding, but I plan to set up a complete mail infrastructure based on it in the near future, to evaluate a complex setup.

                                            1. 2

                                              It’s available on pretty much all Linux distros as a package, so I’d say yes. I’ve been using it for years myself on FreeBSD and Linux.

                                              1. 2

                                                Yes, on Linux.

                                                1. 2

                                                  I’m just a couple weeks away from deploying an OpenSMTPD installation for HardenedBSD’s build infrastructure. It’ll be an internal-only deployment, though, just to pass emails between systems to a centralized internal mbox.

                                                  1. 1

                                                    I did use it for a while, but not on my main mail server. It was nice to work with, but I didn’t look at the code, and I’m not really able to audit C code anyway.

                                                1. 9

                                                  Securing an MTA must be a cursed job.

                                                  Back in the old days we had near-weekly RCEs in Sendmail and Exim, and these days it’s OpenSMTPD, with strong ties to the f’ing OpenBSD project. That’s the one project I’d least expect an RCE from, much less two in as many months.

                                                  Email is hard.

                                                  1. 5

                                                    It’s actually 3 — this one has two separate CVEs in a single release, including a full local escalation to root on Fedora, where Fedora-specific bugs add an extra twist (CVE-2020-8793).

                                                    The other bug here (CVE-2020-8794) is a remote one in the default install, although a local user still has to initiate an action that triggers an outgoing connection to an external mail server of the attacker; so I guess OpenBSD might not count it towards the famous count of only two remote holes in the default install.

                                                    1. 2

                                                      I guess OpenBSD might not count it towards the famous count of only two remote holes in the default install

                                                      I feel like that would be disingenuous. I realize it’s not enabled by default in a way that’s exploitable, but in the default install there’s literally nothing running that’s even listening (you can enable OpenSSH in a default install, I suppose); this is of course the correct way to configure things by default. However, the statement then degenerates to “no remotely exploitable bugs in our TCP/IP stack and OpenSSH”… which is awesome, but…

                                                      (Also, it’s easy to criticize: I’ve never written enterprise grade software used by millions.)

                                                      1. 1

                                                        Can you explain more about why you think that’s disingenuous? OpenBSD making this claim doesn’t seem different to me than folks saying that this new bug is remotely exploitable. It’s very specific and if something doesn’t meet the specific criteria then it doesn’t apply. Does that make sense?

                                                        It is my opinion that the statement should be removed – not because it’s not accurate but because I just think it’s tacky.

                                                        1. 4

                                                          IMHO it’s disingenuous because it implies that there are only two remote holes in a heck of a long time on a working server. It’s like saying “this car has a 100% safety record in its default state,” that is, turned off.

                                                          (I’m reminded of Microsoft bragging about Windows NT’s C2 security rating, while neglecting to mention that it got that rating only on a system that didn’t have a network card installed and its floppy drive glued shut.)

                                                          I’m not sure if they include OpenSSH in their “default state” (I think it is enabled by default), but other than OpenSSH there’s nothing else running that’s remotely reachable. Most people want to use OpenBSD for things other than just an OpenSSH server (databases, mail servers, web servers, etc.), and they might get an inflated sense of security from statements like that.

                                                          (Note that OpenBSD is remarkably secure and their httpd and other projects are excellent and more secure than most alternatives, but that’s not quite the point. Again, it’s easy for me to criticize, sitting here having not written software that has been used by millions.)

                                                          1. 2

                                                            I appreciate you taking the time to elaborate. I think the claim is tacky as it seems to be more provocative than anything else – whether true or not. I don’t think it’s needed because I think what OpenBSD stands for speaks for itself. I think I understand why the claim was used in the past but this conversation about it comes up every time there’s a bug – whether remote or not. The whole thing is played out.

                                                            1. 2

                                                              AFAIK OpenSMTPD is enabled by default, but does local mail delivery only with the default config. This makes the claim about “only 2 remote holes” still stand, though I agree with your bullshit-o-meter reading of this slogan. But hey, company slogans are usually even more bullshit-ridden, so I don’t care.

                                                        2. 1

                                                          You’re saying a local user has to do something to make it remote? Can you explain how that makes it remote?

                                                          1. 2

                                                            One of the exploitation paths is parsing responses from remote SMTP servers, so you need to request that OpenSMTPD connect out to an attacker-controlled server (e.g. by sending email).

                                                            It looks like on some older versions there’s a remote root without local user action needed…

                                                            1. 1

                                                              I reckon I’ll go back and read the details again. However, if something requires that a local user do a very specific thing under very specific circumstances (attacker-controlled server, etc.) in order to exploit it, that does not jibe with my definition of remote.

                                                              1. 3

                                                                Apparently you can remotely exploit the server by triggering a bounce message.

                                                        3. 2

                                                          Step zero is don’t run as root and don’t have world writable directories.

                                                          .

                                                          .

                                                          .

                                                          Sorry, was I yelling?

                                                          1. 4

                                                            Mail is hard that way in that the daemon needs to listen on privileged ports and the delivery agent needs to write into directories only readable and writable by a specific user.

                                                            Both of these parts require root rights.

                                                            So your step zero is impossible to accomplish for an MTA. You can use multiple different processes and only run some privileged, but you cannot get away with running none of them as root if you want to work within the framework of traditional Unix mail.

                                                            Using port redirection and virtual users exposing just IMAP, you can work around those issues, but then you’re leaving the traditional Unix setup and adding more moving parts to the mix (like a separate IMAP daemon), which might or might not bring additional security concerns.

                                                            1. 2

                                                              At least on Linux there’s a capability for binding to privileged ports that is not, by itself, equivalent to root.
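
                                                              A sketch (the binary path is illustrative):

                                                                # CAP_NET_BIND_SERVICE allows binding ports below 1024 without root:
                                                                setcap 'cap_net_bind_service=+ep' /usr/local/sbin/mydaemon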

                                                              1. 3

                                                                Yes, or you redirect the port. But that still leaves mail delivery.

                                                                As I said in my original comment: email is hard, and that’s ok. I take issue with people reducing these vulnerabilities (or any issue they don’t fully understand) to “just do X - it’s so easy” (which is a strong pointer that they don’t understand the issue).

                                                                Which is also why I sit on my rant about still using C for (relatively) new projects when safer languages exist. Oh boy, is it tempting to drop a quick “buffer overflows are entirely preventable in as-performant but more modern languages like Rust; why did you have to write OpenSMTPD in C?”, but I’m sure there were good reasons - especially for people as experienced and security-focused as the OpenBSD folks.

                                                                1. 3

                                                                  It’s hard if you impose the constraint that you need to support the classical UNIX model of email that was prevalent from the late 70s to the mid 90s. I was once very attached to this model, but it’s based on UNIX file-system permissions that are hard to reason about and implement safely and successfully. The OpenSMTPD developers didn’t make these mistakes because they’re stupid; it’s really, really hard. But it’s an unfortunate choice for a security-focused system to implement such a hard model for email rather than making POP/IMAP work well, or some other approach to getting email under the control of the recipient without requiring privileges.

                                                              2. 1

                                                                Not sure either of these is a true requirement; they’re more of a self-imposed traditional limitation.

                                                                Lower ports being bindable only by root could easily be removed; given that Linux has better security mechanisms to restrict lower-port binding, like SELinux, I’m not even sure why the kernel still imposes this moronic concept on people. Mail delivery (maildir, mbox, whatever zany construct) can also be done by giving limited read/write access to the specific user and the MDA. Hell, MAIL on my system just points to /var/spool/mail, which is owned by root anyhow.
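
                                                                In fact, on recent kernels the cutoff itself is tunable (a sketch, assuming Linux 4.11+):

                                                                  # Lower the privileged-port boundary; 0 removes the reserved-port concept:
                                                                  sysctl net.ipv4.ip_unprivileged_port_start=0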

                                                                1. 1

                                                                  SELinux isn’t everywhere.

                                                          1. 4

                                                            FWIW, I just noticed the following amusing snippet on openbsd.org:

                                                            http://www.openbsd.org/security.html#reporting

                                                            If you wish to PGP encode it (but please only do so if privacy is very urgent, since it is inconvenient) use this pgp key.

                                                            I cannot say that I disagree with the statement or the sentiment. We’re supposed to be PGP-aware in NetBSD, but from the looks of it, most folks seem to find it as pointless as the author of the above statement does.

                                                            1. 9

                                                              For context, Max wrote NVMM, so he presumably knows what he’s talking about.

                                                              1. 5

                                                                www.xkcd.com/538/

                                                                If any criminal interests procure this domain name, the US government will simply confiscate it like they’ve done with countless other .com domain names over the years.

                                                                1. 5

                                                                  Only big companies have R&D money for developer tooling. The important part is the product that you are trying to build.

                                                                  I have been in companies that have a good setup, yet everything they use is off-the-shelf or open source tooling. This is a much better situation to be in, because all the knowledge gathered is transferable to the next company.

                                                                  1. 2

                                                                    I have been in companies that have a good setup, yet everything they use is off-the-shelf or open source tooling. This is a much better situation to be in, because all the knowledge gathered is transferable to the next company.

                                                                    Exactly. If you’re using these proprietary tools, you’re gaining knowledge that has no applicability at the new place. It’s the same as programming in a proprietary language used by a single workplace, or using anything proprietary. It should be a big deal-breaker for anyone who’s interested in portable knowledge and workplace mobility.

                                                                    Even if it’s OSS, if it’s not popular enough, it could still interfere with the marketability of your skills, and if it’s full-on proprietary, then most people wouldn’t even know much about it in the first place, and would have no practical way of learning it even if they kind of wanted to, leaving you with fewer tickboxes of applicable experience.

                                                                    The more you think about it, the more the premise of the article seems interesting, but the conclusion and title couldn’t be more wrong.

                                                                  1. 7

                                                                    A very one-sided post. The premises are nice, but the conclusion is wrong.

                                                                     Using all software-as-a-service? I wish more people would call this out; it’s just that most companies are not considered “infrastructure” shops, so all these tools get delegated outside of the org.

                                                                     I also think there’s a pretty big difference between using someone else’s software that’s closed and proprietary, and/or provided only as a service, versus using OSS, possibly with a support contract. I guess if you’ve got a support contract, you might not be justified in hacking on it yourself either, but this all depends.

                                                                     Also, the solution would be to release more OSS from within the org, and to open up any and all proprietary software that you may have developed in-house, so that, if necessary, you could continue to benefit from these tools even once you depart. However, this has to be done in a smart way: once you get permission from the supervisor chain and legal to open up a tool, you have to do it in one go, without notifying all internal customers, with a public announcement loud enough to make it impossible to revert the decision — the announcement has to be wide, and there have to be many clones of the repo, such that if sudden corporate detractors are found who want to undo the release, they have no practical way of doing so. E.g., if you’re part of a very big org, make sure to have an HN post and Twitter announcements ready BEFORE you change the GitHub permissions; otherwise, some random people from other teams of your big org, who might be mere non-developer consumers of your product, might ruin your show. Source: it did happen to a project I worked on.

                                                                    1. 3

                                                                       Lol, that plan sounds oddly specific. Did you do it before and get burned? Or did you do it before and get it right?

                                                                      1. 2

                                                                         Yeah, we in the server team got permission to open-source a server implementation, but the owner of the desktop client team/experience saw the new permissions on GitHub and moved against it. Lesson learned: HN early, HN often.

                                                                        And, it was a worry for Erlang in Ericsson, too; gladly, they did better than us:

                                                                        https://www.erlang-solutions.com/blog/twenty-years-of-open-source-erlang.html

                                                                        Until Erlang was out, many did not believe it would happen. There was a fear that, at the last minute, Ericsson was going to pull the plug on the whole idea. Open Source, a term which had been coined a few months earlier, was a strange, scary new beast large corporations did not know how to handle. The concerns Ericsson had of sailing in uncharted territory, rightfully so, were many. To mitigate the risk of Erlang not being released, urban legend has it that our friend Richard O’Keefe, at the time working for the University of Otago in New Zealand, came to the rescue. Midnight comes earlier in the East, so as soon as the clocks struck midnight in New Zealand, the erlang.org website went online for a few minutes. Just long enough for an anonymous user to download the very first Erlang release, ensuring its escape. When the download was confirmed, the website went offline again, only to reappear twelve hours later, at midnight Swedish time. I was in Dallas, fast asleep, so I can neither confirm nor deny if any of this actually happened. But as with every legend, I am sure there is a little bit of truth behind it.

                                                                    1. 6

                                                                      This just made me realise that these chargers are probably not running on Free Software.

                                                                       Is it possible to update their firmware? If so, Richard Stallman would not approve of using these non-free chargers. We should not even mention them, lest anyone think it’s acceptable to use them for anything.

                                                                      1. 8

                                                                        Is it really NetBSD policy to take cheap shots at RMS?

                                                                        1. 1

                                                                           We should not even mention them, lest anyone think it’s acceptable to use them for anything.

                                                                          Miserable heretic! May Stallman break down your door at 3 AM and beat you savagely with the head of a dead penguin.

                                                                          (/s)

                                                                        1. 11

                                                                          I’m very skeptical of the numbers. A fully charged iPhone has a battery of 10-12 Wh (not kWh), depending on the model. You can download more than one GB without fully depleting the battery (in fact, way more than that). The 2.9 kWh per GB is totally crazy… Sure, there are towers and other elements to deliver the data to the phone. Still.

                                                                           The referenced study doesn’t show those numbers, and even its estimate of 0.1 kWh/GB (page 6 of the study) takes into account a lot of old infrastructure. On the same page they give numbers for 2010, but even then the consumption using broadband was estimated at 0.08 kWh/GB, with 2.9 kWh/GB only for 3G access. Again, in 2010.

                                                                           Using that figure for 2020 consumption is totally unrealistic to me… It’s probably lower by a factor of at least 30… Of course, this number will keep going down as more efficient transfers are rolled out, which seems to be happening already, at an exponential rate.

                                                                          So don’t think that shaving a few kbytes here and there is going to make a significant change…

                                                                          1. 7

                                                                             I don’t know whether the numbers are right or wrong, but I’m very happy with the alternative direction here, and another take on the bloat that the web has become today.

                                                                             It takes several seconds on my machine to load the website of my bank, a major national bank used by millions of folks in the US (Chase). I looked at the source code, and it’s some sort of encoded (base64-style, not code-minimisation-style) JavaScript gibberish, which looks like it uses several seconds of my CPU time each time it runs, in addition to making the website and my whole browser unbearably slow, prompting the slow-site warning to pop in and out, and often failing to work at all, requiring a reload of the whole page. (No, I haven’t restarted my browser in a while, and, yes, I do have a bunch of tabs open — but many other sites still work fine as-is, yet not Chase.)

                                                                            I’m kind of amazed how all these global warming people think it’s OK to waste so many of my CPU cycles on their useless fonts and megabytes of JavaScript on their websites to present a KB worth of text and an image or two. We need folks to start taking this seriously.

                                                                             The biggest cost might not be the actual transmission, but rather the wasted cycles from having to rerender complex designs that don’t add anything to the user experience — far from it: they make things slow for lots of people who don’t have the latest and greatest gadgets and don’t devote their whole machine to running a single website in a freshly-reloaded browser. This also has the side effect of people needing to upgrade their equipment on a regular basis, even if the amount of information they need to access — just a list of a few dozen transactions from their bank — hasn’t changed that much over the years.

                                                                            Someone should do some math on how much a popular bank contributes to global warming with its megabyte-sized website that requires several seconds of CPU cycles to see a few dozen transactions or make a payment. I’m pretty sure the number would be rather significant. Add to that the amount of wasted man-hours of folks having to wait several seconds for the pages to load. But mah design and front-end skillz!

                                                                            1. 3

                                                                               Chase’s website was one of two reasons I closed my credit card with them after 12 years. I was traveling and needed to dispute a charge, and it took tens of minutes of waiting for various pages to load on my smartphone (a Nexus 5X, connected to a fast ISP via WiFi).

                                                                              1. 2

                                                                                The problem is that Chase, together with AmEx, effectively have a monopoly on premium credit cards and travel rewards. It’s very easy to avoid them as a bank otherwise, because credit unions often provide a much better product, and still have outdated-enough websites that simply do the job without whistling at you all the time, but if you’re into getting the best out of your travel, dealing with the subpar CPU-hungry websites of AmEx and Chase is often a requirement for getting certain things done.

                                                                                (However, I did stop using Chase Ink for many of my actual business transactions, because the decline rate was unbearable, and Chase customer service leaves a lot to be desired.)

                                                                                What’s upsetting is that with every single redesign, they make things worse, yet the majority of bloggers and reviewers only see the visual “improvements” in graphics, and completely ignore the functional and usability deficiencies and extra CPU requirements of each redesign.

                                                                            2. 9

                                                                              Sure, there are towers and other elements to deliver the data to the phone. Still.

                                                                              Still what? If you’re trying to count the total amount of power required to deliver a GB, then it seems like you should count all the computers involved, not just the endpoint.

                                                                              1. 4

                                                                                 “Still, it’s too big of a difference.” Of course you’re right ;-)

                                                                                 The study estimates the consumption at 0.1 kWh/GB in 2020. The 2.9 kWh/GB figure is an estimate for 2010.

                                                                                1. 2

                                                                                   I see these arguments all the time about the “accuracy” of which study’s predictions are “correct”, but these studies are predictions of the average consumption for transport alone, and very old equipment is still in service in many, many places in the world; you could easily be hitting some of that equipment on some requests, depending on where your data hops around! We all know an average includes many outliers, and perhaps the average case is far less common than the others. In any case, wireless is not the answer! We can start trusting numbers once someone develops the energy-usage equivalent of dig.

                                                                                2. 3

                                                                                  Yes. Let’s count a couple.

                                                                                   I have a switch (an ordinary cheap switch) here that’ll receive and forward 8Gbps on 5W, so it can forward about 720,000 gigabytes per kWh, or roughly 0.0000014kWh/GB. That’s the power supply rating, so it’ll be higher than the peak power requirement, which in turn will be higher than the sustained draw, and big switches tend to be more efficient than this small one, so the real number may have another zero. Routers are like switches wrt power (even big fast routers tend to have low-power 40MHz CPUs and do most routing in a switch-like way, since that’s how you get a long MTBF), so if you assume that the sender needs a third of that 0.1kWh/GB, the receiver a third, and the networking a third, then… dumdelidum… the average number of routers and switches between the sender and receiver must be north of 20,000. This doesn’t make sense.
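
                                                                                   Spelled out as a quick sanity check (rounded):

                                                                                     # 8 Gbit/s = 1 GB/s, i.e. 3600 GB forwarded per hour on 5 W (0.005 kWh):
                                                                                     echo '3600 / 0.005' | bc            # 720000 GB per kWh
                                                                                     echo 'scale=8; 0.005 / 3600' | bc   # ~0.0000014 kWh/GB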

                                                                                   The numbers don’t make sense for servers either. Netflix recently announced getting ~200Gbps out of its new hardware. At 0.03kWh/GB, pushing 25 GB/s would require about 2,700kW sustained. Have you ever seen such a power supply? A single rack of such servers would need tens of megawatts.

                                                                                  1. 1

                                                                                     There was a study that laid out the numbers, but the link seems to have died recently. It stated that about 50% of the energy cost for data transfer was datacenter costs, with the rest spread out thinly over the network on the way to its destination. Note that datacenter costs do not just involve the power supply for the server itself, but also all related power consumption, like cooling.

                                                                                    1. 2

                                                                                       ACEEE, 2012… I seem to remember reading that study… I think I read it when it was new, and when I multiplied its numbers by Google’s size and by a local ISP’s size, I found that both of them should have electricity bills far above 100% of their total revenue.

                                                                                       Anyway, if you change the composition that way, there must still be at least 7,000 routers/switches on the path, or else some of the switches must use vastly more energy than the ones I’ve dealt with.

                                                                                       And on the server side, >95% of the power must go towards auxiliary services. AIUI cooling isn’t the major auxiliary service; preparing data to transfer costs more than cooling. Netflix needs to encode films, Google needs to run Googlebot, et cetera. Everyone who transfers a lot must prepare data to transfer.

                                                                                3. 4

                                                                                  I ran a server at Coloclue for a few years, and the pricing is based on power usage.

                                                                                   I stopped in 2013, but I checked my old invoices, and monthly power usage fluctuated between 18.3 kWh and 23.58 kWh, with one outlier at 14 kWh. That’s quite a difference! This is all on the same machine (a little Supermicro Intel Atom 330) with the same system (FreeBSD).

                                                                                   This is from 2009-2014, and I can’t go back and correlate it with what the machine was doing, but fluctuating activity seems the most logical explanation. Would be interesting if I had better numbers on this.

                                                                                  1. 2

                                                                                     With you on the skeptic train: I’d love to see where this estimate comes from:

                                                                                    Let’s assume the average website receives about 10.000 unique visitors per month

                                                                                     It seems way too high. We’re probably looking at a Pareto distribution, and I don’t know if my intuition is wrong, but I have the feeling that the average WordPress site sees way, way fewer visitors than that.

                                                                                     Very curious about this now; totally worth some more digging.

                                                                                  1. 3

                                                                                    Ouch.

                                                                                     The only saving grace is that only mbox delivery is affected; if you’re doing Maildir, you don’t have to worry about having to rebuild the whole box from scratch: