1. 21
  1. 27

    Everyone is focused on Google, but it seems to me the core problem is Let’s Encrypt. It’s a perversion of the certificate model that they should be checking for malware at all. The cert verifies the domain name, not that the content is organic shade-grown goodness. This shouldn’t be happening even if they are hosting malware.

    1. 6

      I know people here think I’m irresponsible in saying this, but my website has no SSL, because I can’t bring myself to use Let’s Encrypt without guaranteed renewal - I don’t want links to an SSL version which can randomly become untrusted at any time. Things like this make that point pretty well.

      I’ve also come to realize that “the cert verifies the domain name”, and it seems kind of odd that domain registration and cert issuance aren’t one and the same thing. The only explanation I have for why these two businesses aren’t completely fused at this point is the money made from “real” (not Let’s Encrypt) certificates, and cert issuers want to retain that cash flow. It’s hard to believe they’re really validating identity, since that’s time consuming and hard, and the number of bogus certificates issued is well known.

      1. 2

        I’ve also come to realize that “the cert verifies the domain name”, and it seems kind of odd that domain registration and cert issuance aren’t one and the same thing. The only explanation I have for why these two businesses aren’t completely fused at this point is the money made from “real” (not Let’s Encrypt) certificates, and cert issuers want to retain that cash flow.

        I think that it’s more that at one point in time folks thought that there needed to be a check on the ICANN hierarchy: how could I know that ibm.com is really IBM? After all, the com registrar might have just given that name to Intelligent Bicycle Market. But 20+ years of experience have shown, I think, that that was ultimately a dead end: users end up trusting Google more than CAs anyway, and very few people are typing in domains by hand. And now, with Let’s Encrypt, certificates are just validating that the site is served by the owner of the domain name.

        Given that, it does indeed make sense that receiving a certificate should be part of the domain-registration process. I’d go further than that: receiving a certificate should be part of the host-registration process: every server should have (at least) two certificates, one attesting that it is allowed to use a DNS name, and one attesting that it is allowed to use an IP address. If these two certificates are applied to the key which signs a communication, then every communication could be properly signed and MITM attacks would be prevented.
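        A rough sketch of how such dual attestation could be checked on the client side, using the tuple-of-tuples shape that Python’s ssl.SSLSocket.getpeercert() uses for subjectAltName. The names and addresses below are documentation placeholders, and the scheme itself is the commenter’s hypothetical, not how certificates are issued today:

```python
import ipaddress

# Hypothetical certificate data: one SAN entry attesting the DNS name and
# one attesting the IP address, mirroring the subjectAltName format that
# ssl.SSLSocket.getpeercert() returns.
san = (
    ("DNS", "example.org"),
    ("IP Address", "203.0.113.7"),  # 203.0.113.0/24 is a documentation range
)

def attests(san, hostname, ip):
    """True only if the SAN entries cover both the DNS name and the IP."""
    dns_ok = any(kind == "DNS" and value == hostname for kind, value in san)
    ip_ok = any(
        kind == "IP Address" and ipaddress.ip_address(value) == ipaddress.ip_address(ip)
        for kind, value in san
    )
    return dns_ok and ip_ok

print(attests(san, "example.org", "203.0.113.7"))   # True: both attested
print(attests(san, "example.org", "198.51.100.1"))  # False: IP not attested
```

        Note that certificates already support IP-address SANs; what the comment proposes (issuing both attestations automatically at registration time) would be a policy change, not a format change.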

      2. 1

        That sounds more like a value add to me.

        1. 4

          And if a site uses naughty words? Should they get a special adults only cert? Keep the children safe.

          1. 2

            You said malware. There are already tools that assess a site’s reputation by checking for malware. Reputation and authenticity can be tied together to present an even better picture of whether users should interact with the site. Many already do this using multiple plugins. Security suites also bundle together protections as standard practice.

            Now, I’m not saying they should, but that there’s potential value and precedent.

            1. 3

              How should a user opt out of this “protection”?

              1. 1

                For one, users can’t easily opt out of much of what runs the web, whether infrastructure or tracking. Let’s keep that in mind. From there, one might not install the plugin, disable the setting, use a different CA, ignore that kind of warning, and so on.

                It will depend on how implementers handle it on each side. I can’t tell you how it would play out.

                1. 2

                  What plugin? What setting? How does a visitor to a site use a different CA? Ignore the warning that says the cert has expired? But the warning says nothing about why LE chose to force it to expire.

                  There’s a specific problem here and you’re kinda “but maybe it’s a good thing” hand waving.

                  1. 1

                    Let’s roll this back a bit.

                    “It’s a perversion of the certificate model that they should be checking for malware at all.”

                    This is what I was responding to when I said they might do more value-adds. It was hypothetical. You had a lot of specific questions, which I wasn’t even trying to explore; I was basically reacting to that quote.

                    My main intent was to counter your claim that certificate authorities should never offer extra services/benefits. My main evidence is that value-adds are standard practice in the security industry, especially for anti-malware. CAs might try to do this, too, for reasons of profit or public benefit.

                    I have no position on the OP issue since I haven’t done enough research on it for a very informed opinion. Low priority for me for now.

                    1. 2

                      Ok, fair enough. Just pretend I wrote “objectionable content” instead of malware. :)

                      1. 2

                        In that case, I’d agree with you. I definitely don’t want CAs involved in such filtering. :)

          2. 2

            I’m not so certain, because I don’t think that’s the role of a certificate authority: their business is identity, not sorting the goats from the sheep.

            1. 1

              Business is whatever makes money. If you can sell more products, you sell more products. If you’re a public benefit, you might offer more services to offer more benefit. Companies even sometimes diversify into totally different markets so a downturn in one doesn’t impact them overall. Those are conglomerates.

              So, the distinction you’re making is sensible, but it’s not a hard limit on CAs. They can do whatever they want with their money to try to make money. They can even sell cheeseburgers. The Comodo Burger with Black Angus beef sounds tempting. Realistically, they’ll stay close to their current market. Assessing sites instead of just identifying them is already a sub-market with lots of competing solutions, commercial and FOSS. I use at least one at any given time.

        2. 13

          Gmail casually puts all my email into spam, despite my having SPF and DKIM and owning the IP address for almost 3 years now.

          There is no way to fix that. Whenever I try their tools, it seems that I’d need to become a bulk sender first.
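          For anyone hitting the same wall, the commonly recommended baseline is DNS records along these lines (the domain, IP and key are placeholders, not taken from the comment above):

```dns
; SPF: authorize only this IP to send mail for the domain
example.com.                  IN TXT "v=spf1 ip4:203.0.113.7 -all"

; DKIM: public key published under a selector of the sender's choosing
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<base64 public key>"

; DMARC: tell receivers how to treat mail that fails SPF/DKIM alignment
_dmarc.example.com.           IN TXT "v=DMARC1; p=none; rua=mailto:postmaster@example.com"
```

          As the parent notes, though, passing these checks is evidently necessary but not sufficient for Gmail.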

          1. 3

            I noticed this years ago and wrote a post about it:


            Both Google and Microsoft require you to send a lot of e-mail from one address for it to get whitelisted. It’s really quite bizarre. I even moved my e-mail server recently to OpenBSD on a Vultr node. They don’t allow SMTP by default and I had to request the block be removed from my account. So in theory I shouldn’t be on any SMTP-noisy subnets.

            Outbound e-mail still ends up in spam for some of my Microsoft accounts (but not all of them). I don’t get it. They really have imposed a huge barrier to reliable e-mail.

            1. 2

              I gave up trying to fight. Google doesn’t work in China, and their hosted crap was the same price as Microsoft’s, but MS gives you full Office access, so I went with Office 365. They have a feature in which you can have them front a domain but then send all the messages to an SMTP server, just as you can relay mail through them as a smart host.

              Doing this I can still run my own email server, and kind of pretend to be self-hosting, but at least people get my email now, as it’s been sent from MS instead of being sent by me. Of course, this means that for the many people who try to self-host their email, it gets flagged as spam. It’s amazing how anti-competitive open things like email have become.

              1. 1

                Gmail casually puts a small number of my emails in spam but lets the rest through. All my emails are pretty much the same plain-text emails to people who have usually contacted me first, but every now and then I have to send a follow-up to one that was marked as spam.

                1. 1

                  Now that sounds rather weird.

              2. 12

                I feel very uneasy about the safe browsing thing. Not only is it opaque and hostile to webmasters, it’s outright anti-competitive.

                I’ve seen it blacklist a number of independent file-sharing websites, like the late pomf.se, allegedly for distributing malware. Google Drive is not immune to it either, not just because not-yet-known malware would not be identified, but also because checks are not run on files over a certain size, so an ISO image of a game or a live CD with malware embedded in it would be ignored. Same with other big names. I haven’t seen any of them blacklisted, though.

                It also blocked entire web archiving websites for the same reason.

                I could understand if it were a warning like the one for untrusted certificates, but it also makes it nearly impossible to access the affected website.

                1. [Comment from banned user removed]

                  1. 13

                    Please stop spamming Lobsters with links to the same blog post over and over again. The article is about Google’s safe browsing tool, just like the parent comment is.

                    It seems to me you are bending this towards its literal meaning just as an excuse to link to your blog post.

                    Hence, I can’t help but call your comment spam. In fact, most of your comments link to the same post.

                    1. 2

                      Thanks for sharing your opinion.

                      I think you got it wrong: I’m not trying to draw attention to my article; I’m trying to inform people interested in web security and its tradeoffs (as @dmbaturin seems to be) about a vulnerability that, to my knowledge, affects millions of people, companies and governments.

                      I’m eager to share on Lobsters more studies and exploits about this technical issue and the cultural problems it has shown. And I will share them as soon as more are written.

                      For now I’m forced to link my own articles (or the bug report you have closed), despite the risk of being labeled a spammer.
                      Fortunately I do not care much about internet points and thus I can be freely downvoted.

                      1. 5

                        Fortunately I do not care much about internet points and thus I can be freely downvoted.

                        In any time period, 90% of commenters receive zero downvotes total. The rest of the users follow an exponential distribution, and a handful at the far end total many dozens, because of both high rates of commenting and a high percentage of those comments earning downvotes. In the last two months I’ve been opening private conversations with the handful of extreme users, asking them to recognize and reflect on their behavior, because the eventual consequence of not just failing to meet basic community norms but declaring opposition to them has been, and must continue to be, banning.

                        Stop riding this ridiculous hobbyhorse through browser threads.

                        1. 3

                          I’m neither opposing nor declaring opposition to any “basic community norm” I’m aware of.

                          I’m not saying I intend to spam. I’m saying that in that particular thread, the reference to my article was a useful (and optional) explanation for @dmbaturin of my argument that “there is no such thing as ‘safe browsing’”, and thus it was not spam, even though I was aware that people who disliked that article would downvote it (as they did: +5, -1 off-topic, -5 spam, -2 troll).

                          That argument is a technical one, proven by an exploit that shows how any site you visit can tunnel into your private network. You can disagree with my evaluation of its severity, but that doesn’t turn it into spam, or me into a troll.

                          It’s also on topic, because AFAIK Google Chrome is affected too.

                          Stop riding this ridiculous hobbyhorse through browser threads.

                          Why ridiculous?
                          I never insult anyone here, and yet I constantly get insulted (called a spammer, a troll, ridiculous, bizarre).
                          I do not care much, but I’d like to understand why you do!

                          I’m neither a troll, nor a spammer.

                          I try to obey the rules of the communities I join, and their administrators.
                          After our private exchange, I even refrained from asking @freddyb to inform Firefox users about the risks they are facing! Or even just to say whether Firefox users are vulnerable to these attacks!

                          Fine, I will not cite this set of vulnerabilities on Lobsters again.
                          TBH, I think that having taboo topics will hurt the quality of this site, but your server, your rules.

                          Still, I would really appreciate it if you could explain here why a vulnerability that lets you tunnel into a corporate network (and carry out many other attacks on users’ privacy and security) is ridiculous.
                          It’s an honest question, and I promise I will not reply further, whatever you write.

                          1. 5

                            I’m neither opposing nor declaring opposition to any “basic community norm” I’m aware of.

                            Downvotes are part of how community norms are expressed here. When you ignored the scores of people telling you for months, with downvotes and comments, that your comments are inappropriate, a mod repeatedly intervening in your discussions and messaging you is an unambiguous warning that you are violating norms. You absolutely can’t or won’t take any of this to heart, and wave it all away as internet points or a failure to divine your intentions?

                            1. 4

                              You absolutely can’t or won’t take any of this to heart and wave it all away as internet points or a failure to divine your intentions?

                              No, evidently I was not clear enough.
                              (Sorry for replying, but you are asking me a direct question, and in this comment you did not answer my question about the browsers’ vulnerability, despite my promise not to reply further, so I suppose I have to answer.)

                              Whenever I get downvoted here, I read the topic again to understand whether I got something wrong. Some downvotes I got here were well deserved, and I think they taught me about what Lobsters is about.
                              For example, the off-topic downvotes to the posts here and here, or the incorrect downvotes here, here and here (I still think that the inability to access the required information is what defines a partition, but I learned to be more careful with author names, and gained a different perspective on the CAP theorem, there).

                              An interesting lesson I learned was the 3 troll downvotes here, which were deserved not because my argument was incorrect, but because I didn’t stick to the tone of @friendlysock.
                              I’m always careful to preserve the exact same tone (polite, ironic or sarcastic) used by the people I reply to, and in that specific comment, I didn’t. Sorry friendlysock, please accept my sincere apology.

                              Most of the time, however, I receive downvotes that do not seem to comply with the Lobsters Downvote Guideline. When this happens, I usually do not get offended and do not care much, since the community officially refuses them.

                              To my eyes, most of these downvotes that do not seem compliant with the Lobsters Guidelines are usually on comments that:

                              An interesting example for understanding why I do not care about such downvotes is the conversation with @cpnielsen and @geocar in this thread about GDPR: I got a total of -3 spam, -3 incorrect and -8 troll in that thread, despite providing plenty of information, links to delve deeper, and even references to the actual law.
                              A few weeks later, I even discussed the topic with a lawyer specialized in IT (who works for a multinational Italian-based bank and was working on its GDPR compliance), and I showed him the thread.
                              According to him, I was correct: initially he described most of the comments I replied to as either FUD or plain ignorance, but when I pointed out the reference provided by geocar, he agreed that geocar was probably talking about the UK legislation, not the European one.

                              Now, as this detailed analysis (which took me almost 4 hours to write) shows, I am taking this community and its rules to heart.

                              But, in all honesty, I think I made a positive contribution here, despite the downvotes (most of which were not deserved).

                              I will go through the CSV you sent me to further explain my interpretation of the downvotes I got in these months as soon as possible. But I’m not sure you are reading the statistics correctly here. Above I have shown how 98 downvotes did not conform to the Downvote Guidelines: I do not know how many downvotes correspond to one standard deviation here, but how many standard deviations are 98 downvotes?

                              Also, I think this approach is pretty dangerous to the community itself. An actual troll or spammer might reduce the deviation of their own downvotes by downvoting others. Also, one should look at who downvotes whom, to get a clue about possible attacks from interest groups or cultural biases.

                              1. 3

                                To give more context for folks wondering: I’ve been discussing Shamar’s commenting style with him for months, in public and private, after many complaints and my own frustrations. I explained how he’s been violating site norms and antagonizing users. Every time, he’s re-litigated the technical details of a discussion rather than discuss the pattern of his behavior, even when I’ve repeatedly said that is the thing that needs to be addressed. This particular comment is a response to a private message where, after several rounds of this repetition, I sent him a CSV of all of his downvoted comments and invited him to explain the pattern. I banned him not just for this unending antagonism, which has escalated into him using Lobsters to troll Firefox developers, but for his unwillingness or inability to get out of the details to recognize and improve the pattern, however many times or forms the feedback takes. I wish him luck finding a community that welcomes his discussion style, because even at the end of this I don’t think he’s deliberately malicious.

                2. 10

                  It constantly shocks me how far-reaching Google’s ability to completely shut down a business is. Even if you use not a single Google service, they could put you on the bad-websites list (unlikely) or mark all of your emails as spam (very likely). And most businesses have no choice but to distribute their software on the Google Play store, from which Google could remove their app at any time.

                  Google’s heavy use of automated systems makes it highly likely that small businesses or personal websites get marked as bad with no oversight or appeal process. They are simply too powerful now and really have to be stopped.

                  1. 6

                    Years ago I had taken the Win32 port of NetHack and ported it to Windows CE: not the MIPS/ARM/PowerPC/SH3|4 stuff, but the CEPC (x86) version of Windows CE. Well, that tripped some alarm at “clean-mx.de”, where they downloaded the binary automatically and determined that since it was an “unknown exe type”, it was therefore a virus. They took the liberty of sending an automated email to the data center where my server was hosted, and the data center in turn threatened the colocation service I was using with termination if they didn’t destroy the contents of my server. There was no remediation and no logic. I was not even permitted to face or question my accuser.

                    I at least had offsite backups, so I terminated services, moved to a new host, and put up a username/password redirection wall to prevent direct downloads. But it’s inevitable that this will become “problematic” again at some point, as there is such a massive backlash against self-hosting by the de facto monopoly of megacorps.

                    Google, Microsoft, Yahoo, AT&T and Apple are all impossible to deal with as a small user. Just ask anyone who dares to set up their own email server. It’s an impossible situation, and unless you front your servers with theirs, you won’t be able to fully use email on the internet.

                    The situation with safe sites, and all the rest of this nonsense, is just the same. The lack of transparency and the ease of being banned from “the internet” are astounding. But we live in the era of people just letting the machine think.

                    This is the same kind of automation that flagged Bach as an exclusive artist who recorded exclusively with SONY back in the 1700s. Big AI, much like Big Data, is a fraud.

                    1. 2

                      Before everyone jumps on the “bad Google” hype: a few things here appear a bit odd to me.

                      Within the text, the author speculates that another subdomain of his domain could be the reason for the trouble (“Now it could be that some other hostname under that domain had something inappropriate”), and then continues to argue why he thinks it would be a bad thing for Google to blacklist his whole domain.

                      Sorry to say this, but: if it’s on your domain, then it’s your responsibility. If you’re not sure whether some of your subdomains may be used for malware hosting, then please get your stuff in order before complaining about evil Google. It’s widespread to regard subdomains as something not to care too much about, as can be seen by the huge flood of subdomain-takeover vulns reported in bug-bounty programs, but that doesn’t mean it’s right.

                      1. 7

                        On shared hosting services it’s pretty common to only have control over subdomains.

                        Think of GitHub as a modern-day example: you have no control over what is served on malware-friends.github.io.

                        1. 4

                          Technically, friends.github.io is its own domain, not just a subdomain.

                          github.io is on the Public Suffix List, which makes it an effective top-level domain (eTLD).
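                          Concretely, clients that consume the list compute the “registrable domain” by finding the longest matching public suffix. A toy sketch, with a tiny hardcoded subset standing in for the real file (real implementations also handle the list’s wildcard and exception rules):

```python
# Toy public-suffix matching with a tiny hardcoded subset of the real list.
SUFFIXES = {"com", "io", "github.io"}

def registrable_domain(host):
    """Return the eTLD+1: the longest matching public suffix plus one label."""
    labels = host.lower().split(".")
    # Candidates from longest to shortest; the first hit is the longest suffix.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in SUFFIXES:
            if i == 0:
                return None  # the host *is* a public suffix
            return ".".join(labels[i - 1:])
    return None

print(registrable_domain("friends.github.io"))  # github.io is a suffix -> "friends.github.io"
print(registrable_domain("pages.example.com"))  # -> "example.com"
```

                          With github.io in the set, friends.github.io is itself a registrable domain, which is why browsers isolate it from other *.github.io sites.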

                          1. 7

                            Correct me if I’m wrong, but from what it looks like, this list doesn’t mean anything in terms of DNS and is just a community-maintained text file. Does Google actually review this file before marking domains as bad? I really doubt it, because then spammers would just use domains on that list.

                            1. 1

                              Good point!

                              I was just looking for a familiar example, but actually the PSL might be the root of the issue faced by the author.
                              It reminds me of the master hosts file originally maintained at Stanford: shouldn’t that info be handled at the DNS level?

                              1. 1

                                What do I do if I want to make a competitor to GitHub Pages? Do I have to somehow get big and important enough to have my domain end up on the public suffix list before I can launch my service?

                                What if I want to make a self-hosted GitHub Pages alternative, where users of my software can set it up to let other people use my users’ instance? Do all users of my software have to make sure to get their domain names into the public suffix list?

                                1. 2

                                  No, you have to spend four minutes reading the (very short) documentation that covers how to get on the list, open a PR adding your domain to their repo, and set a DNS record on the domain linking to the PR.

                                  It might even have been quicker to read the docs than to type out the question and post it here.
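                                  For reference, the submission flow the parent describes pairs the pull request with a DNS TXT record at the _psl label pointing at that PR; the domain and PR number below are made-up placeholders:

```dns
; published on the domain being added, proving the owner requested the change
_psl.example-pages.io. IN TXT "https://github.com/publicsuffix/list/pull/1234"
```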

                                  1. 1

                                    You do not have to be big. Adding yourself to the list is a pull request in which you must be able to prove domain ownership.

                                    If you want browsers to consider a domain an effective TLD, you have to tell them.

                            2. 1

                              It should be possible to use Search Console to figure out where the problem is: https://developers.google.com/web/fundamentals/security/hacked/hacked_with_malware

                              Or try some specific URLs here: https://transparencyreport.google.com/safe-browsing/search?url=gw90.de

                              Using the open-source ClamAV anti-virus to scan all the pages you host on the domain might also give some clues.