1. 4

    HTML email is a big mess, security-wise. The fact that such a page exists highlights a problem in itself: nobody really knows what HTML mails are or which features they’re supposed to support.

    If there were a reasonable concept behind HTML mail, there would be a standard defining exactly which subset of HTML is allowed within mails. There is no such thing. The simple question “How do I process an HTML mail so it’s safe to display in a webmail frontend?” has no clear answer. Unsurprisingly, pretty much all webmail frontends suffer from XSS all the time.

    I expanded on this a bit back when efail was found: https://blog.hboeck.de/archives/894-Efail-HTML-Mails-have-no-Security-Concept-and-are-to-blame.html
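    To make that concrete: absent a standard subset, every webmail frontend effectively ends up hand-rolling an allowlist sanitizer like the sketch below. The tag/attribute allowlists are my own assumptions for illustration; production code should use a vetted sanitizing library instead.

```python
# Illustrative allowlist sanitizer -- the tag/attribute sets are assumptions,
# and real webmail frontends should use a vetted sanitizing library.
from html import escape
from html.parser import HTMLParser

ALLOWED_TAGS = {"p", "br", "b", "i", "em", "strong", "a"}
ALLOWED_ATTRS = {"a": {"href"}}

class Sanitizer(HTMLParser):
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag not in ALLOWED_TAGS:
            return  # drop unknown tags (and all their attributes)
        kept = [(k, v) for k, v in attrs
                if k in ALLOWED_ATTRS.get(tag, set())
                and not (v or "").strip().lower().startswith("javascript:")]
        rendered = "".join(f' {k}="{escape(v or "")}"' for k, v in kept)
        self.out.append(f"<{tag}{rendered}>")

    def handle_endtag(self, tag):
        if tag in ALLOWED_TAGS:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(escape(data))  # everything else becomes inert text

def sanitize(html: str) -> str:
    s = Sanitizer()
    s.feed(html)
    s.close()
    return "".join(s.out)

print(sanitize('<p onclick="x()">hi <script>alert(1)</script></p>'))
# → <p>hi alert(1)</p>
```

    Note how many policy decisions even this toy forces (which tags? which URL schemes? what about CSS?) — exactly the questions no standard answers.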

    1. 10

      As David Roberts of vox.com notes over and over, the most important thing you can do in the US is vote Democrat. The only thing that will make a significant difference on a global scale is federal policy change, and the Republicans have shown they have no interest.

      1. 5

        I’m certainly not gonna say you shouldn’t vote for Democrats. But their track record on climate isn’t good. Major Democrats like Nancy Pelosi and Dianne Feinstein have acted in an astonishingly arrogant way towards climate campaigners lately.

        If you want Democrats to act on climate, of course you have to vote Republicans out, but you also have to make sure the people within the Democratic Party who are silent climate deniers (they won’t say so, but they’ll oppose any meaningful action) don’t get the upper hand.

        1. 2

          Who believes that a politician will do what he says? Even more so when it implies going against the autonomous development of Capital. Talk is cheap for all who seek power.

          1. 1

            Both parties are corrupt. They’re corrupt in different ways on some issues. Republicans usually vote against anything that helps in this area. So, you’re right. Their party also votes against consumer protections, workers’ rights, etc. at State and Federal levels. If you vote for them and don’t own a business, you’re voting against yourself. You’re also still voting against yourself if you’re not rich and interact with any other business that might screw you over.

            Another key difference is that Democratic policies mostly waste tons of money, often on welfare, whereas Republicans like to waste tons of money on locking up Americans for victimless crimes and mass-murdering people overseas for often hard-to-justify reasons. That Republicans are more pro-mass-murder… as a party, not necessarily individuals… made me firmly draw a line on not voting Republican. I’d be voting for six digits worth of innocent people to die in a way that benefits rich people (esp defense contractors), leaves our lower-paid soldiers with PTSD or physical disabilities, and puts us in debt that I’ll owe back. I’d rather the debt or financial bullshit be something like getting people education, health insurance, jobs, or good infrastructure. The stuff Democrats like to overspend on.

          1. 0

            See if your local power company lets you buy renewable power. My local utility lets you pay 1¢ extra per kWh on any percentage of your electrical usage for renewable investment.

            I do this at home, and we do it at my business. Costs less than $10/month for my house to go 100% renewable.

            If you’re in the Madison area, check it out: https://www.mge.com/our-environment/green-power/green-power-tomorrow
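            As a back-of-envelope check on those numbers (the 900 kWh/month household usage below is an assumed typical figure, not taken from MGE):

```python
# Back-of-envelope check: a 1 cent/kWh premium on 100% of usage.
# 900 kWh/month is an assumed typical household figure, not from the utility.
monthly_usage_kwh = 900
premium_dollars_per_kwh = 0.01
monthly_premium = monthly_usage_kwh * premium_dollars_per_kwh
print(f"${monthly_premium:.2f}/month")  # → $9.00/month
```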

            1. 3

              This heavily depends on country and availability, but if you buy renewable power from the same company that you bought fossil power from, that’s not really ideal.

              It’s entirely possible that it has zero benefit, because the company probably already has some share of renewables, and they may just virtually shift more of that to you while increasing the virtual fossil share of their other customers.

              Ideally you buy renewable electricity from a company that a) is only selling renewable electricity and b) commits to invest a certain share into new renewable energy production and not just sell from already existing facilities. If you can’t have a) and b) at least strive for one of them.

              1. 1

                I don’t disagree with anything you’ve said, but my utility (Madison Gas & Electric) seems to have a decent plan for going net-zero-carbon. I’d prefer them to move faster, and I hope that voting with my wallet will encourage quicker implementation.

              2. 1

                Or move to Tasmania or New Zealand which are both usually powered > 90% by hydroelectricity.

              1. 20

                Sad :-( I still think Mercurial far better meets the needs of most people, and that the chief reasons for git’s popularity are that Linus Torvalds wrote it, GitHub, and that Linus Torvalds wrote it.

                That said, I did end up switching from BitBucket/mercurial to GitHub/git a few years ago, simply because it’s the more pragmatic thing to do and I was tired of paying the “mercurial penalty” in missed patches and the like. I wrote a thing about it a while ago: https://arp242.net/git-hg.html

                1. 6

                  Why do you think hg is better for most people? I honestly find it vastly more complex to use.

                  1. 15

                    The hg cli is light years ahead of git in terms of intuitiveness.

                    1. 6

                      I’d say it’s years behind ;)

                      1. 10

                        How long have you been using Mercurial? I find most people who dislike Mercurial’s UI are mainly coming from years of experience with Git. I disliked Mercurial at first as well, but after a few years of forced usage it clicked. Now I appreciate how simple and well composed it is and get frustrated whenever I need to look up some arcane Git flag on StackOverflow.

                        In general, I’d say you need several years experience with both Git and Mercurial before you can draw a fair comparison.

                        1. 3

                          I used mercurial for about 2 years before using git.

                          1. 3

                            Sorry if my post came across a bit accusatory (not my intent). In that case I guess to each their own :).

                          2. 3

                            but after a few years of forced usage it clicked.

                            I’m pretty sure that git clicked for me in a much shorter timeframe.

                            1. 1

                              Me too, but I know many (otherwise perfectly competent) engineers 5-10 years in who still don’t get it and aren’t likely to.

                          3. 9

                            I’m going to strongly disagree. I’ve used git intensively and I find Mercurial to be a well-designed delight. I’ve run across features that Mercurial supports flawlessly, with a nice UI, and Git requires a hacky filter-branch that takes hours to run and doesn’t even behave correctly.

                            IMO, a lot of the badness in projects is down to Git badness. It doesn’t scale, and people feel compelled to break things down into tiny sub-projects.

                            The only reason Git is winning anything is GitHub’s support of it.

                            1. 3

                              The only reason Git is winning anything is GitHub’s support of it.

                              Why then was github ever used in the first place? Kind of a strange proposition.

                              1. 1

                                Network effect of the social network is pretty important.

                                1. 1

                                  Why would there ever be a network effect in the first place if git was so bad that GitHub was the only reason to use it? I get that the argument technically holds, but it seems very unlikely.

                        2. 8

                          You find mercurial more complex to use than git? That’s an… unusual view, to say the least. The usual recitation of benefits goes something like this:

                          • Orthogonal functionality in hg mostly has orthogonal commands (compare git commit, which does a half-dozen essentially unrelated different things).
                          • hg has a somewhat more uniform CLI (compare git branch -a, git remote -v, git stash list).
                          • hg either lacks or hides a bunch of purportedly-inessential and potentially confusing git functionality (off the top of my head, partial commits aren’t baked into the flow a la git’s index/staging area; and rebasing and history rewriting are hidden behind an extension).

                          I personally prefer git, but not because I think it’s easier or simpler; I’m more familiar with it, and I find many of those purportedly-inessential functions to be merely purportedly, not actually, inessential.

                          1. 5

                            One more thing I like about mercurial is that the default set of commands is enough for >90% of people, and everything else is “hidden” in extensions. This is a very different approach from git’s “kitchen-sink” approach, which gives people 170 commands (vs. Mercurial’s 50, most of which also have far fewer options/switches than git’s).

                            Git very much feels like “bloatware” compared to Mercurial.

                            1. 3

                              I used git for many years, and then mercurial (at FB) ever since we switched over. The cli interface for mercurial is definitely more sensible, crecord is delightful, and overall it was fine. But I was never able to build a mental model of how mercurial actually worked. git has a terrible interface, but it’s actually really simple underneath.

                              1. 1

                                I didn’t think that underneath they were different enough to matter much. What differences do you mean? I guess there’s git’s remote tracking stuff. Generally, it seems like they differ in how to refer to and track commits and topological branches, locally and remotely. (IMHO, neither has great mechanisms for all the things I want to do.) Mercurial is slightly more complex with the manifest, git is more complex with the staging area that feels absolutely critical until you don’t have it (by using hg), at which time you wonder why anyone bothers with it. I’m a heavier hg user than git user, but that’s about all I can come up with.

                              2. 2

                                You find mercurial more complex to use than git?

                                I actually found – in a professional shop – mercurial far more complex to use. Now, the fact is that mercurial’s core – vanilla hg – is IMHO without doubt vastly superior to git. Git keeps trying to make the porcelain less painful (including in a release just a bit ago) – but I still think it is ages behind.

                                The problem is – I never used vanilla mercurial in a professional environment. Not once. It was always mercurial++ (we used $X extension and $Y extension and do it like $Z), which meant even if I knew hg, I felt painfully inexperienced because I didn’t know mq, share, attic, collapse, evolve, and more… not to mention both the bigger shops I worked with using mercurial had completely custom workflow extensions. I suspect part of this was just the ease of writing mercurial extensions, and part of it was wanting to fall into a flow they knew (mq, collapse). But, regardless of how we got there, at each place I effectively felt like I had to relearn how to use the version control system entirely.

                                As opposed to git, wherein I can just drop in and work from day one. It might be less clean, it might be more finicky and enable things like history rewriting by default. But at the end of the day, the day I start, I know how to generally function.

                                I am curious how Mercurial would have fared if, instead of shipping default extensions you had to turn on, they had just baked in a little more functionality to cover the 80% of what most shops wanted (not needed – I think most could have gotten by with what vanilla mercurial had) – whether the shop-to-shop transition would have been easier.

                                1. 2

                                  mq, I think, is responsible for many of the “mercurial is too complicated” complaints people have. Evolve, if it ever stabilizes and ships with core hg, would really enable some killer capabilities. Sadly, for social and technical reasons, it’s perpetually in beta.

                                2. 1

                                  Whoa, no index? Admittedly I didn’t really use the index as intended for several years, but now it’s an important part of my workflow.

                                  1. 1

                                    In Mercurial, commits are so much easier to make and manipulate (split, fold, move), that you don’t miss the index. The index in git is just a limited special cased “commit”.

                                    1. 3

                                      The index in git is just a limited special cased “commit”.

                                      I disagree.

                                      The index is a useful way to say “these lines of code are ready to go”. If you are making a big commit, it can be helpful to add changes in logical blocks to the index as you go. Then the diff is not polluted with stuff you know is already fine to commit.

                                      You might say, “why not just make those changes their own commits, instead of trying to do one big commit?” That’s a valid question if you are talking about a 200 line commit or similar, but sometimes the “big” commit is only 50 lines. Instead of making a bunch of one line or few line commits, it’s helpful to “git add” small chunks, then commit at the end.

                                      1. 0

                                        You can just as well amend a commit instead of adding to the index.

                                        1. 3

                                          True, but all that’s doing is bastardizing the commit process. If you are committing a one line change, just to rebase minutes or hours later, that’s not a commit.

                                          Rebase to me is for commits that were intended to be commits, but later I decided it would be better to squash or change the history. The index is for changes that are never meant to be a full commit on their own.

                                          1. 1

                                            Having a distinction between draft and published phases in mercurial I think makes it easier to rewrite WIP work. There are also a number of UI affordances for it. I don’t miss the index using mercurial. There’s also academic user interface research that shows the index is a big conceptual barrier for new users.

                                            1. 1

                                              There’s also academic user interface research that shows the index is a big conceptual barrier for new users.

                                              This isn’t really a valid point in my opinion. Some concepts are just difficult. If some goal can be achieved in a simpler way I am on board, but I am not a fan of removing useful features because they are hard to understand.

                                              1. 1

                                                But the point is the index is hard to understand and unnecessary.

                                                There’s no need to have a “commit process”. Just commit whatever you want and rewrite/amend it for as long as you want. As long as your commits are drafts, this is fine.

                                                Is the problem the word “commit”? Does it sound too much like commitment?

                                                There’s no need to have two separate ways to record changes, an index, and a commit, each with different degrees of commitments. This is multiplying entities beyond necessity.

                                                1. 1

                                                  That’s your opinion. The index is quite useful to me. I’d rather make a proper commit once it’s ready, not hack together a bunch of one line commits after the fact.

                                                  1. 2

                                                    The index is a commit. Why have two separate ways of storing the same sort of thing?

                                                    Also, it’s not my opinion that it’s hard to understand and unnecessary; it’s the result of usability studies:


                                                    You’re also not “hacking together” anything after the fact. There’s no more hacking together after the fact whether you use git amend (hypothetically) or git add. Both of those mean, “record additional changes”.

                                                    1. 0

                                                      It seems you have a fundamental misunderstanding of the difference between add and commit. Commit requires a commit message.

                                                      1. 1

                                                        This isn’t a useful distinction. You can also create commits with empty commit messages in both git and Mercurial.

                                                        With both git and Mercurial you can also amend commit messages after the fact. The index in git could well be implemented as a commit with an empty commit message that you keep amending and you wouldn’t notice the difference at all.

                                                        1. 1

                                                          you keep amending and you wouldn’t notice the difference at all.

                                                          Yeah, you would. Again, it seems that you either don’t know git, or haven’t used it in some time. When you amend a commit, you are prompted to amend the message as well. Another facet that doesn’t exist with git add, because add doesn’t involve a message.

                                                          If you wish to contort git internals to suit your agenda that’s fine, but git add has perfectly valid use cases.

                                                          1. 0

                                                            you are prompted to amend the message as well.

                                                            This is UI clutter unrelated to the underlying concepts. You can get around that with wrappers and aliases. I spoke of a hypothetical git amend above that could be an alias that avoids prompting for a commit message.

                                                            Don’t git users like to say how the UI is incidental? That once you understand the data structures, everything else is easy? The UI seems to have locked you into believing the index is a fundamentally necessary concept, but it’s not. It’s an artifact of the UI.

                                                            1. 1

                                                              The UI seems to have locked you into believing the index is a fundamentally necessary concept, but it’s not.

                                                              Nothing has locked me into believing it’s a necessary concept. It’s not necessary. In fact, for about 7 years I didn’t use the index in any meaningful way.

                                                              I think what you are missing is that I’m not compelled to use it because it’s the default workflow; I am compelled to use it because it’s useful. It helps me accomplish work more smoothly than I did previously, when I would just make a bunch of tiny commits because I didn’t understand the point of the index, as you still don’t.

                                                              The argument could be made to move the index into an option, to somehow make commit-only the default workflow. I’m not sure what that would look like with Git, but I don’t think it’s a good idea. It would just encourage people to make a bunch of smaller commits with meaningless commit messages.

                                                        2. 1

                                                          You have a set of things you want to accomplish. With git, you have N+1 concepts/features/tools to work with. With hg, you have N (because you drop the index). That means you have to expand your usage of the remaining N.

                                                          Specifically, since you no longer have this extra index concept, you now expand commits to cover the scenarios you need. Normally, you’d make an initial commit and then amend a piece at a time (probably with the interactive curses hunk selector, which is awesome.) If you’re unsure about some pieces, or you have multiple things going on that you’d like to end up in separate commits, you can always make a series of microcommits and then selectively collapse them later. (In practice, it’s even easier than this, because of the absorb extension. But never mind that.)

                                                          Yes, those microcommits need commit messages. They don’t need to be good ones, because they’re temporary until you squash them out of existence. I usually use a one word tag to specify which of the separate final commits they belong to. (If you don’t have separate final commits, you may as well amend, in which case no messages are needed.)

                                                          …or on the other hand, maybe mercurial ends up with N+1 concepts too, because phases really help in keeping things separate. As I understand it, one reason git users love the index is because it keeps rapidly changing, work in progress stuff separate from mostly set in stone commits. Phases perform the same purpose, but more flexibly, and the concepts are more orthogonal so they compose better. In my opinion.

                                3. 6

                                  I never particularly liked git and find it unintuitive, too.

                                  I wouldn’t consider myself a git poweruser. But whenever I had to work with alternatives I got the feeling that they’re just inferior versions of git. Yeah, maybe the usage was a bit more intuitive, but all of them seemed to lack things that I’d consider really basic (bisecting – hg has that, but e.g. svn does not – and shallow clones – not available in hg – are examples of what I often miss).

                                  1. 3

                                    Mercurial was actually my first DVCS, and like you I ended up switching to git not out of a sense that it was technically better, just more pragmatic. For me, the change is more of a mixed bag, though. It is definitely the case that Mercurial’s UI is worlds better, and revsets in particular are an amazing feature that I sorely miss, but when I made the switch I found that the way git handles branches was much more intuitive to me than Mercurial’s branch/bookmark system, and that the affordances around selectively editing commit histories were very much worth the risk in terms of being able to manage the narrative of a project’s history in a way that makes it more legible to others. Ultimately, I found that git’s advantages outweighed its downsides for my use case, since learning its UI idiosyncrasies was a one-time cost and since managing branches is a much more common occurrence for me than using revsets. That said, I think this is a really unfortunate development.

                                    1. 2

                                      I occasionally convert people’s git repos to hg for my use. Stubborn like that.

                                    1. 4

                                      The opening comments - particularly about print/parse round trips etc. - suggest a link between fuzzing and property-based testing that I’d love to see explored more. I know that a fuzzer based on Haskell QuickCheck exists but haven’t played with it.

                                      1. 4

                                        Properties are specifications: what your program is supposed to do. Other names include models and contracts. The code itself is how you attempted to do it. Tests generated from them naturally check the how against the what. Finally, you or your tools can convert each property to a runtime check in the code before fuzzing it. That takes you right to the point of failure.

                                        Design-by-Contract, contract-based test generation, and fuzzing with contracts as runtime checks is a combo that should work across about any language. Add static/dynamic analysis with low false positives if your language has them. Run this stuff overnight to get more CPU time fuzzing without dragging down performance of your system while you use it.
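                                        A minimal, dependency-free sketch of that combo in Python — my_sort and its contract are illustrative assumptions, not anything from this thread:

```python
# A property ("output is an ordered permutation of the input") acting both
# as a runtime contract and as the oracle for generated tests.
# my_sort is an illustrative stand-in for "the code".
import random

def my_sort(xs):
    return sorted(xs)

def contract(xs, ys):
    # the "what": ordered, and a permutation of the input
    assert all(a <= b for a, b in zip(ys, ys[1:])), "output not ordered"
    assert sorted(xs) == sorted(ys), "output not a permutation of the input"

rng = random.Random(0)
for _ in range(1000):
    xs = [rng.randint(-50, 50) for _ in range(rng.randint(0, 20))]
    contract(xs, my_sort(xs))  # generated test: random input, property oracle
print("1000 generated cases passed")
```

                                        Swapping in a buggy my_sort makes the contract fail at the exact generated input — the “right to the point of failure” part.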

                                        1. 2

                                          There are a couple of papers on Targeted PBT, essentially adding argMax semantics to (at least an Erlang) QuickCheck lib. One can say “test this property using this somewhat non-trivial generator and also try to maximize code coverage, as this may help the generation of interesting values”. This is exactly what I did in this proof of concept [1]. It indeed finds counterexamples faster than the non-maximizing code. In this PoC the non-maximizing version often doesn’t find anything at all.

                                          I have discovered a passion for this technology and (plug!) am building what will essentially be a language-agnostic PBT/fuzzing tool and hopefully SaaS at [2]!

                                          [1] https://github.com/fenollp/coverage_targeted_property_testing

                                          [2] https://github.com/FuzzyMonkeyCo/monkey
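                                          A toy sketch of the argMax idea — the utility function below is a made-up stand-in for coverage feedback, not anything from the linked PoC:

```python
# Toy "targeted" generation: mutate inputs, keep whatever maximizes a
# utility score. utility() is a made-up stand-in for coverage feedback.
import random

def utility(x):
    return -abs(x - 1000)  # pretend inputs near 1000 exercise interesting code

def targeted_search(steps=200, seed=0):
    rng = random.Random(seed)
    best = rng.randint(-10_000, 10_000)
    for _ in range(steps):
        candidate = best + rng.randint(-100, 100)
        if utility(candidate) >= utility(best):
            best = candidate  # hill-climb: argMax over the inputs seen so far
    return best

print(targeted_search())
```

                                          A plain random generator would rarely land near the “interesting” region; the maximizing loop walks toward it.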

                                          1. 1

                                            The way I use the terms, the link is quite simple: both are instances of automated tests with generated input data, but with property based testing, there is a relatively strong oracle, whereas with fuzzing, the oracle is limited to “did it crash?”

                                            This might be slightly different to how the author here uses the terms, though.

                                            1. 4

                                              Your point about the oracle is the biggest difference; I would expand that to: property-based testing can give you statistical guarantees, which means that it tries to sample your program’s input space according to some pre-defined probability distribution. It doesn’t particularly care about things like coverage either (and as far as I understand it, property-based testing should not use feedback — but lines are blurring[1]).

                                              Fuzzing, on the other hand, does not particularly care about statistical guarantees (not that you can’t make it, but typically it is not done). All it cares about is “can I exercise interesting code that is likely to invoke interesting behaviors”. So, while we use coverage as feedback for fuzzing, it is OK to leave aside parts of the program that are not interesting enough.

                                              At the end of the day, I would say the similarities are that both are test generation tools (which also include things like Randoop and Evosuite which are neither fuzzers nor property checkers).

                                              [1] ArbitCheck: A Highly Automated Property-Based Testing Tool for Java

                                              1. 3

                                                I used afl fuzzing to find bugs in math libraries, see e.g. [1] (i.e. things like “divide input a by b with two different libraries, see if the results match, otherwise throw an assertion error”). So you can get the “strong oracle” with fuzzing. I guess you can’t really draw a strong line between “fuzzing” and “property-based testing”; it’s just different levels of test conditions. I.e. “doesn’t crash” is also a “property” you can test for.

                                                [1] https://www.mozilla.org/en-US/security/advisories/mfsa2016-07/
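                                                The differential-oracle pattern described above, sketched in Python — both integer square root implementations are illustrative stand-ins for the “two different libraries”:

```python
# Differential oracle: run the same operation through two implementations
# and assert the results match. Both isqrt variants here are illustrative
# stand-ins for "two different libraries".
import math
import random

def isqrt_a(n):
    return math.isqrt(n)

def isqrt_b(n):
    if n == 0:
        return 0
    x = n                 # independent reimplementation (Newton's method)
    y = (x + 1) // 2
    while y < x:
        x = y
        y = (x + n // x) // 2
    return x

rng = random.Random(42)
for _ in range(10_000):
    n = rng.randint(0, 10**18)
    assert isqrt_a(n) == isqrt_b(n), f"mismatch at n={n}"
print("no mismatches in 10,000 random inputs")
```

                                                A real fuzzer feeds the harness far weirder inputs than this random loop, but the oracle — “the two results must agree” — is the same.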

                                                1. 2

                                                  The original twitter thread where he solicited ideas about how to write fuzzable code had a conversation about how PBT and fuzzing relate: https://twitter.com/mgambogi/status/1154913054389178369.

                                                  1. 1

                                                    Fuzzing does not limit the oracle to “did it crash?” Other oracles (address sanitizers, for example) are quite common.

                                                    There’s obviously some overlap between fuzzing and property based testing, but:

                                                    Fuzzing tends to work on the whole application, or a substantial part of it, at once. PBT is typically limited to a single function, although both fuzzing and PBT are useful in different scopes.

                                                    Fuzzing tends to run for weeks on multiple CPUs, whereas PBT tends to run alongside unit tests, quickly.

                                                    Fuzzing (often!) tends to use profile guidance, whereas PBT does not.

                                                1. 4

                                                  I’m happy to see FTP die. But aren’t some websites still providing download links over FTP? I think it was just a year ago that I noticed I was downloading an ISO file from an FTP server…

                                                  1. 9

                                                    There’s nothing wrong with downloading an ISO from an FTP server. You can verify the integrity of a download (as you should) independently of the mechanism (as many package managers do).
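                                                    A sketch of what that independent verification looks like — the filename and digest are placeholders for whatever the project publishes:

```python
# Verifying a download independently of the transport (FTP, HTTP, ...).
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage -- the filename and digest are placeholders for whatever the
# project's release page publishes (ideally over an authenticated channel):
#   assert sha256_of("distro.iso") == "<published sha256 digest>"
```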

                                                    1. 4

                                                      I agree! The same goes for downloading files from plain HTTP, as long as you verify the download you know the file is okay.

                                                      The reason I don’t like FTP has to do with the mode of operation; port 21 as control channel and then a high port for actual data transfer. Also the fact that there is no standard for directory listings (I think DOS-style listings are the most common?).

                                                      1. 2

                                                        The reason there’s no standard for directory listings is possibly more to do with the lack of convention on filesystem representation as it took off. Not everything uses the same delimiter, and not everything with a filesystem has files behind it (e.g. Z-Series).

                                                        I absolutely think that in the modern world we should use modern tools, but FTP’s a lot like ed(1): it’s on everything and works pretty much anywhere as a fallback.

                                                        1. 1

                                                          If you compare FTP to ed(1), I’d compare HTTP and SSH to vi(1). Those are also available virtually anywhere.

                                                          1. 1

                                                            According to a tweet by Steven D. Brewer, it seems that at least modern Ubuntu rescue disks only ship nano, but not ed(1) or vi(1)/vim(1).

                                                            1. 1

                                                              Rescue disks are a special case. Space is a premium.

                                                              My VPS running some Ubuntu version does return output from man ed. (I’m not foolish enough to try to run ed itself; I quite like having a usable terminal.)

                                                        2. 1

                                                          Yes, FTP is a vestige of a time when there was no NAT. It was good until the 90s and has been terrible ever since.

                                                        3. 1

                                                          Most people downloading files over FTP using Chrome don’t even know what a hash is, let alone how to verify one.

                                                          1. 1

                                                            That’s not really an argument for disabling FTP support. That’s more of an argument for implementing some form of file hash verification standard tbh.

                                                          2. 1

                                                            There is everything wrong with downloading an ISO over FTP.

                                                            Yeah, you can verify the integrity independently. But it goes against all security best practice to expect that users will do something extra to get security.

                                                            Security should happen automatically whenever possible. Not saying that HTTPS is the perfect way to guarantee secure downloads. But at the very least a) it works without requiring the user to do anything special and b) it protects against trivial man in the middle attacks.

                                                            1. 1

                                                              But it goes against all security best practice to expect that users will do something extra to get security.

                                                              Please don’t use the term “best practice”; it’s a weasel term that makes me feel ill. I can get behind the idea that expecting users to independently verify integrity is downright terrible UX. It’s not an unrealistic expectation that the user is aware of an integrity failure. It’s also not unrealistic that it requires the user to act specifically to gain some demonstrable level of security (in this case integrity).

                                                              To go further, examples that expect users to do something extra to get security (for some values of security) include:

                                                              1. PGP
                                                              2. SSH
                                                              3. 2FA

                                                              Security should happen automatically whenever possible.

                                                              And indeed, it does. Even over FTP

                                                              Not saying that HTTPS is the perfect way to guarantee secure downloads

                                                              That’s good because HTTPS doesn’t guarantee secure downloads at all. That’s not what HTTPS is designed for.

                                                              You’ve confused TLS (a transport security mechanism) with an application protocol built on top of TLS (HTTPS), and what it does with the act of verifying a download (which it doesn’t do). The integrity check in TLS exists for the connection, not the file. It’s a subtle but important difference. If the file itself is compromised (e.g. a malicious file, or one slipped into the web of trust), then TLS won’t help you. When integrity is important, that integrity check needs to occur on the thing requiring integrity.

                                                          3. 7

                                                            You got it backwards.

                                                            Yeah, some sites still offer FTP downloads, even for software, aka code that you’re gonna execute. So it’s a good thing to create some pressure so they change to a more secure download method.

                                                            1. 9

                                                              Secure against what? Let’s consider the possibilities.

                                                              Compromised server. Transport protocol security is irrelevant in that case. Most (all?) known compromised download incidents are of this type.

                                                              Domain hijacking. In that case nothing prevents attacker from also generating a cert that matches the domain, the user would have to verify the cert visually and know what the correct cert is supposed to be—in practice that attack is undetectable.

                                                              MitM attack that directs you to a wrong server. If it’s possible in your network or you are using a malicious ISP, you are already in trouble.

                                                              I would rather see Chrome stop sending your requests to Google if it thinks it’s not a real hostname. Immense effort required to support FTP drains all their resources and keeps them from making this simple improvement, I guess.

                                                              1. 1

                                                                MitM attack that directs you to a wrong server. If it’s possible in your network or you are using a malicious ISP, you are already in trouble.

                                                                How so? (Assuming you mostly use services that have basic security, aka HTTPS.)

                                                                What you call “malicious ISP” can also be called “open wifi” and it’s a very common way for people to get online.

                                                                1. 1

                                                                  The ISP must be sufficiently malicious to know exactly what you are going to download and set up a fake server with modified but plausible-looking versions of the files you want. An attacker with a laptop in an open wifi network doesn’t have the resources to do that.

                                                                  Package managers already have signature verification built in, so the attack is limited to manual downloads. Even with the resources to set up fake servers for a wide range of projects, one could wait a long time for the attack to succeed.

                                                          1. 1

                                                            Patch notes say “TLS 1.0-1.2”.

                                                            Any particular reason for the omission of TLS-1.3?
                                                            Also, I thought TLS-1.0 was considered pretty insecure[1] at this point?

                                                            [1]: from: wikipedia TLS_1.0

                                                            The PCI Council suggested that organizations migrate from TLS 1.0 to TLS 1.1 or higher before June 30, 2018.[20][21] In October 2018, Apple, Google, Microsoft, and Mozilla jointly announced they would deprecate TLS 1.0 and 1.1 in March 2020.
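
                                                            For what it’s worth, refusing the deprecated versions is easy on the client side. A sketch with Python’s stdlib ssl module (assumes Python 3.7+ and a reasonably recent OpenSSL):

```python
import ssl

# A client context that refuses the deprecated TLS 1.0 and 1.1 outright;
# connections will only negotiate TLS 1.2 or newer.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```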

                                                            1. 2

                                                              I don’t think Netflix is focusing on TLS 1.3 because it’s not widely implemented yet. And 1.0 is fallback for older devices. Netflix doesn’t really care so much if someone does a MITM of your movie.

                                                              Edit: I’m sure there are smart TVs with the Netflix app that can’t go newer than TLS 1.0 and Netflix is contractually obligated to keep it functioning for now

                                                              1. 2

                                                                In which way do you think TLS 1.3 is not widely implemented? According to [1] it’s supported by all mainstream browsers in the latest version.

                                                                Things have changed in this regard. For the majority of users these days it’s normal to have a browser that will update itself automatically on a regular basis. I’m pretty sure major sites already see >50% TLS 1.3 traffic.

                                                                 Consider that this is a performance feature. Which means a) you don’t need 100%; if you support it for 80% you’re already doing pretty fine, and b) it seems strange to want the performance of in-kernel TLS and skip the performance benefits of TLS 1.3.

                                                                [1] https://caniuse.com/#feat=tls1-3

                                                                1. 4

                                                                  You’re thinking browsers and I’m thinking devices:

                                                                  AppleTV/iOS - not yet

                                                                  Roku - not yet


                                                                  And who watches Netflix in their browser? In all the years I’ve been a customer I don’t think I’ve ever watched in my browser :)

                                                                  1. 1

                                                                    I occasionally watch Netflix in Firefox on Linux. Not happy about the DRM aspect of it all, but…

                                                                2. 1

                                                                  Ah right, forgot this is a Netflix thing. That makes sense that they would want to support TLS 1.0 for a while yet.
                                                                  Still seems weird to import a possible footgun (TLS-1.0) that will have to be maintained for 5 years (minimum release support guarantee under the new support model?).

                                                                  1. 2

                                                                    Still seems weird to import a possible footgun (TLS-1.0) that will have to be maintained for 5 years (minimum release support guarantee under the new support model?).

                                                                    Like Linux, the key negotiation is still done in userland; it’s just the encryption of packets that is being moved to kernel space, closer to the network driver. I wouldn’t exactly call TLS 1.0 a footgun in that regard.

                                                              1. 6

                                                                So… this is privilege escalation on all Windows versions since XP and it is currently unpatched?

                                                                I don’t know about you, but I run binaries from the internet every workday. I’m not talking about FOSS, either. “Web-based” screen-sharing/conferencing applications that require downloading and executing an .exe come to mind.

                                                                 Update: To be clear, some conferencing solutions require each user to download a unique .exe each time you join a conference, not just once to install something.

                                                                1. 2

                                                                  Seems there is a patch already, see https://twitter.com/taviso/status/1161297483139407873

                                                                  1. 2

                                                                     I don’t know about you, but I run binaries from the internet every workday. I’m not talking about FOSS, either. “Web-based” screen-sharing/conferencing applications that require downloading and executing an .exe come to mind. Update: To be clear, some conferencing solutions require each user to download a unique .exe each time you join a conference, not just once to install something.

                                                                    That sounds like it can’t possibly be secure unless you either trust the people creating this software or you run them in throwaway-VMs. And I wouldn’t trust people creating software that asks you to run random EXEs all the time…

                                                                    1. 1

                                                                      It’s Cisco.

                                                                  1. 8

                                                                    I don’t think there’s anything in this that can’t be explained by strong competition and extreme economies of scale.

                                                                    Competition: It’s just that so many places need developers these days, yet the earnings you see are probably not the random crappy app creation startup, but the large corps. They pay because they want the best developers.

                                                                     Economies of scale: This is, I think, really unique to software and other nonmaterial/digital goods (which also explains high salaries for pop stars, actors etc.). If Amazon develops a new feature it doesn’t really matter a lot in developer costs and time whether they sell it 10 times or 10 million times. But if they sell it 10 million times the cost of the developers becomes quite insignificant.

                                                                    1. 1

                                                                       Last time I checked, none of the SVG optimizer tools produced really good results. I ended up using svgcleaner + svgo to get the best outcome.

                                                                      1. 1

                                                                        I don’t quite understand this. It looks like, for this to be an issue, the attacker has to be able to set the PHP_VALUE env var to whatever they want? Surely you have bigger issues on your hands if attackers can arbitrarily set environment variables?

                                                                        1. 3

                                                                          Okay, I guess I should’ve explained this better.

                                                                          Part of the fastcgi/fpm protocol is to send over the environment of the client. This effectively means this environment variable can be set by the client, i.e. the attacker.

                                                                          This should become clearer if you look at the poc script: https://github.com/hannob/fpmvuln/blob/master/fpmrce
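
                                                          For context, a sketch of why the deployment matters: in a normal setup the webserver decides which FastCGI params reach PHP-FPM, so the FPM port must never be reachable by clients directly. A hypothetical nginx fragment:

```nginx
# Hypothetical nginx vhost fragment. The webserver constructs the FastCGI
# params itself; if port 9000 is reachable from the outside, a client can
# speak the FastCGI protocol directly and inject params such as PHP_VALUE.
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;  # keep bound to localhost / firewalled
}
```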

                                                                        1. 3

                                                                          I have no affiliation to the project but I posted this because it seems like a great solution to the on-going problems with the SKS network, particularly surrounding on-going privacy issues and the abuse of key metadata to post illegal content.

                                                                          The new keyserver seems to finally allow the deletion of keys—this is not possible with SKS—and also identity verification by email is finally supported. They seem to have clean separation for identity and non-identity information in keys and all in all it looks like a great evolution from SKS.

                                                                          1. 3

                                                                            Where do we learn more about the concerns around the SKS network? Sounds interesting and it helps build up the point you present.

                                                                              1. 4

                                                                                The article has some interesting links, which I’ll post for convenience:

                                                                                The SKS Devel mailing list has actually had quite a few discussions about this too lately—a very small sample:

                                                                                  1. 2

                                                                                    The maintainer’s attitude in that first linked ticket is alarming. “The user isn’t supposed to trust us, so there’s no reason not to display bogus data.” Are you kidding me?!

                                                                                    1. 1

                                                                                      Yes, but the bigger problem is that even if they wanted to change it, SKS is without actual developers. There are people who maintain it by fixing small bugs here and there, but the software is completely and utterly bug-ridden (I had the unfortunate “opportunity” to test it).

                                                                                      https://keys.openpgp.org is not mind-blowing¹ but it’s basically a sane keyserver. That something like this only arrives in 2019 shows what a dire situation PGP is in now.

                                                                                      ¹ actually I think it’s lacking a little bit compared to “modern” solutions such as Keybase

                                                                                      1. 2

                                                                                        Even the people that work developing GPG would agree that the situation is sort of bad. Real-world adoption of GPG is almost nil. Support of GPG, say by major email clients, is almost nil. The architecture with the trust model is ‘perfect’ but it’s not user-friendly. GPG-encrypted email traffic is almost not measurable. The code base is apparently a bit of a mess. It needs maybe a bit of funding and probably some less perfect, but more pragmatic and usable strategies of improving security.

                                                                                        1. 2

                                                                                          Agreed with what you said. I spent some time thinking about this and concluded that at the end the problem is mostly in tooling and UX, not inherent to GPG.

                                                                                          As an example: XMPP was described by Google as being “non-mobile friendly” and it took just one person to create a really good mobile XMPP client that can be used by regular people. (I’m using it with my family and it’s better than Hangouts!).

                                                                                          GPG too can be brought back from the dead, but the effort to do that is enormous because there are multiple parties participating. But there are some good things happening: Web Key Directory, easy-to-use web clients, keys.openpgp.org.

                                                                                          Why is it important to work on GPG instead of dumping it for Signal et al.? Because GPG is based on a standard, this is not a “product” that can be sunsetted when investors run away or a manager decides that something else is shiny now.

                                                                                          1. 2

                                                                                            Look at what Keybase is doing. That’s what GPG should have been. Some keyserver that actually verifies things, so that when you get a key with an email address, you know that that email belongs to the person who uploaded the key, unlike the current model, where anyone can upload any key with any data.

                                                                                            The whole web-of-trust thing doesn’t help me when I want to get an email from some person overseas I have never met.

                                                                                            1. 2

                                                                                              That’s what GPG should have been. Some keyserver that actually verifies things, so that when you get a key with an email address, you know that that email belongs to the person who uploaded the key, unlike the current model, where anyone can upload any key with any data.

                                                                                              If I understood the idea correctly the submission is already what you propose (maybe you’re aware of that? Hard to tell through text alone…)

                                                                              1. 1

                                                                                I am an amateur, so maybe someone knowledgeable can chime in on this:

                                                                                Is there any value or additional security in using several insecure hashing algorithms together?

                                                                                For example, if I provide both a SHA1 hash and an MD5 hash for a file, how much more difficult is it to create a collision that satisfies both?

                                                                                1. 4

                                                                                  My knowledge of this is also VERY vague but I think it’s something like, given two algorithms A and B, if you use them both in conjunction the cost of breaking them both is cost(A)+cost(B), whereas an algorithm C can give far better results with the same amount of data. If you had two algorithms that were just as good as SHA1 and produced two 160-bit hashes for a file, it would be 320 bits total and the cost of breaking them both would be 2 * cost_of_breaking_sha1. But if you used a single SHA256 hash (256 bits) instead the cost of breaking it would be, well, the cost of breaking SHA256, which merely based on the size of the key should be 2^96 times harder than breaking SHA1.

                                                                                  Using more bad algorithms gets you a linear increase in difficulty at best, using a better algorithm should get you an exponential increase in difficulty.

                                                                                  1. 3

                                                                                    A combination of SHA1+MD5 is only marginally more secure than SHA1. Here’s someone explaining the math behind it: https://crypto.stackexchange.com/questions/36988/how-hard-is-it-to-generate-a-simultaneous-md5-and-sha1-collision

                                                                                    That said: Why would you want to do that? Why use 2 insecure functions when you can just use a secure one?
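
                                                                                    For concreteness, here is what the two options look like with Python’s hashlib (illustrative data only):

```python
import hashlib

data = b"example download"

# Concatenating two weak digests gives 288 bits of output, but per the
# linked analysis the collision resistance is only marginally better
# than SHA-1 alone.
combined = hashlib.md5(data).hexdigest() + hashlib.sha1(data).hexdigest()

# A single modern hash is shorter to handle and actually secure.
strong = hashlib.sha256(data).hexdigest()

print(len(combined), len(strong))  # prints: 72 64
```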

                                                                                    1. 1

                                                                                      Mainly for backwards-compatibility. If a system uses SHA-1 for identifiers, you could keep doing that, and have an extra sanity check for raising red flags.

                                                                                      Then again, you might as well use SHA3 for that sanity check, now that I think about it.

                                                                                    2. 1

                                                                                      I’m going to say no. When it comes to crypto, don’t try to be clever if you aren’t a crypto expert. Just do the simple thing and use the standard algorithms in the most direct and obvious way.

                                                                                      1. 1

                                                                                        I’m also not an expert but this reminds me of Dave’s False Maxim on the Information Security Stack Exchange. Not 100% sure it applies although it’s still funny either way :P

                                                                                      1. 18

                                                                                        This text doesn’t have a date, but given you sound like you just wrote it before posting: No, please don’t make webpages compatible with IE.

                                                                                        I’m fine with the rest; of course, please make webpages that are compatible, standards-compliant, and run in all modern browsers. Modern browsers, not deprecated browsers. IE has been undeveloped for years, it has basically no security support, it’s bad. Making IE users’ lives miserable is a good thing for the future of the web, because the sooner it’s miserable enough that they stop using it, the better.

                                                                                        I want a web that’s not dominated by Google, but I also want a web that’s capable of getting new features and getting rid of cruft and insecurity.

                                                                                        1. 22

                                                                                          I have to disagree; people rarely use it by choice.

                                                                                          You’re doing the equivalent of blaming a wheelchair user for requiring a ramp.

                                                                                          1. 11

                                                                                            By supporting those legacy users, you are letting their businesses externalize the costs of not upgrading onto the rest of the web. A car analogy may help.

                                                                                            Imagine there’s a person who has a horse, and really likes the reins they put on the horse. They use the horse to get to town and do their errands. One day, cars arrive, and eventually get so cheap that people are just giving away cars. But, the owner has spent money on a picture of their horse and a saddle and by God they’re not just gonna throw away that investment in favor of some new-fangled car.

                                                                                            That’s all well and good, but the associated requirement to have every parking garage and restaurant provide a place to tie up the horse, have a trough for the horse to drink from, and have some way of dealing with the waste–all to support that one person–would rightly be mocked and pilloried.

                                                                                            And don’t even get me started on the engineering issues that limit how highways and streets and bridges must be built to accommodate the horse as well as cars, which then degrade the performance of those things for car users.

                                                                                            All so that some jerk can hold on to Mr. Ed until they can recoup their investment at a glue factory.

                                                                                            1. 1

                                                                                              You are describing trying to achieve equal functionality for the horse rider, whereas all that is necessary is not disallowing them road access, and a reasonable allowance for graceful degradation, which is sooner or later necessary for humans walking the road anyway, aka other browsers you may not even be aware of. IE is a red herring in a sea of other browsers that lazy devs use as an excuse to not design properly, IMO. I use NN3 and Lynx as my accessibility baseline, and it has not hindered me a whole lot.

                                                                                              It is not business users who are using IE at this point, but elderly and disadvantaged, who don’t have the means to upgrade… and you are basically telling them to fuck off. It’s okay though, I’d wager you’re working on either CRUD or advertising anyway, so no big loss.


                                                                                              1. 2

                                                                                                It is not business users who are using IE at this point, but elderly and disadvantaged, who don’t have the means to upgrade… and you are basically telling them to fuck off.

                                                                                                How are we sure? For a good long time, the biggest users of IE were from pirated copies of Windows XP in China.

                                                                                                1. 0

                                                                                                  Is there a good reason I would want to exclude users of pirated Windows XP from China from my website?

                                                                                                  1. 2

                                                                                                    If your complaint is “Think of the old people!” and the numbers suggest “well, it’s actually Chinese software pirates” you need to justify your position differently. Anyways, we’ve taken up enough space here.

                                                                                                    1. 2

                                                                                                      I don’t understand what you’re saying…

                                                                                                      Is it that because the majority of IE users have been shown to be from China, it means there aren’t any elderly using IE?

                                                                                                      Or that it’s OK to exclude Chinese IE users? Because they’re from China? Or because they’re using pirated Windows XP on their aged computers?

                                                                                                      Or something else, besides that it’s OK to exclude certain users because writing for more than one browser is challenging, perhaps too difficult for you to figure out?

                                                                                                      Or maybe that, since you’ve run out of arguments, this discussion should end automatically?

                                                                                                      Could you please clarify?

                                                                                                      1. 1

                                                                                                        If you’re against excluding users it hardly matters which users you might be excluding, since you don’t want to exclude any users. If you support excluding users their demographic matters a fair bit. In that context the differing opinions here make a lot of sense to me.

                                                                                                        1. 1

                                                                                                          I am definitely in the former camp. I think it is completely reasonable to expect being able to include most users, including no-JS, 5-year-old browsers, 10-year-old browsers, and that 486 you just dug out of the closet. Few bother to actually do it, but who cares?

                                                                                                          For example, this website could be accessible to users without JavaScript if only it had been designed that way from the start, or, less efficiently, extended to support them and tested thoroughly. But who cares? It’s all technically-minded people who can enable JS, and the JS is pretty light, so there are few complainers.

                                                                                                          But that is the difference between, for example, Hacker News, where upvoting something with JS disabled will send you to a page that registers your vote and then redirects you to the same page plus an anchor pointing to the comment you voted on, and lobste.rs, where you get a disappointing lack of response.

                                                                                                          1. 6

                                                                                                            HN is the marketing arm of a firm worth billions.

                                                                                                            Lobste.rs is a community-run volunteer effort; I’m sure a pull request would be accepted.

                                                                                                            1. 2

                                                                                                              Good point. I’ll start looking.

                                                                                          2. 8

                                                                                            If you want to support all web users, make one version of your app with all the bells and whistles, and make sure that it works with all modern browsers. But, make another “light” version that works without JS or fancy CSS, and make it easy to fall back to that. The “light” version will work on IE as well as other old or weird browsers, and the main version can lose cruft and use newer browser features with less effort. The “light” version should be easy to maintain. It only needs to support essential app functionality, in the most boring way possible.

                                                                                            I know, many developers can’t just decide to do this. But it’s an option that decision makers should be aware of: show them Gmail’s “Basic HTML” version, for example.

                                                                                            1. 13

                                                                                              If you build the light version first, and then add the bells and whistles, much happiness will come your way.

                                                                                              1. 9

                                                                                                Agree. For those who weren’t working in the field 10 years ago when this was cool, here is some more info: https://en.m.wikipedia.org/wiki/Progressive_enhancement

                                                                                                My attempt at summarizing it: JavaScript is only allowed to enhance things that already work without JavaScript. E.g. you start by coding a plain HTML dropdown and then add JS to swap in an autocomplete if the browser supports it. The same goes for styling: it should be OK in the oldest supported browser, but it’s fine to add more advanced CSS for browsers that support it.

                                                                                              2. 4

                                                                                                I just submitted a good example of that in response to this comment.

                                                                                                1. 2

                                                                                                  It’s worth pointing out that Gmail’s Basic HTML is amazing and much better than their “nice” UI in my opinion.

                                                                                              1. 17

                                                                                                The problem is we have two bad solutions, but bad for different reasons. Neither of them works transparently for the user.

                                                                                                GnuPG was built by nerds who thought you could explain the Web of Trust to a normal human being. S/MIME was built to create a business model for CAs. You have to get a cert from somewhere and pay for it. (Also for encryption S/MIME is just broken, but we’re talking signatures here, so…) And yeah, I know there are options to get one for free, but the issue is, it’s not automated.

                                                                                                Some people here compare it to HTTPS. There’s just no tech like HTTPS for email. HTTPS from the user side works completely transparently and for the web admin it’s getting much easier with ACME and Let’s Encrypt.

                                                                                                1. 7

                                                                                                  We don’t need WoT here though. WoT exists so you can send me a signed/encrypted email. Nice, but that’s not what’s needed here.

                                                                                                  1. 3

                                                                                                    Of course you need some measure of trust like a WoT or CA, because how else are you going to verify that the sender is legitimate? Without that you can only really do xkcd authentication.

                                                                                                    1. 5

                                                                                                      Yes, you need some way to determine what you trust; but WoT states that if you trust Alice and I trust you, then I also trust Alice, and then eventually this web will be large enough I’ll be able to verify emails from everyone.

                                                                                                      But that’s not the goal here; I just want to verify a bunch of organisations I communicate with; like, say, my government.

                                                                                                      I think that maybe we’ve been too distracted with building a generic solution here.

                                                                                                      Also see my reply to your other post for some possible alternatives: https://lobste.rs/s/1cxqho/why_is_no_one_signing_their_emails#c_mllanb

                                                                                                      1. 1

                                                                                                        Trust On First Use goes a long way, especially when you have encryption (all its faults notwithstanding) and the communication is bidirectional, as the recipient will notice that something is off if you use the wrong key to encrypt for them.
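
                                                                                                        A minimal sketch of that trust-on-first-use idea (all names illustrative): pin the sender’s key fingerprint the first time you see it, and flag any later change.

                                                                                                        ```python
                                                                                                        import hashlib

                                                                                                        class TOFUStore:
                                                                                                            """Trust On First Use: pin a sender's key fingerprint the first
                                                                                                            time it is seen, and flag any later change (illustrative sketch)."""
                                                                                                            def __init__(self):
                                                                                                                self.pins = {}  # sender -> pinned fingerprint

                                                                                                            def check(self, sender: str, public_key: bytes) -> str:
                                                                                                                fp = hashlib.sha256(public_key).hexdigest()
                                                                                                                if sender not in self.pins:
                                                                                                                    self.pins[sender] = fp   # first contact: trust and pin
                                                                                                                    return "pinned"
                                                                                                                return "ok" if self.pins[sender] == fp else "key changed"
                                                                                                        ```

                                                                                                        A real client would persist the pins and make the “key changed” case a loud, hard-to-dismiss warning, since that is exactly the event TOFU exists to catch.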

                                                                                                    2. 1

                                                                                                      Also for encryption S/MIME is just broken

                                                                                                      It is? How?

                                                                                                      1. 2

                                                                                                        The vulnerability published last year was dubbed EFAIL.

                                                                                                        1. 1

                                                                                                          Gotcha. Interesting read. I’ll summarize for anyone who doesn’t want to read the paper.

                                                                                                          The attack on S/MIME is a known plaintext attack that guesses—almost always correctly—that the encrypted message starts with “Content-type: multipart/signed”. You then can derive the initial parameters of the CBC encryption mode, and prepend valid encrypted data to the message, that will chain properly to the remainder of the message.

                                                                                                          To exfiltrate the message contents you prepend HTML that will send the contents of the message to a remote server, like an <img> tag with src="http://example-attacker-domain.com/ without a closing quote. When the email client loads images, it sends a request to the attacking server containing the fully decrypted contents of the message.

                                                                                                          S/MIME relies on the enclosed signature for authenticity AND integrity, rather than using an authenticated encryption scheme that guarantees the integrity of the encrypted message before decryption. Email clients show you the signature is invalid when you open the message, but still render the altered HTML. To stop this attack clients must refuse to render messages with invalid signatures, with no option for user override. According to their tests, no clients do this. The only existing email clients immune to the attack seem to be those that don’t know how to render HTML in the first place.

                                                                                                          The GPG attack is similar. Unlike S/MIME, GPG includes a modification detection code (MDC). The attack on GPG thus relies on a buggy client ignoring errors validating the MDC, like accepting messages with the MDC stripped out, or even accepting messages with an incorrect MDC. A shocking 10 out of 28 clients tested had an exploitable form of this bug, including the popular Enigmail plugin.
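
                                                                                                          To make the CBC malleability concrete, here is a toy demonstration. It is not EFAIL itself: it uses a stand-in XOR “block cipher” instead of AES (the trick relies only on the CBC chaining, not on the cipher), and it flips bits in the IV instead of prepending blocks, but the XOR arithmetic is the same.

                                                                                                          ```python
                                                                                                          import os

                                                                                                          BLOCK = 16
                                                                                                          KEY = os.urandom(BLOCK)

                                                                                                          def E(b):   # toy "block cipher": XOR with the key (stand-in for AES)
                                                                                                              return bytes(x ^ k for x, k in zip(b, KEY))

                                                                                                          D = E       # an XOR cipher is its own inverse

                                                                                                          def cbc_encrypt(iv, pt):
                                                                                                              prev, out = iv, []
                                                                                                              for i in range(0, len(pt), BLOCK):
                                                                                                                  c = E(bytes(a ^ b for a, b in zip(pt[i:i+BLOCK], prev)))
                                                                                                                  out.append(c)
                                                                                                                  prev = c
                                                                                                              return b"".join(out)

                                                                                                          def cbc_decrypt(iv, ct):
                                                                                                              prev, out = iv, []
                                                                                                              for i in range(0, len(ct), BLOCK):
                                                                                                                  c = ct[i:i+BLOCK]
                                                                                                                  out.append(bytes(a ^ b for a, b in zip(D(c), prev)))
                                                                                                                  prev = c
                                                                                                              return b"".join(out)

                                                                                                          iv = os.urandom(BLOCK)
                                                                                                          ct = cbc_encrypt(iv, b"Content-type: multipart/signed;.")

                                                                                                          known  = b"Content-type: mu"   # guessed plaintext of the first block
                                                                                                          wanted = b'<img src="http:/'   # attacker's replacement, same length

                                                                                                          # Flip bits in the IV so the first block decrypts to the attacker's choice.
                                                                                                          evil_iv = bytes(i ^ a ^ b for i, a, b in zip(iv, known, wanted))
                                                                                                          print(cbc_decrypt(evil_iv, ct)[:BLOCK])   # b'<img src="http:/'
                                                                                                          ```

                                                                                                          Prepending chosen blocks works the same way, because each CBC block is XORed with the previous ciphertext block during decryption, and the attacker controls those too.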

                                                                                                    1. 1

                                                                                                      If this is similar in severity to Poodle, does that mean all CBC ciphers are doomed?

                                                                                                      This will be a major problem for many users, since (AFAIK) this will disable usage of TLS below 1.2. It would be nice if someone could clarify :)

                                                                                                      1. 3

                                                                                                        These are implementation flaws, but they’re caused by a spec that’s hard to get right.

                                                                                                        You can implement CBC mode in a secure way. It’s complicated. The underlying vulnerability has been known for 13 years and people still don’t implement it correctly.

                                                                                                      1. 1

                                                                                                        This doesn’t sound convincing. We know that all those legacy RNGs are shitty and we should use CSPRNGs. They compare themselves to plenty of known bad algs and then say theirs is better, where only one of the ones in the table is actually serious (ChaCha20).

                                                                                                        They say Prediction Difficulty of their construction is “Challenging”, while for ChaCha20 it’s “Secure”. On a quick skim of the page I don’t find an explanation of what “Challenging” means, but it sounds to me like “not really secure”. The downsides of ChaCha20 they imply are questionable. They say it’s “Fairly Slow”; I beg to differ. I really don’t care about 0,1 kb of space usage. “Complex” is somewhat of an argument, but if I need a bit of complexity to get proper security I’ll take it.

                                                                                                        1. 1

                                                                                                          This doesn’t sound convincing. We know that all those legacy RNGs are shitty and we should use CSPRNGs. They compare themselves to plenty of known bad algs and then say theirs is better, where only one of the ones in the table is actually serious (ChaCha20).

                                                                                                          She compares the algorithms with other popularly used algorithms. You might think that ‘all those legacy RNGs are shitty’ and that ‘we’ should use CSPRNGs, but you don’t speak for everyone, and there are lots of good reasons to not use CSPRNGs.

                                                                                                          They say Prediction Difficulty of their construction is “Challenging”, while for ChaCha20 it’s “Secure”. On a quick skim of the page I don’t find an explanation of what “Challenging” means, but it sounds to me like “not really secure”.

                                                                                                          You can’t claim something is secure until it’s been thoroughly tested, but there’s no evidence that PCG is any less secure than ChaCha20.

                                                                                                          The downsides of ChaCha20 they imply are questionable. They say it’s “Fairly Slow”; I beg to differ. I really don’t care about 0,1 kb of space usage. “Complex” is somewhat of an argument, but if I need a bit of complexity to get proper security I’ll take it.

                                                                                                          You can’t disagree with facts. ChaCha20 literally is slow. And you might not care about the space usage, but others do. And you don’t need the complexity, that’s pretty much the whole point of PCG.

                                                                                                        1. 3

                                                                                                          Ideally, all these custom allocators would be updated to include ASAN support - you can mark any allocated memory as ‘allocated, but poisoned as far as ASAN is concerned’.

                                                                                                          1. 4

                                                                                                            I’m not sure this is the best way.

                                                                                                            Some of these applications (including Apache) have Valgrind support, which works for Valgrind. That was once the tool to use for these things, but it’s clearly outperformed by ASAN today.

                                                                                                            It seems like a better approach to just have a “make every allocation a real call to malloc” mode, which will work for all current and future tools in that space.

                                                                                                          1. 9

                                                                                                            This has a problem that may not be immediately obvious: as soon as one of your passwords gets breached, a hash of your master password is effectively public and can be attacked via brute force. One brute-forced master password then gives access to all of your accounts.

                                                                                                            Thus unless your master password is extremely strong (impractical to brute-force), this whole concept is very risky.

                                                                                                            I wouldn’t recommend such a scheme.
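
                                                                                                            A sketch of why (the scheme’s exact KDF and parameters are assumed here, purely for illustration): the site password is a deterministic function of the master password, so a single leaked site password becomes an offline cracking target for the master.

                                                                                                            ```python
                                                                                                            import base64, hashlib

                                                                                                            def site_password(master: str, site: str, version: int = 1) -> str:
                                                                                                                # Assumed shape of such a scheme: derive the site password
                                                                                                                # from the master with a KDF, salted by site name and version.
                                                                                                                key = hashlib.scrypt(master.encode(),
                                                                                                                                     salt=f"{site}:{version}".encode(),
                                                                                                                                     n=2**12, r=8, p=1, dklen=18)
                                                                                                                return base64.b64encode(key).decode()

                                                                                                            leaked = site_password("hunter2", "example.com")  # exposed in a site breach

                                                                                                            # Offline attack: try candidate masters until one reproduces the leak.
                                                                                                            for guess in ["letmein", "password1", "hunter2"]:
                                                                                                                if site_password(guess, "example.com") == leaked:
                                                                                                                    print("master password recovered:", guess)  # unlocks every site
                                                                                                                    break
                                                                                                            ```

                                                                                                            The KDF slows each guess down, but it cannot save a weak master password: the attacker can test candidates at full speed on their own hardware, with no rate limiting.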

                                                                                                            1. 4

                                                                                                              I think even more problematically, this scheme also prevents users from effectively changing their passwords on websites whose databases get breached and passwords exfiltrated.

                                                                                                              To change that password, you have to change the “master” password (or, more realistically, have 2 master passwords) and remember that again.

                                                                                                              It doesn’t seem very effective for keeping people safe in the real world.

                                                                                                              1. 2

                                                                                                                Not true, each password has a “version” which I believe does something like tack on a version number to the scrypt input. A more robust way to do it would be to iterate a random salt input, or to use a one-way hash function on a KDF chain (similar to key ratcheting).
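
                                                                                                                A possible shape for that ratchet variant (my construction, purely illustrative, not what the app actually does): each version bump advances a one-way chain key, and the visible password is derived from it with a separate hash, so a leaked password reveals neither the chain key nor the master.

                                                                                                                ```python
                                                                                                                import hashlib

                                                                                                                def versioned_password(master: bytes, site: str, version: int) -> str:
                                                                                                                    # version >= 1; the chain key ratchets forward once per bump,
                                                                                                                    # and the output is split off with a domain-separated hash.
                                                                                                                    chain = hashlib.sha256(master + site.encode()).digest()
                                                                                                                    for _ in range(version):
                                                                                                                        out = hashlib.sha256(chain + b"output").hexdigest()[:20]
                                                                                                                        chain = hashlib.sha256(chain + b"advance").digest()
                                                                                                                    return out
                                                                                                                ```

                                                                                                                Because the chain only moves forward through a one-way hash, knowing the version-1 password doesn’t let an attacker roll forward to version 2.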

                                                                                                                1. 3

                                                                                                                  The user would now have to remember a version for each website instead, which doesn’t sound very practical. Unless you store it, but then the premise of this kinda flies out the window.

                                                                                                                  1. 1

                                                                                                                    No, the app remembers it. JS localStorage or similar. Not sure how the other versions (iOS, etc.) handle it.

                                                                                                                    1. 4

                                                                                                                      Yes, but as mentioned; this throws out the part that the app doesn’t store anything.

                                                                                                                      1. 2

                                                                                                                        Compute secure passwords without storing them [passwords] anywhere

                                                                                                                        You still have to store the website’s configuration somewhere (password length, required characters, version, etc.).

                                                                                                                        1. 3

                                                                                                                          I don’t think in that case you win anything significant over storing the password itself. Using a password database, you can also use custom passwords (e.g. when you import old passwords or are given passwords), and with a strong vault password it should have equivalent security.

                                                                                                                  2. 1

                                                                                                                    So I guess you end up tacking a number to the end of your password and get to remember that, per site. I guess that’s not too bad, but I still wouldn’t call it great.

                                                                                                                2. 1

                                                                                                                  One would have to know that such a scheme is being used to begin attacking it, though, which is likely never the case for general website passwords.

                                                                                                                  1. 3

                                                                                                                    Unless you see every “required” character in the first characters of a password, until they fix that bug (unless I’m wrong that it works like that, I only read it quickly).

                                                                                                                    Any password solution that gets less robust the more popular it is, is a bad one.

                                                                                                                1. 2

                                                                                                                  Only 23.4% of respondents even remembered seeing checksums on websites they had used in the past. Only 5.2% of respondents select the correct answer (out of six possible options including ‘not sure’ and ‘other’) when asked what checksums were for.

                                                                                                                  I doubt these numbers; they seem far too high to me if they chose “average users”.

                                                                                                                  In any case there’s a very simple answer to all of this: Use HTTPS(+HSTS) everywhere and ignore the checksums.

                                                                                                                  The idea of extending SRI to downloads has some merit in case the downloads are hosted separately on a less trusted host. But that’s a more obscure scenario. HTTPS does 99% of what you want to do with checksums and works without any extra user interaction.
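
                                                                                                                  For reference, an SRI integrity value is just a prefixed, base64-encoded hash; a sketch of computing one for a downloadable file:

                                                                                                                  ```python
                                                                                                                  import base64, hashlib

                                                                                                                  def sri_value(data: bytes) -> str:
                                                                                                                      # Subresource Integrity format: "<alg>-<base64 digest>"
                                                                                                                      digest = hashlib.sha384(data).digest()
                                                                                                                      return "sha384-" + base64.b64encode(digest).decode()

                                                                                                                  # Hypothetical use on a page linking to a separately hosted file:
                                                                                                                  #   <a href="https://cdn.example.com/tool.zip" integrity="sha384-...">
                                                                                                                  print(sri_value(b"example file contents"))
                                                                                                                  ```

                                                                                                                  Note that the integrity attribute is currently only specified for script and link elements, so applying it to downloads, as suggested above, would require extending the spec.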

                                                                                                                  1. 1

                                                                                                                    The idea of extending SRI to downloads has some merit in case the downloads are hosted separately on a less trusted host.

                                                                                                                    Websites pulling in 3rd party content and JS is getting more common every day.