1. 2

    The other day I was debugging an issue where I did include the meta tag, but I still had weird zooming issues; when an input field would get focused, Safari would zoom towards it, even though it already was in full view.

    Long story short, the fix is both the meta tag and input { font-size: max(1em, 16px); }

    (I checked with the inspector afterwards. Without that rule, my input had a font size of 15.852px, and Safari auto-zooms if it’s less than 16px.)
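
    For anyone who wants to try it, the fix amounts to this pair (the viewport values below are the common defaults, not quoted from the comment):

    ```html
    <meta name="viewport" content="width=device-width, initial-scale=1">

    <style>
      /* Keep every input's effective font size at 16px or above so
         Safari has no reason to auto-zoom on focus, while still
         scaling up with the surrounding text. */
      input { font-size: max(1em, 16px); }
    </style>
    ```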

    1. 1

      When you use ZFS, why have a separate OS disk? You can just boot from your ZFS RAID. I don’t know how well Ubuntu supports this, but on FreeBSD it’s a breeze.

      1. 1

        I’m not using RAID. I just have two main storage drives (an 18TB and a 12TB) and two backups (a 14TB and a 10TB respectively). I’m using Alpine Linux. I haven’t tried to boot to encrypted ZFS on Linux yet, but that might be a good project for the future. For right now, I just find it easier to keep my OS drive and storage drives separate. There’s very little on the OS drive. With Alpine Linux, I’m using 1.3G of the 128GB M.2 SSD in there.

      1. 6

        Out of curiosity, I just installed Opera 12, the last release for FreeBSD, and it works really well for being 7 years old. I had to symlink /usr/local/lib/libfreetype.so to /usr/local/lib/libfreetype.so.9, but that doesn’t seem to cause any problems.

        Some issues I’ve encountered after ~10 minutes of browsing:

        • Lobste.rs initially gave a “Handshake failed because the server does not want to accept the enabled SSL/TLS protocol versions.” error, which is actually explained on the error page. I enabled all of the security protocols in the settings and restarted, and it seems to work with TLS 1.2 now, and I’m typing this message in it.
        • Some fields on GitHub’s new code view page show as empty rectangles, specifically the last commit messages and the last-modified dates.
        • Some sites (Reuters, NYTimes) have a big pause the first time they load. Maybe the JS is running synchronously and blocking the main thread?
        • The reuters.com main page displays fine, but images don’t show up after clicking through to the articles. This is almost certainly due to the JavaScript slide-show they use.
        • Some HTTPS sites just refuse to load; I suspect they’re using TLS > 1.2. Opera only exposes a UI to enable TLS up to 1.2, but maybe there’s a workaround editing the INI files directly…
        • Wikipedia redirects every page to a “Your Browser’s Connection Security is Outdated” page

        If it weren’t for the HTTPS issues, I’d say it’s still usable for day to day browsing - at least for me.

        Edit: After browsing around a bit longer, I’ve gotten a few random crashes. Maybe something to do with the freetype library hack? In any case, it’s nice enough to bring up a crash dialog that offers to restart with the last set of tabs open; unlike the “Firefox needs to update and restart” dialog, Opera actually does reopen them.

        1. 3

          Pretty sure O12 predates TLS 1.3 (at least the current spec), so I think 1.2 is the best you’re going to get.

          1. 1

            Well, I guess there is an option of using a MITM proxy in front of the browser. I think Squid has an option to install a proxy CA in the browser and craft certificates that have the same «verification status» but relative to a different CA.

            1. 1

              Do you have any resources on setting up Squid to effectively downgrade TLS? Last I looked into it, I couldn’t seem to make Squid do what I wanted to from reading their documentation.

              1. 1

                I never got around to setting this up, as my real-browser usage dropped off a bit and domain-based blocking works well enough.

                The current Squid wiki seems to imply it negotiates protocol details independently with the client and the server; apparently (judging from Stack Exchange) enforcing a bump at step 1 (at the very beginning of the handshake) worked a few years ago, but it did not mimic the certificate status properly.
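
                For the record, a hypothetical squid.conf sketch of that “bump at step 1” setup (the directive names are real Squid ones, but the paths and values are untested placeholders):

                ```
                http_port 3128 ssl-bump tls-cert=/usr/local/etc/squid/proxy-ca.pem \
                    generate-host-certificates=on

                # Bump (terminate TLS ourselves) immediately at step 1,
                # without peeking at the real server certificate first.
                acl step1 at_step SslBump1
                ssl_bump bump step1
                ssl_bump bump all
                ```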

          2. 2

            Nice experiment!

            I’ve disabled TLS 1.3 in my Firefox (it does 1.2 only). I’ve been browsing like this for over a year, but I’ve never hit a site that refused to load because of this. Maybe Opera is missing some cipher suites in TLS 1.2?
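
            In case anyone wants to reproduce this: the knob involved is a real about:config preference; as far as I know the value mapping runs 1 = TLS 1.0 up to 4 = TLS 1.3.

            ```
            // about:config
            security.tls.version.max = 3   // cap at TLS 1.2; 4 would re-enable 1.3
            ```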

          1. 11

            Is it harder to send a plain-text email than to create a GitHub account (or equivalent,) create/upload SSH keys, learn non-git concepts (in addition to git concepts) like pull requests, and whatever else GH requires?

            1. 9

              I think so, yes. The GitHub UI is fairly straightforward. Sending email patches isn’t, especially if you’re not very familiar with email, and there’s a long list of stuff you can do wrong.

              1. 6

                Whether something is harder or easier to learn depends largely on your starting point and available educational resources.

                I think we’ve reached a tipping point where A) more developers are familiar with github than with plaintext email, and B) better tutorials exist for github than for plaintext email.

                1. 4

                  GitHub is a proprietary, non-free software project controlled by Microsoft. A free software project like the Linux kernel should not treat the number of potential developers who are familiar with proprietary software-based workflows as relevant for running their free software project.

                  1. 6

                    Not sure what that has to do with the question I was answering, which was about the relative difficulty of different workflows.

                2. 4

                  git-send-email is more scary than hard. It’s an interface that always makes me feel like I’m one keystroke away from doing something wrong/embarrassing and there’s no way back from sending an email.

                  In the Phabricator patch workflow or the GH/GL pull request workflow, there’s always a very clear preview of how everything would exactly look when it’s published.

                  1. 2

                    --annotate can help you with that, letting you preview and edit emails before sending them. There is also --confirm=always, which asks you to confirm whether to send each message.
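
                    A sketch of what that looks like in practice (the flags are real git-send-email options; the address and patch filename are placeholders):

                    ```shell
                    # Open each mail in $EDITOR first, and prompt before every send:
                    git send-email --annotate --confirm=always \
                        --to=maintainer@example.org 0001-fix-typo.patch

                    # Or rehearse the whole run without sending anything at all:
                    git send-email --dry-run --to=maintainer@example.org 0001-fix-typo.patch
                    ```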

                    1. 1

                      Yes, I’ve used both of course. Can’t say it helps with the feeling.

                    2. 2

                      Well, with the GitHub workflow you cannot retract a pull request either. And if you push something bad and quickly git commit --amend; git push -f your code, GitHub will show everyone that you force-pushed.

                      I think this is just one of those cases where GitHub is not intimidating because you know it, while the Git e-mail flow is intimidating because, albeit around much longer, it’s less used.

                      We definitely need more and better guides for the mail flow (Drew has a nice start, we need more of those!)

                      1. 1

                        I feel the same way - you don’t want to be the n00b who messes up and causes a maintainer to simply drop your patch on the floor without feedback.

                        However, this points to the need for lower-stakes projects that work with email-based workflows, so that new developers can acculturate to the workflow and then confidently take part in, for example, kernel development. In that sense, email-focused outfits like Sourcehut are a valuable feeder service.

                    1. 8

                      IRC is clearly not good enough. It had the users at some point and network effects should have allowed it to remain on top if it had truly been good enough. My hope is that Matrix will take the top spot. XMPP would be okay as well. But thankfully Matrix and XMPP can be made to bridge to one another. Yay openness, federation and bridging!

                      1. 7

                        In my opinion, Matrix is a very heavy and complex protocol, and it’s getting even heavier and more complex: you can hardly say it’s really an open standard. Element is pretty much the only usable client, Synapse is pretty much the only usable homeserver, and matrix.org—where everyone registers—is always slow and has trouble federating. I think its future depends mainly on New Vector and commercial interests.

                        1. 3

                          Yes, Matrix the protocol is complex to implement. But it doesn’t seem needlessly complex when looking at the requirements. And the specification is designed so that clients are easy to implement: the server does most of the work. Here’s a good example of a simple client: https://github.com/ara4n/random/blob/master/bashtrix.sh
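
                          In that spirit, a minimal “client” is little more than two HTTP calls. The endpoints below are from the r0 client-server API, but the homeserver, credentials and room id are placeholders, and this is an untested sketch:

                          ```shell
                          # Log in and grab an access token.
                          TOKEN=$(curl -s -X POST https://matrix.example.org/_matrix/client/r0/login \
                            -d '{"type":"m.login.password","user":"alice","password":"secret"}' \
                            | jq -r .access_token)

                          # Send a text message; the last path segment is a client-chosen txn id.
                          curl -s -X PUT \
                            "https://matrix.example.org/_matrix/client/r0/rooms/%21room%3Aexample.org/send/m.room.message/$(date +%s)?access_token=$TOKEN" \
                            -d '{"msgtype":"m.text","body":"hello from curl"}'
                          ```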

                          There are lots of usable clients, but you’re right that element is probably the only full-featured one. The alternative homeservers are coming along well lately.

                          Also, there is the Matrix foundation, which owns the copyright on most stuff. True, New Vector / Element (has the company renaming completed yet?) is the major driving force, just as Mozilla is to Firefox. But there are others working on it as well. Without New Vector the progress would be slower, but I doubt it would stop.

                          1. 1

                            Clients can be made to be (at least partially) lightweight fairly easily. What I was mostly referring to is the server-side implementation. I was considering running my own Matrix instance for me and my friends, only to discover that we’d need a significant amount of RAM and CPU time since we were planning to subscribe to a bunch of high-traffic rooms and run a few bridges to various other services.

                            I have an account on matrix.org and it does get slow at times when it tries to fetch new messages after a few hours of inactivity. I’m not sure how much the existing, “primary” Synapse implementation can be optimized, but it could sure use it if at all possible.

                            1. 2

                              True, Synapse can be a bit heavy. But the Matrix people seem to be following the rule “make it work, make it right, make it fast”. Synapse has improved a lot in the past couple of months, and they are focusing on improving it further. Memory use can be hard to completely fix in a Python-based project. But Dendrite is also improving, and Construct is an independent homeserver that can federate and is fast (I haven’t run it, though). And there are some other homeserver projects that seem to be quite fast as well.

                            2. 1

                              Decisions like polling JSON over HTTP instead of working directly with TCP sockets were intended to make it easier to implement. The protocol is simple but huge: there are so many endpoints and JSON objects that creating a full-featured client is extremely difficult. One of the most difficult aspects is also UI/UX, especially for device verification, cross-signing and encryption keys. It’s a lot of simple things, bundled into a single complex thing.

                            3. 2

                              I haven’t gone through all the specifics regarding the protocol itself, but I do agree the current implementations of the protocol are pretty taxing in terms of system resource usage, among other things. The problem is: how do you implement a protocol that provides so much functionality and keep it lightweight? Such a task would require a lot of engineering effort and thinking many things through before you touch a keyboard.

                              If Matrix catches on even more, less heavy implementations will likely appear at some point, but it won’t happen overnight.

                              1. 2

                                How do you implement a protocol that provides so much functionality and keep it lightweight? Don’t. Instead of a huge single protocol, you can use multiple lightweight protocols to achieve the same thing, and you can even glue them together into a single platform. Matrix is, in a sense, a bridging layer that connects multiple different protocols; but it’s so big that it can also be used standalone. You could easily replace Matrix with XMPP, IRC, ZNC and IPFS, and it would work just fine.

                            4. 2

                              I think you can bridge IRC and Matrix as well

                              1. 5

                                Because IRC is not federated, the bridging there is necessarily uglier and weirder. You have to have a nick on the target network in order to speak there, and there’s no obvious nick the bridge can just make up for you that will not look weird to IRC people.

                                1. 2

                                  There are two different kinds of bridging you can do: as a Matrix user you can bridge your account to freenode, which mostly works but is a bit flaky and rather difficult to set up. As a channel owner you can bridge the entire channel, and this works great; you only have to do it once and everyone benefits from it. We did this in the #fennel channel and I have no regrets; all the core devs can keep the workflow they’re familiar with, and all the newcomers can come through Matrix and get the persistent history and other nice features that take more work to set up on IRC.

                                  1. 2

                                    Yeah, not on Freenode, but apart from a once-a-month “I didn’t get that message” or some formatting mess-ups with certain Matrix clients, it’s pretty flawless. I love IRC, but it’s a pain on mobile; Matrix is a lot better there.

                                  2. 1

                                    You can. And there’s also matrix-ircd which is a matrix client and irc server. So you can use your irc client to connect to matrix.

                                1. 1

                                  This is a very interesting idea. I wonder how this would work with JSON data; I have been trying to gzip-compress JSON data, and I found that using escaping instead of UTF-8 gives me a small compression boost (maybe because every byte has the most significant bit set to 0?), but I have been wondering if I could squeeze out more by sorting arrays and dictionaries in a specific way.
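
                                  The escaping claim is easy to measure yourself; a quick sketch (the payload here is made up for illustration, and which variant wins will depend on your data):

                                  ```python
                                  import gzip
                                  import json

                                  def gzip_size(data, escape):
                                      # escape=True emits non-ASCII as \uXXXX sequences (ASCII-only
                                      # output); escape=False emits raw UTF-8 bytes. sort_keys makes
                                      # the comparison stable across runs.
                                      text = json.dumps(data, ensure_ascii=escape, sort_keys=True)
                                      return len(gzip.compress(text.encode("utf-8")))

                                  payload = {"names": ["café", "naïve", "Zürich", "日本語"] * 100}
                                  escaped = gzip_size(payload, escape=True)
                                  raw = gzip_size(payload, escape=False)
                                  print(escaped, raw)  # compare the two on your own data
                                  ```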

                                  1. 4

                                    I once pointed out to a new developer that I use git blame when something doesn’t work, and whoever touched the line last will have to answer first. So by changing virtually every line in the project, every bug is now yours.

                                    1. 25

                                      Safari is a joke.

                                      Why? Personally I use it a lot and I really like it. Moreover, WebKit wouldn’t exist without Safari, and Chrome was forked from WebKit. Back in the day, even IE wasn’t a joke, and it killed Netscape. Could you elaborate?

                                      1. 15

                                        I’m a bit puzzled by this statement as well. The most common criticism of Safari is that it’s slow to implement new features, if it implements them at all. Given the rest of the article, it’s safe to assume that the author doesn’t share this criticism.

                                        1. 9

                                          It’s a funny statement, as Safari actually will not implement 16 Web APIs due to privacy/tracking concerns. So they aren’t adding the bloat that Drew complains about ;-)

                                          1. 3

                                            It was my preferred browser on Mac too. You can disable tabs in it, which is pretty much impossible in Firefox or Chrome now.

                                            1. 3

                                              WebKit wouldn’t exist if it weren’t for KDE and KHTML. We can thank Apple and now Google for creating forks that just fragment the community.

                                            1. 6

                                              Deduplication (heh my phone wants to correct to “reduplication”??) in ZFS is kind of a mis-feature that makes it easy to destroy the performance. (I’ve had some painful experiences with it on a small mail server…) Pretty much everyone recommends not enabling it ever. So indeed it’s not a realistic concern, but it is fun to think about.

                                              It shouldn’t be that hard to add a setting to ZFS that would only show logicalused to untrusted users, not used.
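
                                                For reference, both properties already exist per dataset; the suggestion is only about which one untrusted users get to see (tank/home is an example dataset name):

                                                ```
                                                # 'used' reflects on-disk space after dedup/compression;
                                                # 'logicalused' is the size the data appears to occupy without them.
                                                zfs get used,logicalused,compressratio tank/home
                                                ```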

                                              1. 9

                                                For folks not familiar with ZFS, just want to expand on what @myfreeweb said: “pretty much everyone” even includes the ZFS folks themselves. The feature has a bunch of warnings all over it about how you really need to be sure you need deduplication, and really you probably don’t need it, and by the way you can’t disable it later, so you had better be damn sure.

                                                btrfs’ implementation, though, does not AFAIK suffer from the performance problems ZFS’ does, because btrfs is willing to rewrite existing data pretty extensively, whereas ZFS is not: this operation (“Block Pointer Rewrite”) would, among other problems, break a bunch of the really smart algorithms they use to make stuff like snapshot deletion fast. A btrfs filesystem after offline deduplication is not fundamentally different from the same filesystem before. ZFS deduplication fundamentally changes the filesystem because it adds a layer of indirection.

                                                logicalused seems like a good idea. It doesn’t fix the timing side channel, though. I think you’d want to keep a rolling average of how long recent I/O requests took to service, plus the standard deviation. Then pick a value from that range somehow (someone better at statistics than me could tell you exactly how) and don’t return from the syscall for that amount of time. Losing the performance gain from a userspace perspective is unavoidable since that’s the whole point, but you can use that time (and more importantly, I/O bus bandwidth) to service other requests to the disk.
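
                                                A toy sketch of that padding idea (Python for illustration only; a real implementation would live inside the filesystem and need sounder statistics than mean plus one standard deviation):

                                                ```python
                                                import statistics
                                                import time
                                                from collections import deque

                                                recent = deque(maxlen=256)  # durations of recent requests, in seconds

                                                def padded(handler):
                                                    # Serve the request, then pad the reply time toward the recent
                                                    # distribution so fast (e.g. deduplicated) paths are not observable.
                                                    start = time.monotonic()
                                                    result = handler()
                                                    took = time.monotonic() - start
                                                    recent.append(took)
                                                    if len(recent) >= 2:
                                                        target = statistics.mean(recent) + statistics.stdev(recent)
                                                        if took < target:
                                                            time.sleep(target - took)
                                                    return result

                                                value = padded(lambda: 42)
                                                print(value)
                                                ```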

                                                (Side note: my phone also wanted to correct to “reduplication”. Hilarious. Someone should stick that “feature” in a filesystem based on bogosort or something.)

                                                1. 2

                                                  It shouldn’t be that hard to add a setting to ZFS that would only show logicalused to untrusted users, not used.

                                                    I think that’s harder than you think. The df(1) command will show you free space, and I’m not sure you can set a quota that hides whether a file was deduplicated. Also, a user can use zpool(8) to see how much space is used in total.

                                                  However, I hardly think this is going to be a problem with ZFS, because, as you say, “Pretty much everyone recommends not enabling it ever”. I have never experienced a use case where deduplication in ZFS would be advantageous for me, on the contrary; ZFS gets slower because it has to look up every write in a deduplication table, and it uses more space because it has to keep a deduplication table. If you enable deduplication on ZFS without thorough research, you will be punished for it with poor performance long before security becomes an issue.

                                                  1. 2

                                                      I mean reporting logicalused everywhere, like df, and hiding the zpools.

                                                    The pools would already be hidden if it’s e.g. a FreeBSD jail with a dataset assigned to it.

                                                1. 4

                                                    As a matter of medical ethics, I’m not convinced that creating and marketing this drug in particular, or in general any drug that has depression and suicidal tendencies as side effects, is necessarily the wrong thing to do. Lots of useful drugs have serious side effects, and while it’s important for both doctors and patients to be aware of those side effects when deciding whether to use the drug, I don’t think their existence implies that no one should use it. That’s a complicated medical question that depends on how likely those side effects are, exactly how bad they are, what ailments the drug purports to treat, how effectively it does so, and how bad the untreated effects of that ailment are. (Additionally, I wouldn’t assume that because one person I heard about in the news committed suicide while taking this drug, the drug was specifically responsible for that suicide, or that even if it was, that suicide necessarily outweighed the aggregate benefits of using the drug to treat an ailment.) In any case, it’s certainly not a question that programmers, as opposed to doctors and medical regulators, have any special insight about.

                                                    Part of the reason the societies we live in have things like medical ethics laws, governmental regulatory organizations like the FDA, drug regulations, and so on, is that the ethical questions about when it is and is not okay to market and sell a drug are complicated, and answering them requires medical domain-specific knowledge as well as a shared conception of the common good. It’s a very reasonable position to follow the letter of the law unless and until some knowledgeable medical authority, or the evidence of my own or my community’s experience with the drug, convinced me that the regulatory system around this specific drug ought to be changed. And I feel like, say, a doctor writing a blog might convince me that more people would die without this drug than with it, just as easily as they might convince me that the suicidal-thoughts side effect was too serious and the medical regulatory establishment erred in letting this drug be sold at all.

                                                  Given that I’m not convinced the drug company actually acted unethically, or that the laws that permitted them to sell and market this drug should be changed, then why should I expect a programmer to refuse to write code on behalf of such a drug company?

                                                  1. 34

                                                    not convinced the drug company actually acted unethically,

                                                    Presenting a neutral-appearing “find the best drug for you!” informational website and then always giving the same answer with a fake quiz doesn’t sound like a bald-faced lie to you?

                                                      The issue here isn’t that companies are marketing their drugs; it’s that they’re being deceitful lying twats about it. You seem to have completely missed the point. As you mentioned, drugs and side effects are complicated and hard, which is why you shouldn’t create fake “informational” websites with fake quizzes to market your drugs.

                                                    1. 13

                                                      Agree completely with your point, I think it’s quite clearly unethical behaviour.

                                                      The issue here isn’t that companies are marketing their drugs

                                                        It could be one of them. It strikes me as incredibly odd, when I see American TV, that I’m not only bombarded with drug ads, but that they’re targeted at patients rather than at medical professionals.

                                                      1. 1

                                                          I’ve been thinking about this too. It’s also super weird to me that there’s a whole list of side effects at the end… shouldn’t my doctor be telling me that, and not the ad? What if I didn’t see the ad? I’m sure there’s some baroque liability reason they have to do it, but it’s dumb. (American here, FWIW.)

                                                    2. 5

                                                      I haven’t worked on code that’s specifically related to drugs and medication, but I have worked on medical devices so I guess my opinion is… partly informed? :). I don’t have experience bringing drugs to market but I’m somewhat familiar with the regulatory system involved.

                                                        There is no question as to the fact that drugs with certain side effects should be allowed. Practically all of them have side effects. Even “natural” medicine, like various plants and whatever, has side effects and can interact with other drugs, “natural” or not. That’s why the side effects are listed in the fine print – so that physicians and patients can make an informed decision about these things, and so that reactions can be properly supervised. The effectiveness of the fine print is another debate, but I think we can safely argue that the benefit of some substances can outweigh the risk of side effects, as long as administration is properly supervised and so on. For example, if a drug can cause depression and anxiety, a doctor can recommend close monitoring of a patient’s mental state, by another doctor or even a psychiatrist if necessary, especially if they lack a support system (if they live alone, secluded etc.) or have a history of depression. Or they may avoid that drug altogether if possible.

                                                        However, uninformed self-medication is also a thing. That’s part of the reason why some drugs are only issued based on prescriptions, and why you’re supposed to keep some of them out of children’s reach and so on. It’s a very real problem, especially when it’s related to drugs for conditions that carry some form of social stigma (mental illnesses, STDs), or for particularly difficult age ranges, where people have difficulty seeking help. Depressed teenagers, for example, are not very likely to go to adults for help, especially if their depression is fueled by adults in their life, like abusive parents.

                                                      “Proving that you have a prescription” over the Internet was pretty easy to do twenty years ago (and I think it still is in some cases). You can often use someone else’s. A teenage girl can generally use her mother’s prescriptions pretty easily, for example.

                                                        Now, of course, there’s only so much you can do to prevent self-medication by uninformed people. At the end of the day, if people think it’s a good idea, they’ll get their stuff one way or another. You can’t keep all drugs under lock and key in a safe and refuse to hand them out unless someone brings in their doctor and three independent witnesses to confirm that they need the drug and that the prescription they have is real. You print out (or the FDA makes you print out) big warnings, the drug can only be sold to prescription holders under specific conditions, etc. There’s a point past which you can’t do much to prevent self-medication.

                                                        But acting in a manner that encourages self-medication – deliberately circumventing a physician’s ability to supervise medication and the patient’s evolution – is absolutely unethical. It’s akin to going into a hospital, leaving a bunch of pills on the table, and telling people to help themselves if they want, as long as they don’t tell their doctors about it. Doing so against a vulnerable age group makes it worse, too. Especially if the targeting deliberately exploits a prevalent vulnerability. (Edit: someone here mentioned Accutane – that was my guess, too, but I was hesitant to call it out, since the original author didn’t, and I don’t know that much about what was allowed in Canada twenty years ago. Accutane was a drug that was meant to help with acne. Yep.)

                                                      1. 4

                                                        Based on timing, target audience, and the issues it caused, it sounds an awful lot like Accutane. That’s a since-discontinued drug for treating acne.

                                                        That aside, I think I would expect myself to refuse to write the code in question because:

                                                        Remember, this website was posing as a general information site. It was not clearly an advertisement for any particular drug.

                                                        and

                                                        “Well, it seems that no matter what I do, the quiz recommends the client’s drug as the best possible treatment. The only exception is if I say I’m allergic. Or if I say I am already taking it.”

                                                        and

                                                        “Yes. That’s what the requirements say to do. Everything leads to the client’s drug.”

                                                        Quite apart from any ethical judgement about what suicide frequency is acceptable as a side-effect for an acne drug, I’d expect myself to refuse to write code whose purpose is to deceive the public into making a particular health decision for the benefit of my client.

                                                        1. 5

                                                          I’d expect myself to refuse to write code whose purpose is to deceive the public into making a particular health decision for the benefit of my client.

                                                          I would too. But I also remember how I was a lot more naive when I started coding, and I would not be surprised if I wouldn’t have picked up on this, just like the author. You expect that, if this was not okay, someone else would have stepped in already. It comes as a shock when you find out that “someone else” should have been you.

                                                          1. 3

                                                            Yes. I should have said “I would now expect myself…”

                                                            I can’t claim with any certainty that I would have caught on 20 years ago.

                                                          2. 2

                                                            Somewhat off-topic, but Accutane isn’t discontinued, although according to Wikipedia the original manufacturer no longer produces it - was that what you meant?

                                                            It is highly controlled in (at least) the US though. You have to get monthly blood tests to make sure it isn’t killing your liver, and if you can get pregnant you have to be on (IIRC) at least two forms of birth control. The latter is why it’s so controlled - it causes really severe birth defects.

                                                            1. 1

                                                              I didn’t realize anyone had picked up the manufacturing. Wow. I remembered it as having gone away.

                                                              1. 1

                                                                Yeah. I took it in 2015, which is how I know. I switched away from Accutane halfway through though because it was cheaper to go with a generic version, which was marketed as something else but had the same underlying active ingredient (isotretinoin) - maybe that’s what you’re thinking of? When I was reading about it last night Wikipedia said the original manufacturer shut down production because cheaper generic versions had become available (and because of lawsuit settlements over side effects…), so it’s unclear to me as to whether in 2020 it’s still actually available under the brand name “Accutane”.

                                                        1. 5

                                                          One of the reasons that Flash was not adopted on iOS was that it drained the battery. Today, my browser eats up more than 50% of my CPU time while I’m using it, due to JavaScript-heavy web applications.

                                                          It was good advice to disable Flash on sites where you didn’t need it, as it made your browser faster and nearly all sites were implemented very well with graceful degradation. Today, people look at you weird for wanting to disable JavaScript.

                                                          I have immensely enjoyed Flash games, I still have a small collection of swf files and a standalone Flash runtime. Good times!

                                                          1. 2

                                                            One of the reasons that Flash was not adopted on iOS was that it drained the battery

                                                            Flash did all of its compositing in software. This meant that it couldn’t easily take advantage of hardware acceleration for video decoding. Apple actually added some APIs specifically for Flash to allow you to send a video to the GPU to decode and then pull the decoded frames back across the bus, but it wasn’t a massive win because you were pulling large images back across the bus, compositing in software, and then sending them back. At the same time, OS X did vector rendering in software (2D vector rendering is still typically faster on the CPU than GPU because the setup costs dominate for the GPU, even though the actual render is an order of magnitude faster) but then sent individual views to the GPU as textures and did all of the compositing there. A lot of animations with CoreAnimation were just short sequences of GPU commands, whereas the Flash equivalent did a load of work on the CPU.

                                                            Playing back a video in a <video> tag in Safari was about half the system load of playing exactly the same video in a Flash thingy on the same Mac. Given how important video was to Apple, if YouTube on iOS had had the same problem I think it would have changed the outcome for these devices.

                                                            It would have been possible to implement Flash in an efficient way but it would have been almost a complete rewrite of the graphics engine and Adobe was unwilling to do this.

                                                            Flash was also going through a very bad period for security. Every month there was a new sandbox escape and the existing browser plugin model ran plugins like Flash in the same process as the rest of the browser. You could (on *NIX, at least) run Flash in a separate process and sandbox it but that made things even slower / power intensive.

                                                            Today, my browser eats up more than 50% of my CPU time while I’m using it, due to JavaScript heavy web applications.

                                                            Try disabling GPU acceleration and see how much slower it gets. That’s roughly the ballpark for where Flash would be on the same hardware.

                                                            It was good advice to disable Flash on sites where you didn’t need it, as it made your browser faster and nearly all sites were implemented very well with graceful degradation. Today, people look at you weird for wanting to disable JavaScript.

                                                            This mostly worked for Flash because Flash was quite siloed. Flash widgets didn’t (usually) tightly bind to the DOM and modify the rest of the page, they were little self-contained views. That made fallback quite easy (though quite a few sites just did ‘this page requires Adobe Flash’ as their fallback). In contrast, JavaScript is closely interwoven with the rest of the site, so fallback typically means a complete fallback implementation of the site, not a fallback implementation of a single view.

                                                            I think the biggest thing that we lost with HTML5 versus Flash was that component separation. You can do quite strong separation with iframes that contain a canvas and a load of JavaScript, but then it’s hard for them to communicate with the rest of the page. Flash let you have little separately packaged things that could be enabled individually and integrate with the page.

                                                          1. 16

                                                            In here, we see another case of somebody bashing PGP while tacitly claiming that x509 is not a clusterfuck of similar or worse complexity.

                                                            I’d also like to have a more honest read on how a mechanism to provide ephemeral key exchange and host authentication can be used with the same goal as PGP, which is closer to end-to-end encryption of an email (granted they aren’t using something akin to keycloak). The desired goals of an “ideal vulnerability” reporting mechanism would be good to know, in order to see why PGP is an issue now, and why an HTTPS form is any better in terms of vulnerability information management (both at rest and in transit).

                                                            1. 22

                                                              In here, we see another case of somebody bashing PGP while tacitly claiming that x509 is not a clusterfuck of similar or worse complexity.

                                                              Let’s not confuse the PGP message format with the PGP encryption system. Both PGP and x509 encodings are a genuine clusterfuck; you’ll get no dispute from me there. But TLS 1.3 is dramatically harder to mess up than PGP, has good modern defaults, can be enforced on communication before any content is sent, and offers forward secrecy. PGP-encrypted email offers none of these benefits.

                                                              1. 6

                                                                But TLS 1.3 is dramatically harder to mess up than PGP,

                                                                With a user-facing tool that has removed all the footguns? I agree.

                                                                has good modern defaults,

                                                                If you take care to, say, curate your list of ciphers often and check the ones vetted by a third party (say, by checking https://cipherlist.eu/), then sure. Otherwise I’m not sure I agree (hell, TLS has a null cipher).

                                                                can be enforced on communication before any content is sent

                                                                There’s a reason why there’s active research trying to plug privacy holes such as SNI. There’s so much surface to the whole stack that I would not be comfortable making this claim.

                                                                offers forward secrecy

                                                                I agree, although I don’t think it would provide non-repudiation (at least without adding signed exchanges, which I think is still a draft) or without mutual TLS authentication, which can be achieved with PGP quite easily.

                                                                1. 1

                                                                  take care to, say, curate your list of ciphers often and check the ones vetted by a third party

                                                                  There are no bad ciphers in 1.3, it’s a small list, so you could just kill the earlier TLS versions :)

                                                                  Also, popular web servers already come with reasonable default cipher lists for 1.2. Biased towards more compatibility but not including NULL, MD5 or any other disaster.
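                                                                  As a rough sketch of the “just kill the earlier TLS versions” approach (in Python, which is my choice of illustration here, nothing the poster used):

```python
import ssl

# Pin the minimum protocol version to 1.3. TLS 1.3 defines only a small,
# fixed set of AEAD cipher suites, so there is no NULL, MD5, or
# export-grade cipher left to accidentally enable.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# Depending on the OpenSSL build, the remaining 1.3 suites can be listed;
# all of them are modern AEAD constructions.
tls13_suites = [c["name"] for c in ctx.get_ciphers()
                if c["protocol"] == "TLSv1.3"]
print(tls13_suites)
```

                                                                  On a build without TLS 1.3 support the list would simply come back empty; there is no cipher curation left for the application to get wrong.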

                                                                  I don’t think it would provide non-repudiation

                                                                  How often do you really need it? It’s useful for official documents and stuff, but who needs it on a contact form?

                                                                2. 3

                                                                  I want to say that it only provides DNS based verification but then again, how are you going to get the right PGP key?

                                                                  1. 3

                                                                    PGP does not have only one trust model, and that is a good part of it: you choose, according to the various sources of trust (TOFU through autocrypt, having also seen the key on the website, having got the keys IRL, having signed messages proving it’s the right one, Mr Doe…).

                                                                    Hopefully browsers and various TLS clients could mainstream such a model, and let YOU choose what you consider safe rather than what (highly) paid certificate authorities decide.

                                                                    1. 2

                                                                      I agree that there is more flexibility and that you could get the fingerprint from the website and have the same security.

                                                                      Unfortunately, for example the last method doesn’t work. You can sign anybody’s messages. Doesn’t prove your key is theirs.

                                                                      The mantra “flexibility is an enemy of security” may apply.

                                                                      1. 1

                                                                        I meant content whose exclusive disclosure is in a signed message, such as “you remember that time at the bridge, I told you the boat was blue, you told me you are colorblind”.

                                                                        [EDIT: I realize that I had in mind that these messages would be sent through another secure transport, until external facts about the identity of the person at the other end of the pipe gets good enough. This brings us to the threat model of autocrypt (aiming working through email-only) : passive attacker, along with the aim of helping the crypto bonds to build-up: considering “everyone does the PGP dance NOW” not working well enough]

                                                                        1. 1

                                                                          Unfortunately, for example the last method doesn’t work. You can sign anybody’s messages. Doesn’t prove your key is theirs.

                                                                          I can publish your comment on my HTTPS protected blog. Doesn’t prove your comment is mine.

                                                                          1. 2

                                                                            Not sure if this is a joke but: A) You sign my mail. Op takes this as proof that your key is mine. B) You put your key on my website..wait no you can’t..I put my key on your webs- uh…you put my key on your website and now I can read your email…

                                                                            Ok, those two things don’t match.

                                                                  2. 9

                                                                    I’d claim I’m familiar with both the PGP ecosystem and TLS/X.509. I disagree with your claim that they’re a similar clusterfuck.

                                                                    I’m not saying X.509 is without problems. But TLS/X.509 gets one thing right that PGP doesn’t: It’s mostly transparent to the user, it doesn’t expect the user to understand cryptographic concepts.

                                                                    Also the TLS community has improved a lot over the past decade. X.509 is nowhere near the clusterfuck it was in 2010. There are rules in place, there are mitigations for existing issues, there’s real enforcement for persistent violation of rules (ask Symantec). I see an ecosystem that has its issues, but is improving on the one side (TLS/X.509) and an ecosystem that is in denial about its issues and which is not handling security issues very professionally (efail…).

                                                                    1. 3

                                                                      Very true, but the transparency part is a bit fishy, because TLS included an answer to “how do I get the key” (which nowadays is basically DNS plus timing), while PGP was trying to give people more options.

                                                                      I mean, we could do the same for PGP, but whether that fits your security requirements is a question that needs answering… but by whom? TLS says CA/DNS; PGP says “you get to make that decision”.

                                                                      Unfortunately the latter also means “your problem” and often “idk/idc” and failed solutions like WoT.

                                                                      How could we do the same? We can do some validation: we send you an email encrypted to what you claim is your public key, at what you claim is your mail address, and you have to return the decrypted challenge. That seems fairly similar to DNS validation for HTTPS.
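                                                                      A hypothetical sketch of that challenge-response check (the PGP encryption and mail delivery steps are elided; the names are mine, not from any real tool):

```python
import hmac
import secrets

# The server issues a random challenge, encrypts it to the claimed PGP key,
# and mails it to the claimed address (both steps elided here). Only someone
# who controls both the mailbox and the private key can send it back.
def issue_challenge() -> str:
    return secrets.token_urlsafe(32)

def verify_response(issued: str, returned: str) -> bool:
    # Constant-time comparison, so the check itself leaks nothing via timing.
    return hmac.compare_digest(issued, returned)

challenge = issue_challenge()
print(verify_response(challenge, challenge))      # the valid round-trip
print(verify_response(challenge, "wrong-guess"))  # a failed attempt
```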

                                                                      While we’re at it…. Add some key transparency to it for accountability. Fix the WoT a bit by adding some DOS protection. Remove the old and broken crypto from the standard. And the streaming mode which screws up integrity protection and which is for entirely different use-cases anyway. Oh, and make all the mehish or shittyish tools better.

                                                                      That should do nicely.

                                                                      Edit: except, of course, as Hanno said: “an ecosystem that is in denial about its issues and which is not handling security issues very professionally”…that gets in the way a lot

                                                                      1. 2

                                                                        I’d wager this is mostly a user-facing tooling issue, rather than anything else. Do you think a more mature tooling ecosystem would make PGP more salvageable for, say, vulnerability disclosure emails instead of a Google web form?

                                                                        If anything, I’m more convinced that the failure of PGP is to trust GnuPG as its only implementation worthy of blessing. How different would it be if we had funded alternative, industry-backed implementations after e-fail in the same way we delivered many TLS implementations after heartbleed?

                                                                        Similarly, there is a reason why there’s active research on fuzzing TLS implementations for their different behaviors (think frankencerts). Mostly, this is due to the fact that reasoning about x509 is impossible without reading through stacks and stacks of RFCs, extensions and whatnot.

                                                                        1. 0

                                                                          I use Thunderbird with Enigmail. I made a key at some point and by now I just send and receive as I normally do. Mails are encrypted when they can be encrypted, and the UI is very clear on this. Mails are always signed. I get a nice green bar over mails I receive that are encrypted.

                                                                          I can’t say I agree with your statement that GPG is not transparent to the user, nor that it expects the user to understand cryptographic concepts.

                                                                          As for the rules in the TLS/X.509 ecosystem, you should ask Mozilla if there’s real enforcement for Let’s Encrypt.

                                                                        2. 4

                                                                          The internal complexity of x509 is a bit of a different one than the user-facing complexity of PGP. I don’t need to think about or deal with most of that as an end-user or even programmer.

                                                                          With PGP … well… There are about 100 things you can do wrong, starting with “oops, I bricked my terminal as gpg outputs binary data by default” and it gets worse from there on. I wrote a Go email sending library a while ago and wanted to add PGP signing support. Thus far, I have not yet succeeded in getting the damn thing to actually work. In the meanwhile, I have managed to get a somewhat complex non-standard ACME/x509 generation scheme to work though.

                                                                          1. 3

                                                                            There have been a lot of vulns in x509 parsers, though. They are really hard to get right.

                                                                            1. 1

                                                                              I’m very far removed from an expert on any of this, so I don’t really have an opinion on the matter as such. All I know is that as a regular programmer and “power user” I usually manage to do whatever I want to do with x509 just fine without too much trouble, but that using or implementing PGP is generally hard and frustrating to the point where I just stopped trying.

                                                                            2. 1

                                                                              You are thinking of gnupg. I agree gnupg is a usability nightmare. I don’t think PGP (RFC4880) makes much claims about user interactions (in the same way that the many x509 related RFC’s talk little about how users deal with tooling)

                                                                            3. 1

                                                                              Would you say PGP has a chance to be upgraded? I think there is a growing consensus that PGP’s crypto needs some fixing, and GPG’s implementation as well, but I am no crypto-people.

                                                                              1. 2

                                                                                Would you say PGP has a chance to be upgraded?

                                                                                I think there’s space for this, although open source (and standards in general) are also political to some extent. If the community doesn’t want to invest on improving PGP but rather replace it with $NEXTBIGTHING, then there is very little you can do. There’s also something to be said that 1) it’s easier when communities are more open to change and 2) it’s harder when big names at google, you-name-it are constantly bashing it.

                                                                                1. 2

                                                                                  Can you clarify where “big names at Cloudflare” are bashing PGP? I’m confused.

                                                                                  1. 1

                                                                                    Can you clarify where “big names at Cloudflare” are bashing PGP? I’m confused.

                                                                                    I actually can’t, I don’t think this was made in any official capacity. I’ll amend my comment, sorry.

                                                                            1. 1

                                                                              I noticed that, when you install FreeBSD on DigitalOcean, it comes with some DigitalOcean-specific tools preinstalled. Since these are written in Python, the FreeBSD image you get already has pkg, python3x and a handful of other things already installed.

                                                                              If this were your first time using FreeBSD, you’d be fooled into thinking that this is how FreeBSD works: a clean install gives you a package manager and some runtimes such as Python, and you manage these through pkg.

                                                                              I don’t use DigitalOcean anymore and this experience was a couple of years ago, so maybe things have changed by now.

                                                                              1. 23

                                                                                I have a nagging feeling that I’m missing something here. It doesn’t seem right that such an obvious solution would have been left on the table, by everyone, for decades.

                                                                                Browser vendors.

                                                                                They’re Why We Can’t Have Nice Things; they refuse to add UI for basic HTTP and TLS-level features and force everyone to roll their own replacements at a higher level that tend to suck. Imagine if browsers implemented HTTP Basic Auth in a way that didn’t look like it was straight out of 1996 … how much pointless code wouldn’t need to be written.
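                                                                                For reference, this is roughly everything HTTP Basic Auth amounts to on the wire (a Python sketch of my own, per RFC 7617): the protocol side is trivial, and only the browser UI around it is stuck in 1996.

```python
import base64

# HTTP Basic Auth: the credentials are just "user:password",
# base64-encoded into a single Authorization header.
def basic_auth_header(user: str, password: str) -> str:
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("alice", "s3cret"))
# → Basic YWxpY2U6czNjcmV0
```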

                                                                                1. 8

                                                                                  Managing a custom CA and renewal and everything is a serious pain worth avoiding. Especially when dealing with errors for non-technical users. UX is terrible and keeping a secret file was asking too much of many people. That’s why https + username + password won in e-commerce. Lowest friction.

                                                                                  Enterprises are different of course. Less users to worry with. Centralized specific documentation for a reduced set of supported client software.

                                                                                  1. 1

                                                                                    Especially when dealing with errors for non-technical users. UX is terrible and keeping a secret file was asking too much of many people.

                                                                                    I can see why you wouldn’t want to use your Mozilla hat on this post..

                                                                                    1. 2

                                                                                      what do/did you see? Curious to hear if our understanding aligns.

                                                                                      Parts of this thread are about browsers but my experience and my comment isn’t. I co-managed a tiny CA with computer security students about 10 years ago. Failure mode was hard and breaking assignment labs is a lot of bad stress. I don’t wanna know what it’s like with paying customers.

                                                                                      I haven’t done any crypto related stuff at Mozilla. Mostly focusing on web/browser security. Doesn’t really make sense to use the hat, don’t you think?

                                                                                      1. 3

                                                                                        You state that https + username + password won because they’re the lowest friction, and you’re right. You also state that this is because of (among others) bad UX with other solutions. You’re also right there.

                                                                                        Bad UX is a browser problem; no browser has done any serious work on a generic authentication UX. Basic Authentication in Firefox still presents a dialog box that looks like it’s made in the 90s. Client side certificate management is cumbersome, and using client side certificates is hard. These are not technology problems, these are UX problems.

                                                                                        Our situation would be better, considering both security and UX, if browsers made authentication a first class citizen. Web developers would have it easier, users would have a more consistent experience and we would not have so many custom broken login implementations, because in that timeline letting the browser handle the authentication would have been the solution with lowest friction.

                                                                                        Because of this, I see browser vendors as a big part of the problem, hence my remark about you not wearing your hat. Mozilla made a step in the right direction a while ago when they announced Persona, but it’s been discontinued for longer than it has been alive now.

                                                                                        1. 1

                                                                                          Whatever blame you’re trying to throw, it won’t stick. I’m not your crypto/logins guy. Anyway you might wanna try WebAuthn to solve this properly? Doesn’t have the tracking issues too.

                                                                                  2. 4

                                                                                    “Why use existing layers as a basis to the layer above while we can replace layers below with extra layers put on top”

                                                                                    We are so much used to this scheme that it looks familiar everywhere we go.

                                                                                    1. 1

                                                                                      Agreed: it’s a “nice” solution from a system design standpoint, but sometimes IRL I re-open a browser window and 5 tabs each suddenly need my PIN, one after the other, or they never default to the right cert, &c. Plus, even when it works, it can be super laggy.

                                                                                      If the user agent was a more effective key agent, it would be great!

                                                                                    1. 1

                                                                                      My best guess about why OAuth took off while client certificates did not is that one can implement the client side of the former in JavaScript and run it in a browser, but probably not the latter.

                                                                                      1. 6

                                                                                        Every client implementation of OAuth which has no server-side component is leaking their secret key, which is a Bad Thing you are Not Supposed To Do.

                                                                                        1. 3

                                                                                          With OAuth2, app secret keys should not exist for public clients (no matter if web or native, keys can be extracted either way).

                                                                                          The old way of implementing public clients was implicit grant; now it’s PKCE (RFC 7636).
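                                                                                            A sketch of the PKCE half of that (my own illustration of RFC 7636, not anything from the thread): the public client generates a one-off verifier and sends only its SHA-256 digest up front, so no long-lived client secret ever ships in the app.

```python
import base64
import hashlib
import secrets

# Generate a PKCE code_verifier and its S256 code_challenge.
def make_pkce_pair() -> tuple[str, str]:
    # 32 random bytes, base64url without padding: a 43-char verifier.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # The challenge is the base64url-encoded SHA-256 digest of the verifier.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The client sends `challenge` (with code_challenge_method=S256) in the
# authorization request and reveals `verifier` only when exchanging the
# authorization code; the server recomputes the digest and compares.
```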

                                                                                          1. 2

                                                                                            I think it’s important to point out that this was a limitation of OAuth, but not a limitation of OAuth2, which is what most implementations use these days. I am assuming @jimdigriz is assuming OAuth2, and you OAuth 1.0. Are my assumptions correct?

                                                                                            *Edit—I am wrong here. I forgot that there’s a client secret possible for some OAuth2 flows as well.

                                                                                            1. -1

                                                                                              Not all client implementations do https://gitlab.com/jimdigriz/oauth2-worker

                                                                                              1. 10

                                                                                                This is a placebo of the worst order, a “security” library which is nothing of the sort. This will still leak your client secrets, it is logically impossible not to if you’re completing the OAuth flow from the client side.

                                                                                                1. -1

                                                                                                  So instead of walking through the methodology in there, you just decided to make wild statements.

                                                                                                  Marvelous.

                                                                                                  1. 9

                                                                                                    I did read it, and I understand what it’s trying to do, and what it’s not doing is keeping your client secret a secret. It’s keeping it from other JavaScript, but anyone on your page can find the secret by popping up the network panel and watching the request go out. Try it yourself.

                                                                                                    This library should come with a big bold header telling you that it does absolutely nothing at all to keep your client secret safe from an adversary who knows where the “view source” button is, and that it would be impossible to do so effectively.

                                                                                                    Client side security, isn’t. This is a cold, hard fact.

                                                                                                    1. -2

                                                                                                      Pray, do tell, what server-side component would fix this attack vector?

                                                                                                      1. 11

                                                                                                        If you put it on the server, then you don’t have to send it to the client! The whole point of the secret token is that you keep it on your server, behind your firewall, and then issue the request to obtain the exchange token there. NOT in code that you’ve sent to the client to do whatever they want with.

                                                                                                        CLIENT SIDE CODE IS NOT SECURE.
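                                                                                                        To make this concrete, here is a hedged sketch (my own, not any particular library’s API) of the authorization-code exchange done server-side. The endpoint URL, environment variable names, and redirect URI are placeholders; the point is that the secret is read from the server’s environment and never appears in code shipped to the browser:

                                                                                                        ```javascript
                                                                                                        // Hypothetical server-side helper: builds the OAuth2 authorization-code
                                                                                                        // exchange request. client_secret comes from the server environment and
                                                                                                        // stays behind your firewall.
                                                                                                        function buildTokenRequest(code) {
                                                                                                          return new URLSearchParams({
                                                                                                            grant_type: 'authorization_code',
                                                                                                            code,
                                                                                                            client_id: process.env.OAUTH_CLIENT_ID,
                                                                                                            client_secret: process.env.OAUTH_CLIENT_SECRET, // never sent to the browser
                                                                                                            redirect_uri: 'https://example.com/oauth/callback', // placeholder
                                                                                                          });
                                                                                                        }

                                                                                                        // Server-side only, e.g. in your callback route handler:
                                                                                                        // const res = await fetch('https://provider.example/token', {
                                                                                                        //   method: 'POST',
                                                                                                        //   body: buildTokenRequest(codeFromQueryString),
                                                                                                        // });
                                                                                                        ```

                                                                                                        The browser only ever sees the authorization code; the request carrying client_secret is made server-to-server, so the network panel reveals nothing.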

                                                                                                        1. 1

                                                                                                          What does the client use to authenticate itself? What is it that the client communicates to authenticate itself? Your statement of ‘put the token on the server’ is nonsensical, as the problem is authenticating a user agent.

                                                                                                          No one gives a damn what an untrusted client does with a token; it is authorised for whatever it’s allowed to do, no more, no less.

                                                                                                          You also seem to be trying to add substance to your argument by making out that this secret works like a root credential, which is an implementation detail and irrelevant.

                                                                                                          Your arguments read as if you need to swot up on your AAA.

                                                                                                          1. 8

                                                                                                            I don’t follow your line of questioning, but from the linked repo.

                                                                                                            client_secret [optional and not recommended]: your application secret

                                                                                                            SPAs are considered a public (‘untrusted’) client, as the secret would have to be published, making it no longer a secret and pointless

                                                                                                            This is what you risk losing, not the access token. The application secret has to be stored somewhere, and if there is no server-side component then it’s on the client, which, like the documentation says, is insecure.

                                                                                                            1. 4

                                                                                                              No one gives a damn what an untrusted client does with a token; it is authorised for whatever it’s allowed to do, no more, no less.

                                                                                                              Do I understand correctly that you actually don’t care that an untrusted client has access (even limited) without you knowing it?

                                                                                                              Why do you need oauth2 then?

                                                                                                              1. -4

                                                                                                                God, I hope you aren’t expected to secure any production code.

                                                                                                                1. -4

                                                                                                                  God I pity those that have to work with you. Rockstar coder I assume?

                                                                                                                  1. 20

                                                                                                                    Folks, let’s all try and be civil and charitable here.

                                                                                                2. 2

                                                                                                  Over in the 802.1X world, EAP-TLS (client certificates) is mostly a pain not at initial provisioning, but at renewal of the certificates and the UX around that. Though a mostly awful standard, EAP-FAST tried to address this (including provisioning), and now things like TEAP are coming out of the woodwork.

                                                                                                  1. 1

                                                                                                    EAP-TLS, of course, is “fixed” by having irresponsibly long validity times on the certificates, in the hope that the user tosses the device before the certificate expires… and then you hope your users come to you for a new certificate instead of reusing the one from their old device…

                                                                                                1. 0

                                                                                                  I sort of hate the GDPR for this cookie-consent thing. Pre-GDPR, only EU businesses had these invasive, (most of the time) ‘whatever you say, we will set cookies unless you disable cookies in the browser’ banners. Now lots of websites do this.

                                                                                                  As a non-EU citizen, I am not interested in being forced to see an EU-law-specific notice when I don’t really have a choice. (In my jurisdiction, cookie notices are required in the Privacy Policy, but that doesn’t require a banner.)

                                                                                                  1. 9

                                                                                                    The thing is, the GDPR is not invasive. It is the illegal implementations from sites that want to force you to say “yes” to them selling your data to third parties for non-essential purposes (such as targeted advertising) that make it invasive.

                                                                                                    1. 1

                                                                                                      You really should read the article linked at the very top of this page.

                                                                                                    1. 16

                                                                                                      Please correct me if I’m wrong, but AFAIK you don’t need to ask for cookie consent for core site functionality, such as remembering user settings.

                                                                                                      The “cookie law” isn’t about using cookies per se, but about using cookies (or any other identifier) for tracking people.

                                                                                                      1. 7

                                                                                                        That’s true. You’re not required to get cookie consent for necessary functionality, such as session cookies that, for instance, hold the items in the shopping cart.

                                                                                                        1. 4

                                                                                                          remembering user settings.

                                                                                                          Presumably, it depends on whether your user-settings cookie contains data like “dark mode on” or whether it contains a user ID which allows you to load the user’s settings from your database. While they both achieve the same functionality, the latter tracks the user, while the former does not. I don’t know whether you could justify the latter as “necessary”, given that the former is possible.

                                                                                                          1. 4

                                                                                                            From my reading here https://gdpr.eu/cookies/ it says:

                                                                                                            “Receive users’ consent before you use any cookies except strictly necessary cookies.

                                                                                                            Strictly necessary cookies — These cookies are essential for you to browse the website and use its features, such as accessing secure areas of the site. Cookies that allow web shops to hold your items in your cart while you are shopping online are an example of strictly necessary cookies. These cookies will generally be first-party session cookies.

                                                                                                            Preferences cookies — Also known as “functionality cookies,” these cookies allow a website to remember choices you have made in the past, like what language you prefer, what region you would like weather reports for, or what your user name and password are so you can automatically log in.”

                                                                                                            So you would have to ask for consent for “preferences” cookies, but you don’t need to ask as soon as someone enters the site, the way you do for marketing/tracking cookies. You can, for instance, have a “remember me” box and let users check it if they want to save settings such as dark mode, or when they try to log in.

                                                                                                            1. 3

                                                                                                              If you keep the dark mode setting client-side, you can just store it in local storage and never send it to the server. You don’t need opt-in for that.
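                                                                                                              A minimal sketch of that approach (the function names are my own): the preference lives in a Storage-like object such as window.localStorage and is never transmitted anywhere:

                                                                                                              ```javascript
                                                                                                              // Hypothetical sketch: a dark-mode preference kept entirely client-side.
                                                                                                              // `storage` is any Storage-like object; in a browser you'd pass
                                                                                                              // window.localStorage. Nothing here is ever sent to a server.
                                                                                                              function setDarkMode(storage, enabled) {
                                                                                                                storage.setItem('darkmode', enabled ? '1' : '0');
                                                                                                              }

                                                                                                              function isDarkMode(storage) {
                                                                                                                return storage.getItem('darkmode') === '1';
                                                                                                              }

                                                                                                              // In a browser you'd then apply it on page load, e.g.:
                                                                                                              // document.documentElement.classList.toggle('dark', isDarkMode(localStorage));
                                                                                                              ```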

                                                                                                              1. 4

                                                                                                                GDPR is technology-agnostic. Don’t assume that if you use some other mechanism that you’re in the clear. Local storage can still hold persistent identifiers that your website can access.

                                                                                                                1. 3

                                                                                                                  I agree, but I said “never send it to the server”. I don’t think anything done purely client-side is subject to consent in GDPR.

                                                                                                                  1. 1

                                                                                                                    I wouldn’t bet on a technicality convincing any lawyers.

                                                                                                                    • If you embed third-party scripts on your page, you’re sharing localStorage with them, and you have the responsibility to ensure they won’t grab it. XSS on your site may need to be treated the same way as server-side data breaches, because it only matters what data leaked, not how.

                                                                                                                    • If you make use of the data, you’re still processing it. Even if you don’t send the data as-is to the server, it may still have privacy implications, e.g. if you choose which ads user gets served based on localStorage data.

                                                                                                                    1. 1

                                                                                                                      Both of those points are interesting, but I wonder about the implications on native, desktop applications then…

                                                                                                                      I’ve never had a video game ask me for consent to autosave, for instance.

                                                                                                                      1. 1

                                                                                                                        Games come with heavy EULAs, especially if any account or on-line component is involved (e.g. DRM and anti-cheat rootkits have access to lots of private information, so they must have made you “agree” to this).

                                                                                                                2. 2

                                                                                                                  That only works if the user runs your JS

                                                                                                                  1. 2

                                                                                                                    Yes. It doesn’t really matter if the dark mode setting isn’t saved (or even cannot be used) for users who disable JS…

                                                                                                                    1. 3

                                                                                                                      You’re right. And it also doesn’t matter if that wheelchair ramp actually works.

                                                                                                                      1. 1

                                                                                                                        I don’t know of an accessibility issue that would be solved by a native dark mode and where the user cannot interpret JavaScript…

                                                                                                                        Honestly, accessibility matters, but this really sounds like a straw man argument.

                                                                                                                        1. 2

                                                                                                                          Correct me if I’m wrong, but what I interpret your comment to say is:

                                                                                                                          I don’t think it is important for users without JS to be able to save their site preferences.

                                                                                                                          1. 1

                                                                                                                            Yes, but today disabling JS is a choice, so it’s not really fair to compare it to a disability.

                                                                                                                            How many users can still afford to disable JS entirely anyway, in a time when so many popular websites are single page applications?

                                                                                                                            It’s 2020, and CLI browsers, screen readers, crawlers… can all run JS. (And they don’t care about a dark mode anyway.)

                                                                                                                            For what it’s worth, I’ve always thought that styling was a client-side issue. 20 years ago we had alternate stylesheets, and Mozilla never fixed this issue, so people already relied on cookies + client-side JS to solve it. The only difference in using localStorage is that the information is never sent to the server, where it isn’t used anyway.

                                                                                                                            1. 2

                                                                                                                              Yes, but today disabling JS is a choice so it’s not really fair to compare it to disability.

                                                                                                                              Go get yourself an Obamaphone, or a $25 phone from Best Buy, and use it as your primary device for a week. Then come back here and say that again.

                                                                                                                              How many users can still afford to disable JS entirely anyway, in a time when so many popular websites are single page applications?

                                                                                                                              That’s what I’m saying here. You could be a “popular website” which makes itself inaccessible, or you could go the extra mile and build that ramp.

                                                                                                                              JS can also break due to incomplete transfers, broken cached JS files, errors in the JS itself, or unfinished loading on slow connections. Do you really want to shut out all these users, when they may be depending on your service the most?

                                                                                                                  2. 2

                                                                                                                    I think it’s okay to send the individual preferences to the server, but not an ID to look up preferences server-side, because that is something you can individually track the user with.

                                                                                                                    For example, if both me and catwell prefer the dark site with language Swahili, you can’t distinguish us if these settings are set directly in the cookies; same settings, same cookies. But if you store them in a database with user ID, suddenly I’m user 42 and catwell is user 1337, and you can distinguish between us.

                                                                                                                    1. 1

                                                                                                                      I think it’s okay to send the individual preferences to the server, but not an ID to look up preferences server-side, because that is something you can individually track the user with.

                                                                                                                      Is there really a difference?

                                                                                                                      1. 1

                                                                                                                        Is there really a difference?

                                                                                                                        Yes. Definitely. darkmode=1 vs darkmode=0 has one bit of entropy, so you can’t track me with it. SID=94898743637843438653262 is many bytes of entropy, and I don’t know what you use it for. Is that identifier only matched to a darkmode setting or is it a giant GDPR violation server side?

                                                                                                                3. 2

                                                                                                                  Or just use CSS media queries and let people configure it in their browser, instead of having to figure out where every website’s special dark-mode toggle button is.

                                                                                                                  @media (prefers-color-scheme: dark) and @media (prefers-color-scheme: light)

                                                                                                              1. 1

                                                                                                                I’m checking the source code, but it doesn’t seem so easy to extract the actual crypto code for reuse. There’s copied and obfuscated code such as the AES implementation. Is this more a collection of earlier work in a Bootstrap template than a crypto tool suite?

                                                                                                                I’d be interested to see a discussion how a pure-JS implementation (I guess this is one?) performs against WebCrypto. I’ve noticed that WebCrypto does not support streaming encryption for one; you input a string and out comes another string.

                                                                                                                1. 4

                                                                                                                  There’s of course this: https://github.com/jedisct1/libsodium.js/. I’d trust that much more than any other client side “crypto” implementation. No need to use WebCrypto either because it is a bad idea.

                                                                                                                  1. 1

                                                                                                                    Enough of this. If someone chooses primitives that experts agree are a good idea, and uses them in a way that experts agree is acceptable, then WebCrypto is perfectly fine. I use libsodium because it’s interoperable between the web, servers, and elsewhere, but using WebCrypto with an expertly chosen configuration is not hard with all of the information out there now. It also needs to be mentioned that in cases where you need to encrypt or decrypt a large amount of data in one go, WebCrypto is higher-performance than libsodium.js. There are tradeoffs.
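                                                                                                                    As an illustration (my own sketch of one commonly recommended configuration, not anything endorsed in this thread): AES-256-GCM via SubtleCrypto, with a fresh random 96-bit IV per message:

                                                                                                                    ```javascript
                                                                                                                    // Hedged sketch: authenticated encryption with WebCrypto's SubtleCrypto.
                                                                                                                    // AES-256-GCM with a random 96-bit IV generated per message; never reuse
                                                                                                                    // an IV with the same key.
                                                                                                                    async function encrypt(key, plaintextBytes) {
                                                                                                                      const iv = crypto.getRandomValues(new Uint8Array(12));
                                                                                                                      const ciphertext = await crypto.subtle.encrypt({ name: 'AES-GCM', iv }, key, plaintextBytes);
                                                                                                                      return { iv, ciphertext }; // the IV is not secret; store it alongside the ciphertext
                                                                                                                    }

                                                                                                                    async function decrypt(key, { iv, ciphertext }) {
                                                                                                                      // Rejects if the ciphertext or its GCM authentication tag was tampered with.
                                                                                                                      return crypto.subtle.decrypt({ name: 'AES-GCM', iv }, key, ciphertext);
                                                                                                                    }
                                                                                                                    ```

                                                                                                                    The key itself would come from crypto.subtle.generateKey or a key-derivation step; none of this changes the thread’s main point that the code and keys visible to the client are visible to the user.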

                                                                                                                    1. 1

                                                                                                                      chooses primitives that experts agree are a good idea

                                                                                                                      uses them in a way that experts agree is acceptable

                                                                                                                      with an expertly chosen configuration

                                                                                                                      with all of the information out there

                                                                                                                      Okay then.

                                                                                                                1. 3

                                                                                                                  A nice guide, but it makes the installation process complicated by doing too much. You don’t need mongod_enable=YES, and you don’t need the lines in /etc/fstab (or for that file to exist at all). In my experience, the installation is no harder than:

                                                                                                                  pkg install unifi5 && sysrc unifi_enable=YES && service unifi start
                                                                                                                  

                                                                                                                  I had done this myself a while ago. A caveat with my setup, for Unifi at least, is that I run jails without IPv4 connectivity; the jails don’t even have a loopback.

                                                                                                                  % ping 127.0.0.1
                                                                                                                  ping: ssend socket: Protocol not supported
                                                                                                                  

                                                                                                                  This causes the connection to the local MongoDB service to fail, because it’s hardcoded to use 127.0.0.1 there. Otherwise it works fine on my IPv6-only setup, so as long as I give it access to an IPv4 loopback address for MongoDB, it works.

                                                                                                                  The controller listens on IPv6, but the access point does not support IPv6 as far as I can see. A TCP proxy between the AP and the controller, translating between IPv4 and IPv6, fixes that. I forward ports 80, 443 and 8080 to the controller using sniproxy.

                                                                                                                  1. 1

                                                                                                                    Maybe they wanted to make their movie longer instead of finishing after several seconds with these 3 commands you mentioned :)

                                                                                                                    As for ping(8) working in FreeBSD jails, it’s pretty simple. Just set security.jail.allow_raw_sockets to 1 on the FreeBSD host and it will work.

                                                                                                                    # sysctl security.jail.allow_raw_sockets=1
                                                                                                                    

                                                                                                                    To make it permanent put it into the /etc/sysctl.conf file.

                                                                                                                  1. 3

                                                                                                                    The downside for BSD is tighter coupling between userland and the kernel. Suppose a bug is found in BSD’s ifconfig and fixed in a newer BSD release. If you cannot backport the fix (for whatever reason), then you have to update your whole BSD. On Linux, you just use the new ifconfig.

                                                                                                                    1. 11

                                                                                                                      According to the article:

                                                                                                                      it’s worth noting that OpenBSD and NetBSD do not have these libraries because the kernel interface itself is highly stable anyways. FreeBSD even provides a COMPAT layer in the rare cases that an older binary fails to run on modern versions of FreeBSD.

                                                                                                                      1. 7

                                                                                                                        The updated BSD is compatible with binaries built for the older BSD, though.