1. 7

    I would rather have seen the HardenedBSD code just get merged back into FreeBSD. I’m sure there are loads of reasons, but I’ve never managed to find them; their website doesn’t make that clear. I imagine it’s mostly for non-technical reasons.

    That said, it’s great that HardenedBSD is now set up to live longer, and I hope it has a great future. It occupies a niche that otherwise only OpenBSD fills, and it’s good to see some competition/diversity in this space!

    1. 13

      Originally, that’s what HardenedBSD was meant for: simply a place for Oliver and me to collaborate on our clean-room reimplementation of grsecurity for FreeBSD. All features were to be upstreamed. However, our attempt to upstream ASLR took two years. That attempt failed and resulted in a lot of burnout with the upstreaming process.

      HardenedBSD still attempts to upstream a few things here and there, but usually simpler things: we contributed a lot to the new bectl jail command, and we’ve hardened a couple of aspects of bhyve, even giving it the ability to work in a jailed environment.

      The picture looks a bit different today. HardenedBSD now aims to give the FreeBSD community more choices. Given grsecurity’s/PaX’s inspiring history of pissing off exploit authors, HardenedBSD will continue to align itself with grsecurity where possible. We hope to perform a clean-room reimplementation of all publicly documented grsecurity features. And that’s only the start. :)

      edit[0]: grammar

      1. 6

        I’m sorry if this is a bad place to ask, but would you mind giving the pitch for using HardenedBSD over OpenBSD?

        1. 19

          I view any OS as simply a tool. HardenedBSD’s goal isn’t to “win users over.” Rather, it’s to perform a clean-room reimplementation of grsecurity. By using HardenedBSD, you get all the amazing features of FreeBSD (ZFS, DTrace, Jails, bhyve, Capsicum, etc.) with state-of-the-art and robust exploit mitigations. We’re the only operating system that applies non-Cross-DSO CFI across the entire base operating system. We’re actively working on Cross-DSO CFI support.
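
          To make that concrete, here’s a contrived sketch (my own illustration, not code from our tree) of the kind of type-confused indirect call that Clang’s CFI is designed to reject. Assuming a build with something like -fsanitize=cfi -flto -fvisibility=hidden, the second call below is checked at runtime and aborts instead of transferring control:

          #include <stdio.h>

          static int add(int a, int b) { return a + b; }

          int main(void) {
              /* A correctly typed indirect call: fine under CFI. */
              int (*op)(int, int) = add;
              printf("%d\n", op(2, 3));

              /* A type-confused indirect call, standing in for a function pointer
               * that an attacker has corrupted: under CFI's indirect-call check,
               * this is rejected at runtime rather than executed. */
              void (*hijacked)(const char *) = (void (*)(const char *))add;
              hijacked("boom");

              return 0;
          }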

          I think OpenBSD is doing interesting things with regards to security research, but OpenBSD has fundamental paradigms that may not be compatible with grsecurity’s. For example: by default, creating an RWX memory mapping with mmap(2) is not allowed on either HardenedBSD or OpenBSD. However, HardenedBSD takes this one step further: if a mapping has ever been writable, it can never be marked executable (and vice versa).

          On HardenedBSD:

          void *mapping = mmap(NULL, getpagesize(), PROT_READ | PROT_WRITE | PROT_EXEC, ...); /* The mapping is created, but RW, not RWX. */
          mprotect(mapping, getpagesize(), PROT_READ | PROT_EXEC); /* <- this will explicitly fail */
          
          munmap(mapping, getpagesize());
          
          mapping = mmap(NULL, getpagesize(), PROT_READ | PROT_EXEC, ...); /* <- Totally cool */
          mprotect(mapping, getpagesize(), PROT_READ | PROT_WRITE); /* <- this will explicitly fail */
          

          It’s the protection around mprotect(2) that OpenBSD lacks. Theo’s disinclined to implement such a protection, because users will need to toggle a flag on a per-binary basis for those applications that violate the above example (web browsers like Firefox and Chromium being the most notable examples). OpenBSD implemented WX_NEEDED relatively recently, so my thought is that users could use the WX_NEEDED toggle to disable the extra mprotect restriction. But, not many OpenBSD folk like that idea. For more information on exactly how our implementation works, please look at the section in the HardenedBSD Handbook on our PaX NOEXEC implementation.
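
          If you want to play with this yourself, here’s a rough, self-contained version of the snippet above. Treat it as a sketch: the MAP_ANON/MAP_PRIVATE flags and the error reporting are just the obvious way to flesh out the elided arguments, and the exact errno you see may differ.

          #include <sys/mman.h>
          #include <stdio.h>
          #include <unistd.h>

          int main(void) {
              size_t len = (size_t)getpagesize();

              /* RWX is requested, but W and X are never granted together:
               * the mapping comes back RW. */
              void *mapping = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC,
                                   MAP_ANON | MAP_PRIVATE, -1, 0);
              if (mapping == MAP_FAILED) { perror("mmap"); return 1; }

              /* The mapping has been writable, so it can never become executable. */
              if (mprotect(mapping, len, PROT_READ | PROT_EXEC) == -1)
                  perror("mprotect W->X (expected to fail)");

              munmap(mapping, len);

              /* Executable from the start is fine... */
              mapping = mmap(NULL, len, PROT_READ | PROT_EXEC,
                             MAP_ANON | MAP_PRIVATE, -1, 0);
              if (mapping == MAP_FAILED) { perror("mmap"); return 1; }

              /* ...but it can never be made writable afterwards. */
              if (mprotect(mapping, len, PROT_READ | PROT_WRITE) == -1)
                  perror("mprotect X->W (expected to fail)");

              munmap(mapping, len);
              return 0;
          }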

          I cannot stress strongly enough that the above example wasn’t given to be argumentative. Rather, I wanted to give an example of diverging core beliefs. I have a lot of respect for the OpenBSD community.

          Even though I’m the co-founder of HardenedBSD, I’m not going to say “everyone should use HardenedBSD exclusively!” Instead, use the right tool for the job. HardenedBSD fits 99% of the work I do. I have Win10 and Linux VMs for those few things not possible in HardenedBSD (or any of the BSDs).

          1. 3

            So how will JITs work on HardenedBSD? Is the sequence:

            mmap(PROT_WRITE);
            // write data
            mprotect(PROT_EXEC);
            

            allowed?

            1. 5

              By default, migrating a memory mapping from writable to executable is disallowed (and vice-versa).

              HardenedBSD provides a utility that users can use to tell the OS “I’d like to disable exploit mitigation just for this particular application.” Take a look at the section I linked to in the comment above.

          2. 9

            Just to expound on the different philosophies: OpenBSD would never bring ZFS, Bluetooth, etc. into the OS, something HardenedBSD does.

            OpenBSD has a focus on minimalism, which is great from a maintainability and security perspective. Sometimes that means you miss out on things that could make your life easier. That said OpenBSD still has a lot going for it. I run both, depending on need.

            If I remember right, just the ZFS sources by themselves are larger than the entire OpenBSD kernel sources, which gives ZFS a LOT of attack surface. That’s not to say ZFS isn’t awesome, it totally is, but if you don’t need ZFS for a particular compute job, not including it gives you a much smaller surface for bad people to attack.

            1. 5

              If I remember right, just the ZFS sources by themselves are larger than the entire OpenBSD kernel sources, which gives ZFS a LOT of attack surface.

              I would find a fork of HardenedBSD without ZFS (and perhaps DTrace) very interesting. :)

              1. 3

                Why fork? Just don’t load the kernel modules…

                1. 4

                  There have been quite a number of changes to the kernel to accommodate ZFS. It’d be interesting to see whether the kernel could be made simpler with ZFS fully removed.

                  1. 1

                    You may want to take a look at DragonFly BSD then.

              2. 4

                Besides being large, I think what makes me slightly wary of ZFS is that it also has a large interface with the rest of the system, and was originally developed in tandem with Solaris/Illumos design and data structures. So any OS that diverges from Solaris in big or small ways requires some porting or abstraction layer, which can result in bugs even when the original code was correct. Here’s a good writeup of such an issue from ZFS-On-Linux.

        1. 3

          This seems like the kind of thing that would be a great fit for crowdsourcing, made fetchable/searchable via ISBN (similar to IMDb or what have you), and maybe converted to a standard format of some kind.
          It might even be useful for actual libraries, too.

          1. 3

            Actually, Google Books already did that, but it’s somehow limited to Google Books only; there’s no public API, AFAIK.

          1. 11

            I’m one of the VerneMQ developers, so if you have any questions about VerneMQ I’d be happy to try to answer.

            1. 2

              I saw you addressed the difference with RabbitMQ. Do you know about Malamute (the ZeroMQ broker)? How would it compare against VerneMQ? I really like the philosophy behind ZeroMQ, and I use Malamute internally, but it’s a bit of a pain to install outside of Linux and I’ve hit bugs that left me with a feeling of “I’m the first user of this”.

              1. 1

                I haven’t heard about Malamute before, so can’t say anything about it, I’m afraid. I did work with ZeroMQ briefly some years back and it seemed pretty nice. I’ll have to check out Malamute!

              2. 2

                What are the pros/cons of VerneMQ vs Mosquitto?

                1. 3

                  I guess what’s a pro and what’s a con is in the eye of the beholder. The biggest difference is that VerneMQ is built from the start to be a distributed broker, while Mosquitto is a stand-alone broker. The clustering makes VerneMQ horizontally scalable, so that would be a pro if you need that. Another difference, which may be an important pro or con depending on what one fancies, is that Mosquitto is written in C and hence plugins have to be written in C (correct me if I’m wrong here). VerneMQ plugins can be written in Erlang, Elixir, or Lua, or as HTTP endpoints. There are of course lots of other details, but those are, I think, the main ones.

                  1. 2

                    emqtt is another one I have run across a few times (haven’t tried it out yet).

                1. 3

                  Illumos is neat for sure. There are other BSDs out there too.

                  1. 3

                    Trivia related to computer graphics + early Linux: Bruce Perens was the leader of Debian for a while, and also worked at Pixar for 12 years.

                    I assume that Pixar was an early adopter of Linux, because otherwise they would have had to pay commercial OS licensing fees for the hundreds or thousands of machines they used to render movies.

                    Although I read Ed Catmull’s recent book and I don’t think he mentioned Linux? That book did mention the NYIT graphics lab.

                    https://en.wikipedia.org/wiki/Bruce_Perens

                    https://en.wikipedia.org/wiki/New_York_Institute_of_Technology_Computer_Graphics_Lab

                    1. 2

                      I vaguely recall some news around 2003 (I think it was) about Pixar switching from Sun to Intel hardware, and porting RenderMan.

                      1. 1

                        Sun? I know they bought a ton of SGI Octanes for Toy Story.

                        1. 3

                          Found this: https://www.cnet.com/news/pixar-switches-from-sun-to-intel/
                          It may have been what I was recalling.

                          Maybe they used SGI before that?

                          1. 1

                            Cool! I didn’t know that.

                            You are probably right - buying SGI in the 2000s isn’t likely a smart move ;)

                          2. 2

                            This story said they used SGI for desktops and Suns for rendering.

                            Also for @trousers.

                            1. 2

                              This story said they used SGI for desktops and Suns for rendering.

                              Also for @trousers.

                              They used Suns for trousers? Sparc64 pants? A novel usecase for sure. ;)

                              I kid, I kid. Thanks for the link. :)

                              1. 3

                                They were rendering them in the movie. Had to get accurate lighting, ruffling, and so on. The geek producing it spent so much on the hardware they couldn’t afford all the actors. Show got cancelled.

                                Many investors now suspect the Trouser Tour documentary was a ruse devised so the producer could play with a bunch of SGI and Sun boxes. Stay tuned for updates.

                      1. 3

                        We are. Especially interested in devops candidates (aws/terraform/saltstack).

                        1. 1

                          Hi.

                          Do you have remote positions(US based)?

                          1. 1

                            We do have a few remote-only folks, a remote friendly atmosphere, and try our best to maintain a good culture of communication (remote teams and employees keeping us honest).
                            On-site is preferred (Eugene/Denver), but for the right candidate remote may be an option.

                        1. 8

                          To be fair, they should also mark as “Not Secure” any page running JavaScript.

                          Also, pointless HTTPS adoption might reduce content accessibility without blocking censorship.
                          (Disclaimer: this does not mean that you shouldn’t adopt HTTPS for sensitive content! It just means that using HTTPS should not be a matter of fashion: there are serious trade-offs to consider.)

                          1. 11

                            By adopting HTTPS you basically ensure that nasty ISPs and CDNs can’t insert garbage into your webpages.

                            1. [Comment removed by author]

                              1. 5

                                Technically, you authorize them (you sign actual paperwork) to get/generate a certificate on your behalf (at least this is my experience with Akamai). You don’t upload your own ssl private key to them.

                                1. 3

                                  Why on earth would I give anyone else my private key?

                                  1. 4

                                    Because it’s part of The Process. (Technical Dark Patterns, Opt-In without a clear way to Opt-Out, etc.)

                                    Because you’ll be laughed at if you don’t. (Social expectations, “received wisdom”, etc.)

                                    Because Do It Now. Do It Now. Do It Now. (Nagging emails. Nagging pings on social media. Nagging.)

                                    Lastly, of course, are Terms Of Service, different from the above by at least being above-board.

                                2. 2

                                  No.

                                  It protects against cheap man-in-the-middle attacks (like the one an ISP could do), but it can do nothing against CDNs that can identify you, as CDNs serve you JavaScript over HTTPS.

                                  1. 11

                                    With Subresource Integrity (SRI) page authors can protect against CDNed resources changing out from beneath them.
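
                                    For anyone who hasn’t used it: the integrity value is just a base64-encoded digest of the exact bytes you expect the CDN to serve, declared on the tag that loads the resource. Here’s a rough sketch of how you could compute one yourself, assuming OpenSSL 1.1+ (the file name is a placeholder; compile with -lcrypto):

                                    #include <openssl/evp.h>
                                    #include <stdio.h>

                                    int main(void) {
                                        /* "library.js" stands in for whatever file the CDN will serve. */
                                        FILE *f = fopen("library.js", "rb");
                                        if (f == NULL) { perror("fopen"); return 1; }

                                        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
                                        EVP_DigestInit_ex(ctx, EVP_sha384(), NULL);

                                        unsigned char buf[4096];
                                        size_t n;
                                        while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
                                            EVP_DigestUpdate(ctx, buf, n);
                                        fclose(f);

                                        unsigned char md[EVP_MAX_MD_SIZE];
                                        unsigned int mdlen = 0;
                                        EVP_DigestFinal_ex(ctx, md, &mdlen);
                                        EVP_MD_CTX_free(ctx);

                                        /* Base64-encode the raw digest; the result is what goes into the
                                         * integrity="sha384-..." attribute of the script or link tag. */
                                        unsigned char b64[2 * EVP_MAX_MD_SIZE];
                                        EVP_EncodeBlock(b64, md, (int)mdlen);
                                        printf("sha384-%s\n", (const char *)b64);
                                        return 0;
                                    }

                                    If the bytes the CDN actually serves ever differ from what was hashed, the browser refuses to run them, which is exactly the property you want from a third party you don’t fully control.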

                                    1. 1

                                      Yes, SRI mitigates some of the JavaScript attacks that I describe in the article, in particular the nasty ones from CDNs that exploit your trust in a harmless-looking website.
                                      Unfortunately, several others remain possible (just think of JSONP, or, even simpler, of the website itself colluding in the attack). Also, it needs widespread adoption to become a security feature: it should probably be mandatory, but at the very least browsers should mark as “Not Secure” any page downloading programs from CDNs without it.

                                      Where SRI could really help is with the accessibility issues described by Meyer: you can serve most page resources as cacheable HTTP resources if the content hash is declared in an HTTPS page!

                                    2. 3

                                      With SRI you can prevent the CDNs you use to load external JS scripts from manipulating the webpage.

                                      I also don’t buy the link that claims it reduces content accessibility; the link you provided above explains a problem that would be solved by simply using an HTTPS caching proxy (something a lot of corporate networks seem to have no problem operating, considering TLS 1.3 explicitly tries not to break those middleboxes).

                                      1. 4

                                        CDNs are man-in-the-middle attacks.

                                    3. 1

                                      As much as I respect Meyer, his point is moot. MitM HTTPS proxy servers have been set up for a long time, though usually for far more objectionable purposes than content caching. Some companies even made out-of-the-box HTTPS URL filtering their selling point. If people are willing or forced to trade security for accessibility but don’t know how to set up an HTTPS MitM proxy, that’s their problem, not webmasters’. We should be ready to teach those in need how to set it up, of course, but that’s about it.

                                      1. 0

                                        MitM HTTPS proxy servers have been set up for a long time, though usually for far more objectionable purposes than content caching. […] If people are willing or forced to trade security for accessibility but don’t know how to set up an HTTPS MitM proxy, that’s their problem, not webmasters’.

                                        Well… how can I say that… I don’t think so.

                                        Selling an HTTPS MitM proxy as a security solution is plain incompetence.

                                        Beyond the obvious risk that the proxy is compromised (you should never assume that it won’t be), which is pretty high in some places (not only in Africa… don’t be naive: a chain is only as strong as its weakest link), a transparent HTTPS proxy has an obvious UI issue: people do not realise that it’s unsafe.

                                        If browsers don’t mark them as “Not Secure” (and how could they?), the user will overlook the MitM risks, turning a security feature against the users’ real security and safety.

                                        Is this something webmasters should care about? I think so.

                                        1. 4

                                          Selling an HTTPS MitM proxy as a security solution is plain incompetence.

                                          Not sure how to tell you this, but companies have been doing this on their internal networks for a very long time, and it is basically standard operating procedure on every enterprise-level network I’ve seen. They create their own CA, generate an intermediate CA key and cert, and then put that on an HTTPS MITM transparent proxy that inspects all traffic going in and out of the network. The intermediate cert is added to the certificate store on all devices issued to employees so that it is trusted. By inspecting all of the traffic, they can monitor for external and internal threats, scan for exfiltration of trade secrets and proprietary data, and keep employees from watching porn at work. There is an entire industry around products that do this; Blue Coat and Barracuda are two popular examples.

                                          1. 5

                                            There is an entire industry around products that do this

                                            There is an entire industry around ransomware. But that does not mean it’s a security solution.

                                            1. 1

                                              It is; it’s just that the word “security” is better understood in terms of who is getting secured (or not) from whom.

                                              What you keep saying is that a MitM proxy does not protect the security of end users (that is, employees). What it does, however, in certain contexts like the ones described above, is help protect the organisation in which those end users operate. Arguably it does, because it is certainly more difficult to protect yourself from something you cannot see. If employees are seen as a potential threat (they are), then reducing their security can help you (the organisation) with yours.

                                              1. 1

                                                I wonder if you read the articles I linked…

                                                The point is that, in a context of unreliable connectivity, HTTPS dramatically reduces accessibility but doesn’t help against censorship.

                                                In this context, we need to give people both accessibility and security.

                                                An obvious solution is to give them cacheable HTTP access to content. We can fool the clients into trusting a MitM caching proxy, but since all we want is caching, this is not the best solution: it adds no security, only a false sense of security. Thus, in that context, you can improve users’ security by removing HTTPS.

                                                1. 1

                                                  I have read it, but more importantly, I worked in and built services for places like that for about 5 years (Uganda, Bolivia, Tajikistan, rural India…).

                                                  I am with you that an HTTPS proxy is generally best avoided, if for no other reason than that it grows the attack surface. I disagree that removing HTTPS increases security. It adds a lot more places and actors that can now negatively impact the user, in exchange for the user knowing this without being able to do much about it.

                                                  And that is even without going into which content is safe to be cached in a given environment.

                                                  1. 1

                                                    And that is even without going into which content is safe to be cached in a given environment.

                                                    Yes, this is the best objection I’ve read so far.

                                                    As always, it’s a matter of trade-offs. In a previous related thread I described how I would try to fix the issue in a way that lets people easily opt out and opt in.

                                                    But while I think it would be weird to remove HTTPS for an e-commerce cart or for a political forum, I think that most of Wikipedia should be served over both HTTP and HTTPS. People should be aware that HTTP pages are not secure (even though it all depends on your threat model…), but they should not be misled into thinking that pages going through a MitM proxy are secure.

                                          2. 2

                                              An HTTPS proxy isn’t incompetence; it’s industry standard.

                                              They solve a number of problems and are basically standard in almost all corporate networks with a minimum security level. They aren’t a weak link in the chain, since traffic in front of the proxy is HTTPS and traffic behind it stays on the local network, encrypted under a network-level CA (you can restrict CA capabilities via TLS cert extensions; there’s a fair number of useful ones that prevent compromise).

                                              Browsers don’t mark these as insecure because installing and using an HTTPS proxy requires full admin access to the device, at which point there is no reason to consider what the user is doing insecure.

                                            1. 2

                                                Browsers don’t mark these as insecure because installing and using an HTTPS proxy requires full admin access to the device, at which point there is no reason to consider what the user is doing insecure.

                                              Browsers bypass the network configuration to protect the users’ privacy.
                                              (I agree this is stupid, but they are trying to push this anyway)

                                                The point is: the user’s security is at risk whenever something that is not secure is presented to her as HTTPS (which stands for “HTTP Secure”). It’s a rather simple and verifiable fact.

                                              It’s true that posing a threat to employees’ security is an industry standard. But it’s not a security solution. At least, not for the employees.

                                              And, doing that in a school or a public library is dangerous and plain stupid.

                                              1. 0

                                                  Nobody is posing a threat to employees’ security here. A corporation can in this case be regarded as a single entity, so terminating SSL at the borders of the entity, similar to how a browser terminates SSL by showing the website on a screen, is fairly valid.

                                                  Schools and public libraries usually have their internet access filtered, yes, and that is usually made clear to the user beforehand (at least when I wanted access to either, I was in both cases told that the network is supervised and filtered), which IMO negates the potential security compromise.

                                                Browsers bypass the network configuration to protect the users’ privacy.

                                                  Browsers don’t bypass root CA configuration, core system configuration, network routing information, or network proxy configuration to protect a user’s privacy.

                                                1. 1

                                                    Schools and public libraries usually have their internet access filtered, yes, and that is usually made clear to the user beforehand [..] which IMO negates the potential security compromise.

                                                  Yes this is true.

                                                  If people are kept constantly aware of the presence of a transparent HTTPS proxy/MitM, I have no objection to its use instead of an HTTP proxy for caching purposes. Marking all pages as “Not Secure” is a good way to gain such awareness.

                                                    Browsers don’t bypass root CA configuration, core system configuration, network routing information, or network proxy configuration to protect a user’s privacy.

                                                  Did you know about Firefox’s DoH/CloudFlare affair?

                                                  1. 2

                                                      Yes, I’m aware of the “affair”. To my knowledge, the initial DoH experiment was localized and run on users who had enabled studies (opt-in). Both during the experiment and now, Mozilla has a contract with Cloudflare to protect user privacy during queries when DoH is enabled (which, to my knowledge, it isn’t by default). In fact, the problem ungleich is blogging about isn’t even slated for standard release yet, to my knowledge.

                                                      It’s plain old wrong in the bad kind of way; it conflates security maximalism with Mozilla’s mission to bring the maximum number of users privacy and security.

                                                    1. 1

                                                        TBH, I don’t know what you mean by “security maximalism”.

                                                        I think ungleich raises serious concerns that should be taken into account before shipping DoH to the masses.

                                                        Mozilla has a contract with Cloudflare to protect user privacy

                                                        It’s a bit naive for Mozilla to base the security and safety of millions of people worldwide on a contract with a company, however good they are.

                                                        AFAIK, even Facebook had a contract with its users.

                                                      Yeah.. I know… they will “do no evil”…

                                                      1. 1

                                                          Security maximalism disregards more common threat models and usability problems in favor of more security. I don’t believe the concerns are really concerns for the common user.

                                                          It’s a bit naive for Mozilla to base the security and safety of millions of people worldwide on a contract with a company, however good they are.

                                                        Cloudflare hasn’t done much that makes me believe they will violate my privacy. They’re not in the business of selling data to advertisers.

                                                          AFAIK, even Facebook had a contract with its users

                                                        Facebook used Dark Patterns to get users to willingly agree to terms they would otherwise never agree on, I don’t think this is comparable. Facebook likely never violated the contract terms with their users that way.

                                                        1. 1

                                                            Security maximalism disregards more common threat models and usability problems in favor of more security. I don’t believe the concerns are really concerns for the common user.

                                                          You should define “common user”.
                                                            If you mean the politically inept who are happy to be easily manipulated as long as they are given something to say and retweet… yes, they have nothing to fear.
                                                            The problem is for those people who are actually useful to society.

                                                          Cloudflare hasn’t done much that makes me believe they will violate my privacy.

                                                          The problem with Cloudflare is not what they did, it’s what they could do.
                                                          There’s no reason to give such power to a single company, located near all the other companies that are currently centralizing the Internet already.

                                                          But my concerns are with Mozilla.
                                                            They are trusted by millions of people worldwide. Me included. But actually, I’m starting to think they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.

                                                          1. 1

                                                            So in your opinion, the average user does not deserve the protection of being able to browse the net as safe as we can make it for them?

                                                            Just because you think they aren’t useful to society (and they are, these people have all the important jobs, someone isn’t useless because they can’t use a computer) doesn’t mean we, as software engineers, should abandon them.

                                                            There’s no reason to give such power to a single company, located near all the other companies that are currently centralizing the Internet already.

                                                            Then don’t use it? DoH isn’t going to be enabled by default in the near future and any UI plans for now make it opt-in and configurable. The “Cloudflare is default” is strictly for tests and users that opt into this.

                                                              they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.

                                                            You mean safe because everyone involved knows what’s happening?

                                                            1. 1

                                                              I don’t believe the concerns are really concerns for the common user.

                                                              You should define “common user”.
                                                                If you mean the politically inept who are happy to be easily manipulated…

                                                              So in your opinion, the average user does not deserve the protection of being able to browse the net as safe as we can make it for them?

                                                                I’m not sure if you are serious or just pretending not to understand to cover for your lack of arguments.
                                                                Let’s assume the former… for now.

                                                                I’m saying the concerns raised by ungleich are serious and could affect any person who is not politically inept. That’s obviously because anyone politically inept is unlikely to be affected by surveillance.
                                                              That’s it.

                                                                they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.

                                                              You mean safe because everyone involved knows what’s happening?

                                                              Really?
                                                                Are you sure everyone understands what a MitM attack is? Are you sure every employee understands that their system administrators can see the mail they read on Gmail? I think you don’t have much experience with users, and I hope you don’t design user interfaces.

                                                              A MitM caching HTTPS proxy is not safe. It can be useful for corporate surveillance, but it’s not safe for users. And it extends the attack surface, both for the users and the company.

                                                              As for Mozilla: as I said, I’m just not sure whether they deserve trust or not.
                                                              I hope they do! Really! But it’s really too naive to think that a contract is enough to bind a company more than a subpoena. And they ship WebAssembly. And you have to edit about:config to disable JavaScript
                                                              All this is very suspect for a company that claims to care about users’ privacy!

                                                              1. 0

                                                                I’m saying the concerns raised by ungleich are serious and could affect any person who is not politically inept.

                                                                I’m saying the concerns raised by ungleich are too extreme and should be dismissed on grounds of being not practical in the real world.

                                                                  Are you sure everyone understands what a MitM attack is?

                                                                  An attack requires an adversary, the evil one. An HTTPS caching proxy isn’t evil or an enemy; you have to opt into this behaviour. It is not an attack, and I think it’s not fair to characterise it as such.

                                                                  Are you sure every employee understands that their system administrators can see the mail they read on Gmail?

                                                                Yes. When I signed my work contract this was specifically pointed out and made clear in writing. I see no problem with that.

                                                                And it extends the attack surface, both for the users and the company.

                                                                And it also enables caching for users with less than stellar bandwidth (think third world countries where satellite internet is common, 500ms ping, 80% packet loss, 1mbps… you want caching for the entire network, even with HTTPS)

                                                                And they ship WebAssembly.

                                                                  And? I have no concerns about WebAssembly. It’s not worse than obfuscated JavaScript. It doesn’t enable anything that wasn’t possible before via asm.js. The post you linked is another security-maximalist opinion piece with few factual arguments.

                                                                And you have to edit about:config to disable JavaScript…

                                                                Or install a half-way competent script blocker like uMatrix.

                                                                All this is very suspect for a company that claims to care about users’ privacy!

                                                                  I think it’s understandable for a company that both cares about users’ privacy and doesn’t want a market share of “only security maximalists”, also known as 0%.

                                                                1. 1

                                                                  An attack requires an adversary, the evil one.

                                                                    According to this argument, you don’t need HTTPS as long as you don’t have an enemy.
                                                                  It shows very well your understanding of security.

                                                                    The attackers described in a threat model are potential enemies. Your security depends on how well you avoid or counter potential attacks.

                                                                    I have no concerns about WebAssembly.

                                                                  Not a surprise.

                                                                    Evidently you have never had to debug either obfuscated JavaScript or an optimized binary (without sources or debug symbols).

                                                                    Trust one who has done both: obfuscated JavaScript is annoying; understanding what an optimized binary is doing is hard.

                                                                    As for packet loss and caching: you didn’t read what I wrote, and I won’t feed you more.

                                                                  1. 1

                                                                      According to this argument, you don’t need HTTPS as long as you don’t have an enemy.

                                                                      If there is no adversary, no Mallory in the connection, there is no reason to encrypt it either, correct.

                                                                    It shows very well your understanding of security.

                                                                      My understanding of security is based on threat models. A threat model includes who you trust, who you want to talk to, and who you don’t trust. It includes how much money you want to spend, how much your attacker can spend, and the methods available to both of you.

                                                                      There is no binary security; a threat model is the entry point, and your protection mechanisms should match your threat model as closely as possible or exceed it, but there is no reason to exert effort beyond your threat model.

                                                                      The attackers described in a threat model are potential enemies. Your security depends on how well you avoid or counter potential attacks.

                                                                      Mallory is a potential enemy. An HTTPS caching proxy operated by a corporation is not an enemy. It’s not Mallory; it’s Bob, Alice, and Eve, where Bob wants to send Alice a message, Alice works for Eve, and Eve wants to avoid having duplicate messages on the network, so Eve and Alice agree that caching the encrypted connection is worthwhile.

                                                                      Mallory sits between Eve and Bob, not between Bob and Alice.

                                                                      Evidently you have never had to debug either obfuscated JavaScript or an optimized binary (without sources or debug symbols).

                                                                      I did, in which case I either filed a GitHub issue if the project was open source, or notified the company that offered the JavaScript or optimized binary. Usually the bug is then fixed.

                                                                    It’s not my duty or problem to debug web applications that I don’t develop.

                                                                      Trust one who has done both: obfuscated JavaScript is annoying; understanding what an optimized binary is doing is hard.

                                                                    Then don’t do it? Nobody is forcing you.

                                                                      As for packet loss and caching: you didn’t read what I wrote, and I won’t feed you more.

                                                                    I don’t think you consider that a practical problem such as bad connections can outweigh a lot of potential security issues since you don’t have the time or user patience to do it properly and in most cases it’ll be good enough for the average user.

                                            2. 2

                                              My point is that the problems of unencrypted HTTP and MitM’ed HTTPS are exactly the same. If one used to prefer the former because it can be easily cached, I can’t see how setting up the latter makes their security issues worse.

                                              1. 3

                                                With HTTP you know it’s not secure. OTOH you might not be aware that your HTTPS connection to the server is not secure at all.

                                                The lack of awareness makes MitM caching worse.

                                        1. 13

                                            I think I understand where the author’s coming from, but I think some of his concerns are probably a bit misplaced. For example, unless you’ve stripped all the Google off your Android phone (which some people can do), Google can muck with whatever it likes on your phone regardless of how you install Signal. In all other cases, I completely get why Moxie would rather insist you install Signal via a mechanism that ensures updates are efficiently and quickly delivered. While he’s got a point on centralized trust (though a note on that in a second), swapping out Google Play for F-Droid doesn’t help there; you’ve simply switched who you trust. And in all cases of installation, you’re trusting Signal at some point. (Or whatever other encryption software you opt to use, for that matter, even if it’s something built pretty directly on top of libsodium at the end of the day.)

                                          That all gets back to centralized trust. Unless the author is reading through all the code they’re compiling, they’re trusting some centralized sources—likely whoever built their Android variant and the people who run the F-Droid repositories, at a bare minimum. In that context, I think that trusting Google not to want to muck with Signal is probably honestly a safe bet for most users. Yes, Google could replace your copy of Signal with a nefarious version for their own purposes, but that’d be amazingly dumb: it’d be quickly detected and cause irreparable harm to trust in Google from both users and developers. Chances are honestly higher that you’ll be hacked by some random other app you put on your phone than that Google will opt to go after Signal on their end. Moxie’s point is that you’re better off trusting Signal and Google than some random APK you find on the Internet. And for the overwhelming majority of users, I think he’s entirely correct.

                                            When I think about something like Signal, I usually focus on the question: who am I attempting to protect myself from? Maybe a skilled user with GPG is more secure than Signal (although that’s arguable; we’ve had quite a few CVEs this year, such as this one), but normal users struggle to get such a setup meaningfully secure. And if you’re just trying to defend against casual snooping and overexcited law enforcement, you’re honestly really well protected out of the box by what Signal does today; and, as Mickens has noted, you’re not going to successfully protect yourself from a motivated nation-state otherwise.

                                          1. 20

                                            and cause irreparable harm to trust in Google from both users and developers

                                              You have good points, except for this common refrain, which we should all stop repeating. These big companies have been caught pulling all kinds of stuff on their users. They usually keep their market share and riches. Google was no different. If this were detected, they’d issue an apologetic press release saying either that it was a mistake in their complex distribution system or that the feature was for police with a warrant and was used accordingly or mistakenly. The situation shifts from “everyone ditch evil Google” to a more complicated one that most users won’t take decisive action on. Many wouldn’t even want to think too hard about it, or would otherwise assume mass spying at the government or Google level is going on. It’s something they tolerate.

                                            1. 11

                                              I think that trusting Google not to want to muck with Signal is probably honestly a safe bet for most users.

                                              The problem is that Moxie could put things in the app if enough rubber hose (or money, or whatever) is applied. I don’t know why this point is frequently overlooked. These things are so complex that nobody could verify that the app in the store isn’t doing anything fishy. There are enough side channels. Please stop trusting Moxie, not because he has done something wrong, but because it is the right thing to do in this case.

                                              Another problem: Signal’s servers could be compromised, leaking the communication metadata of everyone. That could be fixed with federation, but many people seem to be against federation here, for spurious reasons. That federation and encryption can work together is shown by Matrix, for example. I grant that it is rough around the edges, but at least they try, and for now it looks promising.

                                              Finally (imho): good crypto is hard, as the math behind it has hard constraints. Sure, the user interfaces could be better in most cases, but some things can’t be changed without weakening the crypto.

                                              1. 2

                                                many people seem to be against federation here, for spurious reasons

                                                Federation seems like a fast path to ossification. It is much harder to change things without disrupting people if there are tons of random servers and clients out there.

                                                Also, remember how great federation worked out for XMPP/Jabber when Google embraced and then extinguished it? I sure do.

                                                1. 2

                                                  Federation seems like a fast path to ossification.

                                                  I have been thinking about this. There are certainly many protocols that are unchangeable at this point but I don’t think it has to be this way.

                                                  Web standards like HTML/CSS/JS and HTTP are still constantly improving despite having thousands of implementations and different programs using them.

                                                  From what I can see, the key to stopping ossification of a protocol is to have a single authority and source of truth for the protocol. They have to be dedicated to making changes to the protocol and they have to change often.

                                                  1. 2

                                                    I think your HTTP example is a good one. I would also add SSL/TLS to that, as another potentially useful example to analyze. Both (at some point) had concepts of versioning built into them, which has allowed the implementations to change over time and cut off the “long tail” of non-adopters. You may be on to something with your “single authority” concept too, as both also had (for the most part) relatively centralized committees responsible for their specification.

                                                    I think html/css/js are /perhaps/ a bit of a different case, because they are more documentation formats, and less “living” communication protocols. The fat clients for these have tended to grow in complexity over time, accreting support for nearly all versions. There are also lots of “frozen” documents that people still may want to view, but which are not going to be updated (archival pages, etc). These have also had a bit more of a “de facto” specification, as companies with dominant browser positions have added their own features (iframe, XMLHttpRequest, etc) which were later taken up by others.

                                                  2. 1

                                                    Federation seems like a fast path to ossification. It is much harder to change things without disrupting people if there are tons of random servers and clients out there. Also, remember how great federation worked out for XMPP/Jabber when Google embraced and then extinguished it? I sure do.

                                                    It may seem so, but that doesn’t mean it will happen. It has happened with XMPP, but XMPP had other problems, too:

                                                    • Not good for mobile use (this was some years back, when messenger apps went big but mobile connections were bad)
                                                    • A “kind-of-XML”, which was hard to parse (I may be wrong here)
                                                    • Reinventing of the wheel: I’m not sure how many crypto standards there are for XMPP

                                                    Matrix does some things better:

                                                    • Reference server and clients for multiple platforms (Electron/web, but at least there is a client for many platforms)
                                                    • Reference crypto library in C (so bindings are easier and no one tries to re-implement it)
                                                    • Relatively simple client protocol (less prone to implementation errors than the streams of XMPP, IMHO)

                                                    The Google problem you described isn’t inherent to federation. It’s more of a people problem: too many people were too lazy to set up their own instances and just used Google’s, essentially forming a centralized network again.

                                                2. 10

                                                  Maybe a skilled user with GPG is more secure than Signal

                                                Only if that skilled user communicates solely with other skilled users. It’s common for people to reply in plaintext, quoting the whole encrypted message…

                                                  1. 3

                                                    And in all cases of installation, you’re trusting Signal at some point.

                                                    Read: F-Droid is for open-source software. No trust necessary. Though to be fair, even then the point on centralization still stands.

                                                    Yes, Google could replace your copy of Signal with a nefarious version for their own purposes, but that’d be amazingly dumb: it’d be quickly detected and cause irreparable harm to trust in Google from both users and developers.

                                                    What makes you certain it would be detected so quickly?

                                                    1. 5

                                                      “Read: F-Droid is for open-source software. No trust necessary”

                                                    That’s nonsense. FOSS can conceal backdoors if nobody is reviewing it, which is often the case. Bug hunters also find piles of vulnerabilities in FOSS, just like in proprietary software. People who vet the stuff they use have limits on skill, tools, and time that might make them miss vulnerabilities. Therefore, you absolutely have to trust the people and/or their software even if it’s FOSS.

                                                    The field of high-assurance security was created partly to address being able to certify (trust) systems written by your worst enemy. They achieved many pieces of that goal, but new problems still show up. Almost no FOSS is built that way. So, it sure as hell can’t be trusted if you don’t trust those making it. Same with proprietary.

                                                      1. 3

                                                        It’s not nonsense, it’s just not an assurance. Nothing is. Open source, decentralization, and federation are the best we can get. However, I sense you think we can do better, and I’m curious as to what ideas you might have.

                                                        1. 4

                                                          There’s definitely a better method. I wrote it up with roryokane being nice enough to make a better-formatted copy here. Spoiler: none of that shit matters unless the stuff is thoroughly reviewed and proof sent to you by skilled people you can trust. Even if you do that stuff, the core of its security and trustworthiness will still fall on who reviewed it, how, how much, and if they can prove it to you. It comes down to trusting a review process by people you have to trust.

                                                          In a separate document, I described some specifics that were in high-assurance security certifications. They’d be in a future review process since all of them caught or prevented errors, often different ones. Far as assurance techniques, I summarized decades worth of them here. They were empirically proven to work addressing all kinds of problems.

                                                      2. 2

                                                        even then the point on centralization still stands.

                                                      F-Droid actually lets you add custom repo sources.

                                                        1. 1

                                                          The argument in favour of F-Droid was twofold, and covered the point about “centralisation.” The author suggested Signal run an F-Droid repo themselves.

                                                      1. 11

                                                        I use Signal, and you’d have to conclude thereby that I trust it to some extent. But I do get the feeling, over the years, that Moxie has made some really bad trade-offs in order to get Signal more widely used. I don’t think any of these trade-offs are as indefensible as Drew does, but they’re not good.

                                                        Requiring a phone number makes it easy for people to adopt Signal, because they can just use it as a drop-in replacement for their SMS app (which is important in the US, and also explains the crap features like gif search and half-assed stickers). But it also breaks the threat model where you don’t want to share your phone number – not a concern when you are worried about nation-state security forces, but a real issue for sex workers, social workers and therapists, people wanting to avoid harassment from exes, etc. I have definitely had friends who didn’t want to use Signal because they were unwilling to have something potentially leak their phone number. It also requires you to trust the app more, since it has to be able to access your phonebook.

                                              I think the Play Store Only and No Federation trade-offs are similar. They probably do more good than harm, actually, but only because there are more people in the categories that benefit from them than in the categories harmed by them. But I think Moxie does overstate his case for them, unfairly dismisses the arguments against them, and underestimates the bad press that they generate.

                                                        (My personal messenger preference is Conversations, which uses XMPP+OMEMO, an adaptation of the Signal protocol to XMPP. But I recognize the difficulties of getting people to use it.)

                                                        1. 4

                                                          If you think of signal as a more secure replacement for text messaging, then the use of a phone number seems very sane. If you look at it as a replacement for xmpp/whatsapp/etc, then not so much.

                                                          My guess is that Signal was aiming for the former as a primary use-case.

                                                        1. 28

                                                          By May 25, most corporates had just amended their Privacy Policy volumes and annoyed consumers were forced to clicked through to accept them without reading.

                                                          I don’t know why people find this so hard to understand, but the entire point of the GDPR is that you cannot comply with it simply by adding more terms to your Terms of Service for people to sign away their rights without reading. That’s not how it works.

                                                          In my opinion preoccupation with the nominal personal data, actually displaces real privacy. Who cares about privacy of their name and family name, or office held? Except to hide shady politicking and worse, majority of us are happy to consciously publicize it as much as possible. It’s wrong, impractical and disrespectful to assume the contrary.

                                                          There are dozens of situations when it’s actually socially undesirable to keep it private, yet it is zealously protected under the GDPR in exactly the same way as your shopping history or your family photos.

                                                          I do care about the privacy of my name and family name. Is my name public on the internet? Yes. If I wanted to make it not public, would I want to be able to do so? Yes. Simple as that, really.

                                                          Equally questionable are formal and bureaucratic prescriptions for better data protection — more documentation, privacy impact audits, formal training, etc.

                                                          Does anyone honestly believe that more paperwork will lead to more privacy? More security risks in handling of our data (say thousands of hand signed consents) are somewhat more likely, I’m afraid.

                                                          Why would formal training around data protection, auditing of privacy protection, and documentation of efforts to comply with the GDPR lead to anything other than more privacy?

                                                          Apart from the right to complain under the new rules and few marginal rights — which are primarily of interest to the corrupt and the criminal, like the right to be forgotten — the average data subject barely gained any new privacy through the GDPR.

                                                          Yeah okay, nothing interesting to read here. The right to be forgotten is certainly not ‘primarily of interest to the corrupt and the criminal’. What a great load of ‘if you have nothing to fear you have nothing to hide’ twaddle.

                                                          1. 2

                                                            By May 25, most corporates had just amended their Privacy Policy volumes and annoyed consumers were forced to clicked through to accept them without reading.

                                                            I don’t know why people find this so hard to understand, but the entire point of the GDPR is that you cannot comply with it simply by adding more terms to your Terms of Service for people to sign away their rights without reading. That’s not how it works.

                                                            Excuse me if I misunderstand, but isn’t it still the case that they can add terms to their privacy policy, then tell users to either check all the boxes or leave?

                                                            1. 15

                                                              That’s exactly what you can’t do — you can’t refuse service if a user says “no” to tracking (unless you can prove in court that the tracking is strictly required for the functioning of the service).

                                                              1. 2

                                                                An example of a site that doesn’t follow the rules you state at all:

                                                                If you do not agree with our new privacy policy (that haven’t really changed much) we absolutely respect that. Feel free to go to your user settings page and delete your account. Optionally, you can change your settings and/or user profile if that helps. If you miss any settings feel free to let us know. If you just miss-clicked you can always go back and agree to the policy. If you have more questions feel free to send an e-mail to support@{{domainName}} and we will do our very best help you out.

                                                                They’re relatively small though, so I hope they’re not representative of too many other companies.

                                                                1. 3

                                                                  Then their privacy policy is invalid, and they’re committing a crime with every bit of data they collect.

                                                                  To be allowed to collect user data, you need consent, and under the GDPR consent is only valid if it has been given freely, without any advantage or disadvantage attached to giving or withholding it (except for functionality that directly requires the consent).

                                                                2. 1

                                                                  Oh. I guess I’ve been doing privacy policy change dialogs wrong then 😅 I could’ve sworn lots of them wouldn’t let you continue until you accepted though.

                                                              2. 1

                                                                I don’t know why people find this so hard to understand, but the entire point of the GDPR is that you cannot comply with it simply by adding more terms to your Terms of Service for people to sign away their rights without reading. That’s not how it works.

                                                                Have the various aspects of GDPR been applied/tested in court yet?

                                                                1. 7

                                                                  European civil law originates from Roman civil law, and is quite different from common law systems that originate from British law. Generally the law is quite specific, and the intent is that the law will be applied as written rather than interpreted in the social and political context of the day in light of precedent, as is done in common law systems.

                                                                  I don’t know if that’s the case with the GDPR to the extent that it’s true of say, German law or French law, but if it is, it doesn’t need to be ‘tested’ in court, it is what it is.

                                                                  1. 1

                                                                    There are a few things which GDPR leaves open to interpretation, such as:

                                                                    • Maximum fines are specified, but we have yet to see what fines will be handed out for different levels of non-compliance.
                                                                    • How far the “legitimate interest” can be stretched.
                                                              1. 7

                                                                One thing that is clear to me: the author hasn’t actually written much (or perhaps any) Rust. This is clear to me because I think one of the traps that the merely Rust-curious fall into is a disproportional fear and loathing of the borrow checker. This is disproportional because it ignores many of the delightful aspects of Rust – for example, that algebraic types in a non-GC’d language represent a revolution in error handling. (I also happen to love the macro system, Cargo, the built-in testing framework, and a bunch of other smaller things.) Yes, the lack of things like non-lexical lifetimes can make for some wrestling with the borrow checker, but once one is far enough into Rust to encounter these things, they are also far enough in to appreciate the value it brings to systems programming.

                                                                To sum up: the author shouldn’t weigh in on Rust (or any language, really) so definitively without having written any – or should at least make clear that his perspective is informed by reading blog entries, not actual experience…

                                                                1. 1

                                                                  One thing that is clear to me: the author hasn’t actually written much (or perhaps any) Rust. This is clear to me because …

                                                                  To sum, the author shouldn’t weigh in on Rust (or any language, really) so definitively without having written any – or at least make clear that his perspective is informed by reading blog entries, not actual experience…

                                                                  I believe it wasn’t your intent, but your commentary reads a bit like “Only true Rustaceans should be allowed to talk about Rust”.

                                                                  1. 3

                                                                    Everyone should be allowed to talk about Rust. There is no authority that deserves to have the power to decide which people can or cannot talk about Rust.

                                                                    That said, it’s also fine to say that the author’s opinion about Rust is untrustworthy because it bears the hallmarks of someone who has read about Rust but not actually used it themselves in any meaningful way. I myself agree that it’s possible to write lots of useful rust code without running into situations where the borrow checker trips you up, and that some of Rust’s best innovations are the “small” things like the algebraic types, macros, Cargo, etc. that are now available in a non-GC systems language.

                                                                    1. 1

                                                                      it bears the hallmarks of someone who has read about Rust but not actually used it themselves in any meaningful way

                                                                      I still use rustlang but share the same opinion as the author. Did I write enough of it to be trustworthy? :)

                                                                      Rust’s best innovations are the “small” things like the algebraic types, macros, Cargo, etc. that are now available in a non-GC systems language

                                                                      Nothing on that list was rustlang’s innovation.

                                                                1. 2

                                                                  Curious that jsonrpc wasn’t evaluated. I would think that it was more popular than RPyC, at any rate.

                                                                  1. 2

                                                                    Thanks. I will check it out - didn’t know about it. Someone mentioned they had used RPyC when I was checking out gRPC.

                                                                  1. 2

                                                                    I nearly posted this as an ‘ask’: Slack is not good for $WORK’s use case because it does not have an on-premise option. What on-premise alternatives are people using/would you recommend?

                                                                    1. 4

                                                                       I’ve used Mattermost before, which AFAIK has an on-prem version – but only as a user, not doing setup or admin, so I can’t speak to that end.

                                                                      1. 6

                                                                        I’ve heard rumblings about Zulip being a decent option too. I haven’t used it myself though.

                                                                        1. 2

                                                                          Same, actually. It does look very interesting, I’d be highly interested in whether anyone has any experience with it?

                                                                          1. 1

                                                                            Zulip looks pretty solid, thanks for mentioning it. We may give it a try…

                                                                          2. 2

                                                                             We’ve used Mattermost for a few years now. It’s pretty easy to set up and maintain: you basically just replace the Go binary every 30 days with the new version. We recently moved to the integrated version with GitLab, and now GitLab handles it for us – even easier, since GitLab is just a system package you upgrade.

                                                                            1. 2

                                                                              A lot of people have said Mattermost, might be a good drop-in replacement. According to the orange site they’re considering dropping a “welcome from Hipchat” introductory offer, which is probably a smart move.

                                                                              1. 2

                                                                                IIRC Mattermost is open core. I’ve heard good things about Zulip. Personally, I like Matrix, which federates and bridges.

                                                                              2. 3

                                                                                Matrix is fairly nice to use. I had some issues hosting it though.

                                                                              1. 7

                                                                                signify seems like it would be a great tool for git signatures.

                                                                                1. 7

                                                                                  But it’s written in C, which by definition can’t be used by any respectable rustlang developer ;)

                                                                                1. 1

                                                                                  This is a parody of the GWAN web server. These Hacker News discussions have some context about GWAN:

                                                                                  https://news.ycombinator.com/item?id=8130849

                                                                                  https://news.ycombinator.com/item?id=4109698

                                                                                  And about their insane attempt at a CAPTCHA:

                                                                                  https://news.ycombinator.com/item?id=4113514

                                                                                  1. 4

                                                                                    I believe the website is made in a parody style, but the project itself appears real.

                                                                                    references:

                                                                                  1. 26

                                                                                    Given how many times over the years I had journald completely hose itself and freeze apps running on production systems [1], I don’t find his arguments exceptionally compelling. I’ve had far more problems with journald/journalctl than I ever did with various syslog implementations. Yes, you can still install syslog, but journald still gets the logs first, and then forwards/duplicates the data to syslog.

                                                                                    Maybe journald is better now? Been a couple of years since I had to deal with it on high volume log systems. At the time we ended up using a program wrapper (something similar to logexec) that sent the logs directly to syslog, and avoided systemd/journald log handling entirely.

                                                                                    [1]: the app outputs some log data, journald stops accepting the app’s output, the app’s stdout buffer fills, and the app freezes, blocked on a write to stdout
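
                                                                                    To make the failure mode in [1] concrete, here’s a small sketch of it in plain Python (illustrative only; the pipe and the 4 KiB writes are arbitrary stand-ins, not anything journald-specific – the missing reader plays the role of a stalled journald):

                                                                                        # Hypothetical demo: shows how a writer stalls once nothing drains the other end of a pipe.
                                                                                        import os

                                                                                        r, w = os.pipe()
                                                                                        os.set_blocking(w, False)  # a real app writing to a blocking stdout would simply hang instead

                                                                                        written = 0
                                                                                        try:
                                                                                            while True:
                                                                                                written += os.write(w, b"x" * 4096)  # "log" output that nobody is reading
                                                                                        except BlockingIOError:
                                                                                            # This is where a blocking writer freezes: the kernel pipe buffer is full
                                                                                            # and nothing on the other end is consuming it.
                                                                                            print(f"pipe buffer filled after {written} bytes; a blocking writer would now stall")
                                                                                        finally:
                                                                                            os.close(r)
                                                                                            os.close(w)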

                                                                                    1. 7

                                                                                      I see. Well nothing beats real world experience, so thank you very much for sharing that!

                                                                                      1. 5

                                                                                        For me it’s quite the opposite, I never had any issues with journald, neither in production nor in development environments.

                                                                                        1. 4

                                                                                          Seconded, I actually quite like that I can see all my logs the same way without setting up stuff on my side. With syslog I’d have to tell every program where to log and the systemd combo just takes away that manual burden.

                                                                                          1. 3

                                                                                            “works for me”

                                                                                          2. 4

                                                                                            I had this experience too, but that was because journald was hanging due to my disks being slow as molasses (I had deeper problems). I’m honestly not sure whether to blame journald for that.

                                                                                          1. 6

                                                                                            Yeah, I know someone who runs a keyserver and they are getting absolutely sick of responding to the GDPR troll emails.

                                                                                            Love the idea to use ActivityPub (the same technology involved in Mastodon) for keyservers. That’s really smart!

                                                                                            1. 16

                                                                                              Offtopic: Excuse me.

                                                                                              I think it depends on some conditions, so not everybody is going to see this every time. But when I click on Medium links I tend to get this huge dialog box come up over the entire page saying something about registering or something. It’s really annoying. I wish we could host articles somewhere that doesn’t do this.

                                                                                              My opinion is that links should be links to some content. Not links to some kind of annoyware that I have to click past to get to the real article.

                                                                                              1. 11

                                                                                                Use the cached link for Medium articles. It doesn’t have the popup. Just the content.

                                                                                                1. 1

                                                                                                  Could you give an example? That sounds like a pleasant improvement, but I don’t know exactly what you mean by a cached link.

                                                                                                  1. 3

                                                                                                    There is a ‘cached’ link under each article title on lobste.rs.

                                                                                                    1. 1

                                                                                                      Thanks.

                                                                                                2. 7

                                                                                                  I started running uMatrix and added rules to block all 1st-party JS by default. It does take a while to whitelist things, yes, but it’s amazing when you start to see how many sites use JavaScript for stupid shit. Imgur requires JavaScript to view images! So do all Squarespace sites (it’s for those fancy hover-over zoom boxes).

                                                                                                  As a nice side effect, I rarely ever get paywall modals. If the article doesn’t show, I typically plug it into archive.is rather than enable JavaScript when I shouldn’t have to.

                                                                                                  1. 2

                                                                                                    I do this as well, but with Medium it’s a choice between blocking the pop-up and getting to see the article images.

                                                                                                    1. 6

                                                                                                      I think if you check the ‘Spoof <noscript> tags’ option in uMatrix then you’ll be able to see the images.

                                                                                                      1. 1

                                                                                                        Nice trick, thanks!

                                                                                                  2. 6

                                                                                                    How timely! Someone at the office just shared this with me today: http://makemediumreadable.com

                                                                                                    1. 4

                                                                                                      From what I can see, the popup is just a begging bowl, there’s actually no paywall or regwall involved.

                                                                                                      I just click the little X in the top right corner of the popup.

                                                                                                      But I do think that anyone who likes to blog more than a couple of times a year should just get a domain, a VPS and some blog software. It helps decentralization.

                                                                                                      1. 1

                                                                                                        And I find that I can’t scroll down.

                                                                                                        1. 3

                                                                                                          I use the kill sticky bookmarklet to dismiss overlays such as the one on medium.com. And yes, then I have to refresh the page to get the scroll to work again.

                                                                                                          On other paywall sites when I can’t scroll (perhaps because I removed some paywall overlay to get at the content below), I’m able to restore scrolling by finding the overflow-x CSS property and altering or removing it. …Though, that didn’t work for me just now on medium.com.

                                                                                                          1. 1

                                                                                                            Actually, it’s the overflow: hidden; CSS that I remove to get pages to scroll after removing some sticky div!

                                                                                                      2. 3

                                                                                                        What is the keyserver’s privacy policy?

                                                                                                        1. 5

                                                                                                          I run an SKS keyserver, have some patches in the codebase, wrote the operations documents in the wiki, etc.

                                                                                                          Each keyserver is run by volunteers, peering with each other to exchange keys. The design was based around “protection against government attempts to censor keys”, dating from the first crypto wars. They’re immutable append-only logs, and the design approach is probably about dead. Each keyserver operator has their own policies.

                                                                                                          I am a US citizen, living in the USA, with a keyserver hosted in the USA. My server’s privacy statement is at https://sks.spodhuis.org/#privacy but that does not cover anyone else running keyservers. [update: I’ve taken my keyserver down, copy/paste of former privacy policy at: https://gist.github.com/philpennock/0635864d34a323aa366b0c30c7360972 ]

                                                                                                          You don’t know who is running keyservers. It’s “highly likely” that at least one nation has some acronym agency running one, at some kind of arms-length distance: it’s an easy and cheap way to get metadata about who wants to communicate privately with whom, where you get the logs because folks choose to send traffic to you as a service operator. I went into a little more depth on this over at http://www.openwall.com/lists/oss-security/2017/12/10/1

                                                                                                          1. 5

                                                                                                            Thanks for this info.

                                                                                                            Fundamentally, GDPR is about giving the right to individuals to censor content related to themselves.

                                                                                                            A system set out to thwart any censorship will fall afoul of the GDPR, based on this interpretation.

                                                                                                            However, people who use a keyserver are presumably A-OK with associating their info with an append-only immutable system. Sadly, the GDPR doesn’t really take this use case into account (I think; I am not a lawyer).

                                                                                                            I think what’s important to note about GDPR is that there’s an authority in each EU country that’s responsible for handling complaints. Someone might try to troll keyserver sites by attempting to remove their info, but they will have to make their case to this authority. Hopefully this authority will read the rules of the keyserver and decide that the complainant has no real case based on the stated goals of the keyserver site… or they’ll take this as a golden opportunity to kneecap (part of) secure communications.

                                                                                                            I still think GDPR in general is a good idea - it treats personal info as toxic waste that has to be handled carefully, not as a valuable commodity to be sold to the highest bidder. Unfortunately it will cause damage in edge cases, like this.

                                                                                                            1. 3

                                                                                                              gerikson you make really good points there about the GDPR.

                                                                                                              Consenting people are not the entire focus of this, though; it’s about current and potential abuse of the servers, and about people who have not consented to their information being posted and have no way to have it removed.

                                                                                                              The Supervisory Authorities won’t ignore that, and this is why the keyservers need to change, to prevent further abuse and their own extinction.

                                                                                                              They also won’t give this case special consideration, just as in the recent ICANN case, where the requirement to store your information publicly with your domain was rejected outright. The keyservers are not necessary to the functioning of the keys you upload, and a big part of the GDPR is processing data only as long as necessary.

                                                                                                              Someone recently made a point about the term non-repudiation, which in digital security means:

                                                                                                                  A service that provides proof of the integrity and origin of data.
                                                                                                                  An authentication that can be asserted to be genuine with high assurance.

                                                                                                              Keyservers don’t do this! You can have the same email address as anyone else, and even the maintainers and creator of the SKS keyservers state this, recommending that you check through other means – such as by telephone or in person – that keys are what they appear to be.

                                                                                                              I also don’t think this is an edge case; I think it’s a wake-up call to rethink the design of the software and catch up with the rest of the world, and quickly.

                                                                                                              Lastly, I don’t approve of trolling; if you’re doing it just for the sake of doing it, “DON’T”. If you genuinely feel the need to submit a “right to erasure” request because you did not consent to having your data published, please do it.

                                                                                                            2. 2

                                                                                                              Thank you for the link: http://www.openwall.com/lists/oss-security/2017/12/10/1 – it’s a fantastic read and makes some really good points.

                                                                                                              It’s easy for anyone to get hold of recent dumps from the SKS servers; I hunted through a recent dump of 5 million+ keys just yesterday looking for interesting data. Will be writing an article about it soon.

                                                                                                          2. 3

                                                                                                            I totally agree; it has been bothering me as well, and I am in the middle of considering starting up my own self-hosted blog. I also don’t like Medium’s method of charging for access to people’s stories without giving them anything.

                                                                                                            1. 3

                                                                                                              I’m thinking of setting up a blog platform, like Medium, but totally free of bullshit for both the readers and the writers. Though the authors pay a small fee to host their blog (it’s a personal website/blog engine, as opposed to Medium which is much more public and community-like).

                                                                                                              If that could be something that interests you, let me know and I’ll let you know :)

                                                                                                              1. 2

                                                                                                                lmao you don’t even get paid when someone has to pay for your article?

                                                                                                                1. 1

                                                                                                                  Correction: turns out you can get paid if you sign up for their partner program, but I think it requires approval n shit.

                                                                                                                2. 2

                                                                                                                  hey @pushcx, is there a feature where we can prune a comment branch and graft it on to another branch? asking for a friend. Certainly not a high priority feature.

                                                                                                                  1. 3

                                                                                                                    No, but it’s on my list of potential features to consider when Lobsters gets several times the comments it does now. For now the ‘off-topic’ votes do OK at prompting people to start new top-level threads, but I feel like I’m seeing a slow increase in threads where promoting a branch to a top-level comment would be useful enough to justify the disruption.

                                                                                                              1. 1

                                                                                                                Is there any well-known PGP alternative other than this? Based on history, I cannot blindly trust code that is written by one human being and is not battle-tested.

                                                                                                                In any case, props to them for trying to start something. PGP does need to die.

                                                                                                                1. 7

                                                                                                                  A while ago I found http://minilock.io/, which sounds interesting as a PGP alternative. I haven’t used it myself though.

                                                                                                                  1. 2

                                                                                                                    Its primitives and an executable model were also formally verified by Galois using their SAW tool. Quite interesting.

                                                                                                                  2. 6

                                                                                                                    This is mostly a remix, in that the primitives are copied from other software packages. It’s also designed to be run under very boring conditions: running locally on your laptop, encrypting files that you control, in a manual fashion (an attacker can’t submit 2^## plaintexts and observe the results), etc.

                                                                                                                    Not saying you shouldn’t be ever skeptical about new crypto code, but there is a big difference between this and hobbyist TLS server implementations.

                                                                                                                    1. 5

                                                                                                                      I’m Enchive’s author. You’ve very accurately captured the situation. I didn’t write any of the crypto primitives. Those parts are mature, popular implementations taken from elsewhere. Enchive is mostly about gluing those libraries together with a user interface.

                                                                                                                      I was (and, to some extent, still am) nervous about Enchive’s message construction. Unlike the primitives, it doesn’t come from an external source, and it was the first time I’ve ever designed something like that. It’s easy to screw up. Having learned a lot since then, if I was designing it today, I’d do it differently.

                                                                                                                      As you pointed out, Enchive only runs in the most boring circumstances. This allows for a large margin of error. I’ve intentionally oriented Enchive around this boring, offline archive encryption.

                                                                                                                      I’d love it if someone smarter and more knowledgeable than me had written a similar tool – e.g. a cleanly implemented, asymmetric archive encryption tool with passphrase-generated keys. I’d just use that instead. But, since that doesn’t exist (as far as I know), I had to do it myself. Plus, I’ve become very dissatisfied with the direction GnuPG has taken, and my confidence in it has dropped.

                                                                                                                      1. 2

                                                                                                                        I didn’t write any of the crypto primitives

                                                                                                                        That’s not 100% true – I think you invented the KDF.

                                                                                                                        1. 1

                                                                                                                          I did invent the KDF, but it’s nothing more than SHA256 applied over and over on random positions of a large buffer, not really a new primitive.
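
                                                                                                                          For the curious, here’s a toy illustration in Python of that general shape of construction – SHA-256 applied over and over at pseudo-random offsets of a large buffer. It is only a sketch of the idea as described above, not Enchive’s actual code or parameters, and a real design would reach for an established KDF such as scrypt or Argon2:

                                                                                                                              # Toy sketch only -- NOT Enchive's KDF, just the general idea described above.
                                                                                                                              import hashlib

                                                                                                                              def toy_kdf(passphrase: bytes, buf_size: int = 1 << 20, rounds: int = 100_000) -> bytes:
                                                                                                                                  # Fill a large buffer deterministically from the passphrase.
                                                                                                                                  digest = hashlib.sha256(passphrase).digest()
                                                                                                                                  buf = bytearray(buf_size)
                                                                                                                                  block = digest
                                                                                                                                  for i in range(0, buf_size, len(block)):
                                                                                                                                      block = hashlib.sha256(block).digest()
                                                                                                                                      buf[i:i + len(block)] = block
                                                                                                                                  # Repeatedly hash a window at an offset derived from the previous digest,
                                                                                                                                  # writing the result back so later rounds depend on earlier ones.
                                                                                                                                  for _ in range(rounds):
                                                                                                                                      offset = int.from_bytes(digest[:8], "big") % (buf_size - 64)
                                                                                                                                      digest = hashlib.sha256(digest + bytes(buf[offset:offset + 64])).digest()
                                                                                                                                      buf[offset:offset + 32] = digest
                                                                                                                                  return digest

                                                                                                                              key = toy_kdf(b"correct horse battery staple")  # 32-byte derived key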

                                                                                                                    2. 6

                                                                                                                      Keybase? Kinda?…

                                                                                                                      1. 4

                                                                                                                        It always bothers me when I see the update say it needs over 80 megabytes for something doing crypto. Maybe no problems will show up that leak keys or cause a compromise. That’s a lot of binary, though. I wasn’t giving it my main keypair either. So, I still use GPG to encrypt/decrypt text or zip files I send over untrusted mediums. I use Keybase mostly for extra verification of other people and/or its chat feature.

                                                                                                                      2. 2

                                                                                                                        Something based on nacl/libsodium, in a similar vein to signify, would be pretty nice. asignify does apparently use asymmetric encryption via cryptobox, but I believe it is also written/maintained by one person currently.
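
                                                                                                                            For a sense of what “asymmetric encryption via cryptobox” looks like, here’s a minimal sketch using PyNaCl (libsodium’s Python binding). It is purely illustrative – not asignify’s actual code – and only shows the sealed-box primitive, not a full tool:

                                                                                                                                from nacl.public import PrivateKey, SealedBox

                                                                                                                                # The recipient generates a keypair once; the public key can be shared openly.
                                                                                                                                recipient_key = PrivateKey.generate()

                                                                                                                                # Anyone holding the public key can encrypt to the recipient (a "sealed box").
                                                                                                                                ciphertext = SealedBox(recipient_key.public_key).encrypt(b"archive bytes go here")

                                                                                                                                # Only the holder of the private key can decrypt.
                                                                                                                                plaintext = SealedBox(recipient_key).decrypt(ciphertext)
                                                                                                                                assert plaintext == b"archive bytes go here"

                                                                                                                            Signing in the signify sense would use libsodium’s signing keys instead; the sealed box here only provides confidentiality.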

                                                                                                                        1. 1

                                                                                                                          https://github.com/stealth/opmsg is a possible alternative.

                                                                                                                          Then there was Tedu’s reop experiment: https://www.tedunangst.com/flak/post/reop

                                                                                                                        1. 14

                                                                                                                          I’ve been using Macs for nearly a decade on the desktop and switched to Linux a couple of months ago. The 2016 MacBook Pro finally drove me to try something different. Between macOS getting more bloated each release, defective keyboard, terrible battery life, and the touch bar I realized that at some point I stopped being the target demographic.

                                                                                                                          I switched to Manjaro and while there are a few rough edges as the article notes, overall there really isn’t that much difference in my opinion. I’m running Gnome and it does a decent enough job aping macOS. I went with Dell Precision 5520, and everything just worked out of the box. All the apps that I use are available or have equivalents, and I haven’t found myself missing anything so far. Meanwhile it’s really refreshing to be able to configure the system exactly the way I want.

                                                                                                                          Overall, I’d say that if you haven’t tried Linux in a while, then it’s definitely worth giving another shot even though YMMV.

                                                                                                                          1. 4

                                                                                                                            terrible battery life

                                                                                                                            Really? It’s that bad? The Dell is better?

                                                                                                                            1. 3

                                                                                                                               I don’t know about Dell, but my 2016 MacBook Pro was hit pretty hard after the Spectre/Meltdown fix came out. I used to go 5 or 6 hours before I was down to 35-40%. Now I’m down to 20-25% after about 4 hours.

                                                                                                                              1. 2

                                                                                                                                 Same here. I wonder if the Spectre/Meltdown fiasco has at all accelerated Apple’s (hypothetical) internal timeline for ARM laptops. Quite the debacle.

                                                                                                                                 In regard to the parent, I have actually been considering moving from an aged MacBook Pro 15” (last of the matte-screen models – I have avoided all the bad keyboards so far) to a Mac /desktop/ (Mac Pro, maybe). You can choose your own keyboard and screen, and still get good usability and high performance. Then moving to a Linux laptop for “on the road” type requirements. Being able to leave work “at my desk” might be nice too.

                                                                                                                                (note: I work remotely)

                                                                                                                                1. 3

                                                                                                                                  I honestly don’t understand the fetish for issuing people laptops, particularly for software development type jobs. The money is way better spent (IMHO) on a fast desktop and a great monitor/keyboard.

                                                                                                                                  1. 2

                                                                                                                                    Might be the ability to work remotely. I’m with you, though, that laptops are a bizarre fetish, as is working from Anywhere You Want(!)

                                                                                                                                    1. 2

                                                                                                                                      It’s an artifact of, among other things, the idea that you PURSUE YOUR PASSIONS and DO WHAT YOU LOVE*; I don’t want to “work anywhere” – I want to work from work, and leave that behind when I go home to my family. But hey, I’m an old, what do I know.

                                                                                                                                      *: what you love must be writing web software for a venture funded startup in San Francisco

                                                                                                                                  2. 2

                                                                                                                                     Same here. I wonder if the Spectre/Meltdown fiasco has at all accelerated Apple’s (hypothetical) internal timeline for ARM laptops.

                                                                                                                                     I wouldn’t guess that. Apple’s ARM design was one of the few also affected by Meltdown. Using it for a laptop wouldn’t have helped.

                                                                                                                                    1. 1

                                                                                                                                      I bought a Matebook X to run Arch Linux on and it’s been pretty great so far.

                                                                                                                                      1. 1

                                                                                                                                        I’ve been thinking about a librem 13. I’ll take a look at the matebook too. Thanks!

                                                                                                                                  3. 2

                                                                                                                                    Yeah I get 4-6 hours with the Dell, and I was literally getting about 2-3 hours on the Mac with the same usage patterns and apps running. I think the fact that you can be a lot more granular regarding what’s running on Linux really helps in that regard.

                                                                                                                                    1. 5

                                                                                                                                      +1 about deciding what you run on GNU/Linux.

                                                                                                                                      I have a Dell XPS 15 9560 currently running Arch (considering switching to NixOS soon), and with Powertop and TLP set up I usually get around 20 hours (yes, 20 hours) per charge on light/normal use.

                                                                                                                                      1. 1

                                                                                                                                         Ha! Thanks for this – I didn’t know these were available!

                                                                                                                                        1. 1

                                                                                                                                          No problems! They’re very effective, and are just about the first package I install on a new setup.

                                                                                                                                1. 3

                                                                                                                                  The license for this software is unclear.

                                                                                                                                  Eschewing normal practice, there’s no LICENSE file in the source distribution.

                                                                                                                                  I’m asking this because DJB seems to have views on software licensing that are at odds with the majority of the FOSS community. I’m not sure if this is still the case though.

                                                                                                                                  1. 3

                                                                                                                                    From djb’s previous writings and software, he probably intends this to be license-free software.

                                                                                                                                    1. 7

                                                                                                                                      And I know licensing is an interesting, complex topic that’s fun to armchair lawyer, so if folks want to pick up this topic please start by linking to and building your comment on the 20+ years of previous discussion, and avoid moralizing/shaming others’ licensing choices.

                                                                                                                                      1. 2

                                                                                                                                        I wouldn’t necessarily qualify many of djb’s works as “license free”. He has explicitly put many of them into the public domain. See some of the license related notations on https://cr.yp.to/distributors.html as well.

                                                                                                                                        1. 1

                                                                                                                                          Thanks for the link, it’s certainly an interesting perspective.