1.  

    For a moment I thought this was just a guide to installing unbound, the DNS server, and configuring it to block ad-serving hosts, but this is super valuable:

    Additionally, adding a rule such as below to your router’s pf.conf will help to block ads on Google devices such as Chromecasts and Youtube apps that are often hardcoded to use Google DNS

    vi /etc/pf.conf

    pass out quick on egress from any to { 8.8.8.8 8.8.4.4 } rdr-to $adblock-server

    Neato!

    1. 3

      IRC is great. It’s easy to implement a client for, simple enough to understand, and used pretty widely. It’s still the main protocol I use to keep in touch with people.

      The only thing I wish that it would get is server side history, so I could scroll back in a channel without idling or setting up a bouncer.

      1. 2

        Early this year I finally broke down and subscribed to IRCCloud. They handle all the details of staying connected so you have access to channel history. Admittedly it works out to about 14 cents per day. Well worth it, in my opinion.

        1.  

          I’ve been keeping an eye on IRCCloud – I currently keep a ZNC server running… playing with it has been quite fun and instructive so far, but I would be OK paying $5/month for someone taking care of it all. I’m waiting for them to open up a bouncer service so we can connect with clients other than their official IRCCloud client… apparently it’s on their roadmap (see bottom).

        2. 2

          There are people working on a revised spec called ircv3, that aims to address those issues. I haven’t been in touch with that group, so I can’t speak to their progress or success.

          1. 2

            I’m aware of ircv3, but I’m not aware of any proposal to add server side history to it – at least not one that’s gone anywhere. I’m sure someone cares about the other features, but they don’t really make a difference for me.

            Edit: And I found one: https://github.com/ircv3/ircv3-specifications/pull/292

            1. 1

              Uh, that is interesting. Thanks for sharing! I wonder if there are already any clients and servers out there.

              Also see this story: https://lobste.rs/s/zdkuil

              1. 2

                the rust irc crate aims for ircv3 support ;) (I’m trying to contribute to it (but outside of the v3 things))

                1. 1

                  You mean this crate? https://crates.io/crates/irc

                  1. 1

                    Yup.

          1. 2

            What’s ECDAA support like in browsers, backend libraries or hardware tokens?

            1. 1

              Chrome doesn’t implement it. Neither does Firefox. I don’t know about any other browsers.

              As far as everything else: https://twitter.com/herrjemand/status/1031511164671483906

            1. 14

              I’ve learned programming by writing IRC bots and this is still my default activity when I learn a new language. It’s fun, because you get to play with sockets, text parsing and you can mix in all the interesting things you like (if you want to): TLS, asyncio, natural language parsing.
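
              It scales down nicely, too. A minimal sketch of such a bot in Python (network, nick, and channel are placeholders; no TLS or error handling, just the socket and the line parsing):

              ```python
              import socket

              HOST, PORT = "irc.libera.chat", 6667   # placeholder network
              NICK, CHANNEL = "toybot", "#bottest"   # placeholder nick/channel

              def parse_line(line):
                  """Split one raw IRC line into (prefix, command, params)."""
                  prefix = None
                  if line.startswith(":"):
                      prefix, _, line = line[1:].partition(" ")
                  if " :" in line:
                      head, _, trailing = line.partition(" :")
                      params = head.split() + [trailing]
                  else:
                      params = line.split()
                  return prefix, params[0], params[1:]

              def run():
                  sock = socket.create_connection((HOST, PORT))
                  f = sock.makefile("rw", encoding="utf-8", newline="")
                  f.write(f"NICK {NICK}\r\nUSER {NICK} 0 * :{NICK}\r\n")
                  f.flush()
                  for raw in f:
                      prefix, cmd, params = parse_line(raw.strip())
                      if cmd == "PING":            # answer keepalives or get dropped
                          f.write(f"PONG :{params[0]}\r\n")
                      elif cmd == "001":           # welcome numeric: safe to join now
                          f.write(f"JOIN {CHANNEL}\r\n")
                      elif cmd == "PRIVMSG" and params[-1] == "!hello":
                          f.write(f"PRIVMSG {params[0]} :hi!\r\n")
                      f.flush()
              ```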

              1. 17

                I strongly disagree with a CVE tag. If a specific CVE is worth attention it can be submitted under security, most likely with a nice abstract discussing it like the Theo undeadly link or the SSH enumeration issue. Adding a CVE tag will just give a green light to turning lobste.rs into a CVE database copy - I see no added value from that.

                1. 7

                  I agree. I think it comes down to the community being very selective about submitting CVEs. The ones that are worth it will either have a technical deep-dive that can be submitted here, or will be so important that we won’t mind a direct link.

                  1. 2

                    Although I want to filter them, I agree this could invite more of them. Actually, that’s one of @friendlysock’s own warnings in other threads. The fact that the tags simultaneously are for highlighting and filtering, two contradictory things, might be a weakness of using them vs some alternative. It also might be a fundamentally inescapable aspect of a good design choice. I’m not sure. Probably worth some contemplation.

                    1. 2

                      I completely agree with you. I enjoy reading great technical blog posts about people dissecting software and explaining what went wrong and how to mitigate. I want more of that.

                      I don’t enjoy ratings and CVSS scores. I’d rather not encourage people by blessing it with a tag.

                    1. 3

                      for my tiny eslint plugin, I do this:

                      • Check everything locally:
                        • npm run test
                        • npm run lint
                      • Bump version in package.json
                      • git add
                      • git commit
                      • git push
                      • wait for travis results (to see how other supported nodejs versions do)
                      • npm publish
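
                      A checklist like this is easy to half-automate; here’s a hypothetical Python wrapper for the npm steps (the version bump and git steps stay manual, and the script stops on the first failing step):

                      ```python
                      import subprocess
                      import sys

                      # The local checks and the final publish, in order; the
                      # version bump and git push are assumed to be done already.
                      STEPS = [
                          ["npm", "run", "test"],
                          ["npm", "run", "lint"],
                          ["npm", "publish"],
                      ]

                      def release():
                          for cmd in STEPS:
                              print("->", " ".join(cmd))
                              if subprocess.run(cmd).returncode != 0:
                                  sys.exit("step failed: " + " ".join(cmd))
                      ```
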
                      1. 2

                        Not sure whether I’d call it a security bug, but it’s definitely a bug. Thank you for filing it!

                        1. 2

                          I hope from the conclusion you can see that my line of thinking shifted this way, but at the time I became so wrapped up in the excitement of it all that I thought I’d hit something!

                          1. 1

                            Yeah, I can relate. Been there way too often :-)

                        1. 2

                          If you want the mixed content warning (i.e., a secure website containing insecure content) to go away, you might want to serve an “upgrade-insecure-requests” CSP header.
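
                          For example, a minimal WSGI sketch in Python (the handler and the image URL are illustrative; any server or framework can set the same header):

                          ```python
                          def app(environ, start_response):
                              headers = [
                                  ("Content-Type", "text/html; charset=utf-8"),
                                  # Browsers that see this CSP rewrite http:// subresource
                                  # URLs to https:// before fetching, avoiding the warning.
                                  ("Content-Security-Policy", "upgrade-insecure-requests"),
                              ]
                              start_response("200 OK", headers)
                              return [b'<img src="http://example.com/pic.png">']
                          ```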

                          1. 3

                            I’m Freddy and I do security things. Mostly web security: https://frederik-braun.com/archives.html

                            1. 4

                              Does anyone have a sense of the level of effort required to port something like this to run on Firefox?

                              1. 5

                                Porting this would make for a great starter project if you’re just getting into Firefox extension development!

                                Depending on your level of experience it would take a competent developer anywhere from 1 to 4 days to complete the port.

                                1. 2

                                  Indeed, should be quite easy.

                              1. 2

                                “making the reports eligible for a bug bounty.” Neat!

                                1. 2

                                  The most interesting thing about this isn’t the fine. It’s that the EU can make Google stop the problematic business practice of requiring Android OEMs to ship Chrome as the default browser.

                                  1. 9

                                    I’m totally happy with a Microsoft ergonomic keyboard.

                                    1. 4

                                      Same here.

                                      I’d love to have a fancy mechanical keyboard with lots of option keys etc. but I don’t have endless spare time to research let alone configure something like that.

                                      1. 2

                                        Same on both counts

                                      2. 1

                                        The MS Natural 4000 is perfect for me: I went from pain after an hour to no pain no matter how much I type. It’s only $30! Great stuff.

                                        I’ve got an ErgoDox, but adjusting to the thumb keys at work while not having them on my laptop at home was just too much.

                                      1. 4

                                        The main (only?) argument for putting static sites behind HTTPS is to prevent visitors from getting MITM’d. I’m a little uncomfortable about the unspoken implication that content publishers should be responsible for the security of their visitors but that’s a separate point.

                                        What really annoys me about the push for HTTPS on static sites and other benign content comes down to two things:

                                        1. HTTPS is touted as the best thing since sliced bread but we already know the existing TLS certificate trust chain in mainstream browsers is pretty weak. Certificate authorities have suffered serious security lapses and/or incompetence (Symantec, Wosign), or delegate too freely to likewise entities. Pretty much all developed countries in the world either have government-run CAs or can swoop in and “borrow” the private keys of commercial CAs to sign fraudulent certs or (more likely) decrypt traffic as it goes by. There are things happening to make incremental improvements to these problems but right now the mainstream opinion is just to keep putting band-aids on the system. Don’t get me wrong, HTTPS is better than nothing but the whole trust chain is very half-assed and nobody seems interested in fixing it.

                                        2. HTTPS is arguably not the right tool for public, non-secret content. As a static content publisher (yes, I use that term loosely), I don’t want to encrypt my content, I only want to sign it to show that it hasn’t been tampered with. But with HTTPS it’s all-or-nothing. If we had secure DNS (however implemented), this would be fairly straightforward: public key in a DNS record and a signature for the page in the HTTP headers. The browser can show the page as signed, clients who don’t have the technology to verify the signature or who don’t care are free to be MITMed at their leisure.

                                        1. 8

                                          How can visitors secure themselves against MITM attackers without the cooperation of content publishers? Maybe I should be concerned that requiring free content publishers to do more work makes a less useful web.

                                          Public, non-secret content is tricky. A blog about food isn’t a secret, but access patterns might be. If I only visit the pages about sugary foods, my ISP might sell this data to an advertiser or a health insurance company. This is prevented by TLS encryption. What is the downside of encrypting as well as signing?

                                          1. 2

                                            If I only visit the pages about sugary foods, my ISP might sell this data to an advertiser or a health insurance company. This is prevented by TLS encryption.

                                            Except it isn’t prevented by TLS. The sugary foods site, using Google Analytics (or even Google-hosted jQuery or webfonts), will still sell the fact that you were there. If it doesn’t use Google, then any externally hosted resource could be used to track you. The blog itself would know which pages you visited and could resell the data. Your ISP can integrate technologies that use techniques that TLS does not defend against. Here’s a video of Vincent Berg’s work on deanonymizing Google Maps over TLS from 2012.

                                            At the very least your ISP will have the metadata about the fact you visited a site with sugary pages, how much data was transferred and when.

                                            The problem here is that the HTTPS infrastructure does not grant sufficiently reliable confidentiality and provides some (occasionally broken) integrity confirmation compared to other more difficult to manage methods.

                                          2. 2

                                            We’re getting closer and closer to a world where all certificates are in Certificate Transparency logs, which addresses the security concerns around your first point (whether that’s desirable from a data hoarding / secrecy perspective is a totally different aspect).

                                            Regarding your second point, I honestly think that it shouldn’t be you deciding whether you want to encrypt your content. I understand you don’t think it’s necessary, but the goal for all of this is to change the web to provide encryption by default in the long run. Because it makes sense for users.

                                          1. 6

                                            Great article! I wonder whether the future of HTML escaping libraries will lie with something like ammonia, which actually parses the HTML before emitting a sanitized version, instead of simple text-replacement - at a certain point, I guess it becomes a better idea to just do what a browser would do in order to ensure that your sanitization worked…
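
                                            The idea can be sketched in a few lines of Python with the stdlib HTML parser (ammonia itself is Rust; the allowlist is illustrative, and a real sanitizer would also drop the text inside script/style):

                                            ```python
                                            import html
                                            from html.parser import HTMLParser

                                            ALLOWED_TAGS = {"p", "b", "i", "em", "strong"}  # illustrative allowlist

                                            class ParseAndReemit(HTMLParser):
                                                """Parse input the way a browser would; re-emit only allowed tags."""
                                                def __init__(self):
                                                    super().__init__()
                                                    self.out = []

                                                def handle_starttag(self, tag, attrs):
                                                    if tag in ALLOWED_TAGS:
                                                        self.out.append(f"<{tag}>")   # attributes dropped for simplicity

                                                def handle_endtag(self, tag):
                                                    if tag in ALLOWED_TAGS:
                                                        self.out.append(f"</{tag}>")

                                                def handle_data(self, data):
                                                    self.out.append(html.escape(data))  # text nodes get re-escaped

                                            def sanitize(markup):
                                                p = ParseAndReemit()
                                                p.feed(markup)
                                                p.close()
                                                return "".join(p.out)
                                            ```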

                                            1. 5

                                              Yeah, I prefer using DOM functions for everything, including templating. With the DOM, everything gets escaped in proper context and you can do other sanity checks, like always outputting strictly well-formed stuff. An HTML document isn’t really a string and I prefer to avoid pretending it is.

                                              1. 2

                                                Do you have a link or example for this method?

                                              2. 3

                                                Related: DOMPurify uses DOM APIs exposed to JavaScript to ensure that browser and sanitizer show the same parsing behavior.

                                              1. 20

                                                The only dependency is a recent version of Firefox (57+).

                                                1. 3

                                                  It seems pretty useless at first but you can run the program on a remote machine and ssh/mosh in to it for ultra light web browsing. Although you could probably get super light browsing by just disabling images in your browser.

                                                  1. 2

                                                    i.e., you need X and GTK installed (it’s not necessary to run X though, afaiu)

                                                  1. 0

                                                    Neat collection, thank you!

                                                    1. 7

                                                      The architecture of open source applications should be real-world enough

                                                      1. 3

                                                        I don’t know if AOSA is what @hwayne is looking for, but for anyone reading, it’s a great series of books and I’d advise everyone to check them out.

                                                      1. 3

                                                        Reminds me of the format we wanted to use in Firefox OS next, which never came to be.

                                                        There’s also an IETF draft following a similar goal: https://datatracker.ietf.org/doc/html/draft-yasskin-webpackage-use-cases-01

                                                        1. 3

                                                          One idea I had to solve the fundamental insecurity of web-based crypto was to add subresource integrity support to service workers.

                                                          Service workers are persistent and are essentially a piece of JavaScript which can intermediate all HTTP requests to an origin, which means a trusted service worker could verify the integrity of all loaded resources according to an arbitrary policy. Then you only need to secure the service worker itself. If you could specify the known hash of a service worker JS file, you could know that all future resource loads from the origin would be intermediated by that JS. Presumably, the service worker JS would change rarely and be publically auditable. (If not being able to change its hash is too inconvenient, it could chainload a signed service worker file; you can implement arbitrary policies.)

                                                          This creates a TOFU environment. One logical extension if this were implemented would be to create a browser extension which preseeds such service workers with their known hashes, similarly to HTTPS Everywhere.
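
                                                          To make “specify the known hash” concrete, here’s a small Python sketch of computing and checking an SRI-style integrity value (the function names are mine; real SRI values look like sha384-&lt;base64 digest&gt;):

                                                          ```python
                                                          import base64
                                                          import hashlib

                                                          def sri_digest(content, alg="sha384"):
                                                              """Return an SRI-style integrity string, e.g. 'sha384-<base64>'."""
                                                              digest = hashlib.new(alg, content).digest()
                                                              return alg + "-" + base64.b64encode(digest).decode("ascii")

                                                          def matches(content, integrity):
                                                              """Check fetched bytes against a pinned integrity value."""
                                                              alg = integrity.split("-", 1)[0]
                                                              return sri_digest(content, alg) == integrity
                                                          ```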

                                                          I created an issue suggesting this at the Subresource Integrity WG: https://github.com/w3c/webappsec-subresource-integrity/issues/66

                                                          1. 3

                                                            I am one of the editors of the SRI spec but I am currently on a month long vacation in remote places that doesn’t allow internet access (except this comment, written on a potato.)

                                                            Having said that, you’ll likely be interested in this web hacking challenge of mine from last year. It involves SRI and Service Workers: https://serviceworker.on.web.security.plumbing/

                                                            I’ve summarized my findings here: https://frederik-braun.com/sw-sri-challenge.html