1. 4

    There’s an idea floating around to require a CORS dance for local addresses: https://wicg.github.io/cors-rfc1918/. As always, there’s a problem with backwards compatibility that makes many implementers shy away.

    1. 6

      How ridiculous.

      Here, we propose a mitigation against these kinds of attacks that would require internal devices to explicitly opt-in to requests from the public internet.

      Or, you know, you could change the web browsers so that they can’t make requests to private addresses from public addresses. If I’m on https://lobste.rs/, I don’t want someone to be able to link to http://192.168.1.1/sendMailTo=spam.target@example.com/ or http://127.0.0.1/deleteAllMyShit/. Those devices should be able to sit on my network and be confident that I’m not stupid enough to let people onto my network willy-nilly. And my web browser should enforce the distinction between public and private addresses.
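
      For what it’s worth, the classification itself is cheap. A minimal sketch of the public/private check a browser would have to apply, assuming it runs on already-resolved addresses (hostname resolution is left out):

          import ipaddress

          def is_internal(address: str) -> bool:
              # Loopback, RFC 1918 / ULA and link-local ranges all count as internal here.
              ip = ipaddress.ip_address(address)
              return ip.is_loopback or ip.is_private or ip.is_link_local

          # A page on a public origin would then be blocked from requesting these:
          assert is_internal("192.168.1.1")
          assert is_internal("127.0.0.1")
          assert not is_internal("8.8.8.8")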

      CORS is an utterly stupid system that only serves a useful purpose in situations that should not exist in the first place. The idea that your browser will send off your cookies and other credentials in a request to example.com when that request was made from Javascript from a completely different domain like malicious.org is batshit crazy.

      1. 4

        so that they can’t make requests to private addresses from public addresses

        we only have private addresses because of NAT. There are still networks that have public IPv4 addresses for all devices, or that have RFC 1918 addresses for IPv4 but public addresses for IPv6. The restriction you propose doesn’t make that much sense.

        I don’t want someone to be able to link to … http://127.0.0.1/deleteAllMyShit/.

        This is how “native” OAuth apps work on the desktop. So this is actually used. Oh the horror indeed.
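
        For readers who haven’t seen that flow: the native app listens on a loopback port and the authorization server redirects the browser there with the code. A rough Python sketch; the authorization URL and query handling are simplified placeholders:

            import http.server
            import urllib.parse
            import webbrowser

            class RedirectHandler(http.server.BaseHTTPRequestHandler):
                def do_GET(self):
                    # The authorization code arrives as a query parameter on the redirect.
                    query = urllib.parse.urlparse(self.path).query
                    self.server.auth_code = urllib.parse.parse_qs(query).get("code", [""])[0]
                    self.send_response(200)
                    self.end_headers()
                    self.wfile.write(b"You can close this tab now.")

            server = http.server.HTTPServer(("127.0.0.1", 0), RedirectHandler)  # ephemeral port
            server.auth_code = None
            redirect_uri = f"http://127.0.0.1:{server.server_port}/callback"
            webbrowser.open("https://auth.example/authorize?redirect_uri=" + redirect_uri)
            server.handle_request()  # serve exactly one request: the redirect back from the browser
            print("authorization code:", server.auth_code)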

        CORS is an utterly stupid system

        Agreed.

        1. 1

          we only have private addresses because of NAT.

          Private IP addresses have nothing to do with NAT.

        2. 3

          CORS is essential for using APIs from the frontend. It also lets you do things like host your own copy of riot.im and still connect to matrix.org.

          1. 1

            Maybe a local daemon could be used to automatically log in to websites. Or maybe to support message signing/encryption outside the browser.

        3. 3

          To clarify: we can disallow and forbid all the things, and turn the privacy crank up to 11 for all of our users. But most people won’t understand why websites are broken and will then use the other browser, because it just works for them.

          Whenever we want to improve privacy and security for all users, we need to make deliberate, meaningful change. Finding the middle ground is hard. But if we don’t, we do our users a disservice by effectively luring them into using a less privacy-protecting browser ;-)

          The alternative is, of course, education and user-defined configuration. We do that too. But lots of people are busy, have other priorities or are resistant to education. It’s not enough to just help the idealists ;)

          1. 2

            Is it not possible to make the browsers return an error at the same speed?

            1. 1

              This isn’t really possible, as far as I can tell. You’d have to make every error take 10 seconds and reject anything that took longer than 10 seconds, which is an unacceptable solution.

              1. 1

                What’s wrong with just holding quick fails for 10 seconds before returning, and failing anything that takes longer than 10 seconds to reply?

                1. 1

                  It’s just such a long time to wait.

                  1. 1

                    A hard value of 10 seconds would probably be too much, and it would not work anyway. The main problem is that the attacker can distinguish between error types using time measurements (whether it’s 3 ms or a static 10 s). Instead, what you want is to delay one error type so it takes a similar amount of time as the other - maybe you could pick a random delay based on previous errors of the same type.

                    These kinds of mitigation approaches - at least for network timings - are not that different from working on a really slow network. I don’t expect the speed-focused browsers like Firefox/Chrome to add this kind of thing, but maybe one of the more privacy-aware spin-offs could implement it.
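
                    As a toy sketch of that idea (sample numbers made up, bookkeeping deliberately naive): pad the fast failure until it lands inside the timing distribution observed for the other error type.

                        import random
                        import time

                        # Durations (in seconds) previously observed for the slow error type.
                        slow_error_samples = [2.8, 3.1, 3.4]

                        def fail_like_the_slow_path(started_at: float) -> None:
                            # Pick a target duration near the observed ones, with a little jitter.
                            target = random.choice(slow_error_samples) + random.uniform(-0.2, 0.2)
                            elapsed = time.monotonic() - started_at
                            if elapsed < target:
                                time.sleep(target - elapsed)
                            raise ConnectionError("request failed")

                    The caller would note time.monotonic() when the request starts and route the fast error path through this before reporting failure.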

          1. 6

            I fully agree with this, mostly due to accessibility. I find that more and more websites are hard to read and navigate.

            However, for some problems I don’t have solutions either, without bringing in some JavaScript or changing browser internals:

            • The whole endless-scroll concept works well for things like real-time chat interfaces, but I don’t think you can express it with HTML alone.
            • A lot of things, like deferred loading of images, which is done in JavaScript to speed up loading, could probably be done by the browser.
            • If the browser would let me do with frames what my window manager does with windows…

            Also I suspect people stress over custom design so much because the default stylesheet for the browser actually looks like crap.

            1. 3

              I agree with this. A default stylesheet with better typography would do a lot to reduce the appeal of css frameworks.

              A lot of the reasonable and valid use cases for JavaScript probably ought to be moved into HTML attributes and the browser – things like “on click make a POST request and replace this element with the response body if it succeeds”. Kind of a “pave the cowpaths” approach that would allow dynamic front-ends without running arbitrary untrusted code on the client.

              1. 1

                “on click make a POST request and replace this element with the response body if it succeeds”.

                You can actually do that with a target attribute on the form going toward an iframe. It isn’t exactly the same but you can make it work.

            1. 5

              I’m not sure the blame falls entirely on the shell on this one. The shell has no way to signal to an external program what the type of an argument is. You would need some kind of exec() with typed arguments for that, or, as the author suggests, some type of in-band convention, but for that you need apps to support it.

              For critical commands like rm/ls and shell expansion (the example of a file called -ls), couldn’t this be solved by making those commands builtins in the shell? If ls is a builtin and the arguments are typed, then it could avoid that corner case.
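
              As a toy illustration of the typed-arguments idea (the interface here is entirely made up): if the caller tags which strings are filenames and which are options, a file that happens to be named -ls can no longer be parsed as a flag.

                  from typing import NamedTuple

                  class Arg(NamedTuple):
                      kind: str   # "opt" or "file" - decided by the shell, not guessed by the tool
                      value: str

                  def rm(args: list[Arg]) -> None:
                      options = [a.value for a in args if a.kind == "opt"]
                      files = [a.value for a in args if a.kind == "file"]
                      print("options:", options)
                      print("would remove:", files)

                  # The awkward filename survives because it is tagged, not re-parsed:
                  rm([Arg("opt", "-r"), Arg("file", "-ls")])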

              1. 2

                I’m not sure the blame falls entirely on the shell on this one

                Absolutely. This convention would go across boundaries and mean changes to both the shell and the command-line tools. Perhaps we could go further and embed metadata about Unix programs in the ELF binary, like a machine-parseable description of the args and their types.
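
                A hedged sketch of reading such metadata back with pyelftools; the section name .cmdargs and its JSON payload are invented for illustration.

                    import json
                    from elftools.elf.elffile import ELFFile  # third-party: pyelftools

                    def read_arg_metadata(path: str):
                        with open(path, "rb") as f:
                            section = ELFFile(f).get_section_by_name(".cmdargs")
                            if section is None:
                                return None  # binary does not carry the (hypothetical) metadata
                            return json.loads(section.data().decode("utf-8"))

                    print(read_arg_metadata("/usr/bin/ls"))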

                1. 2

                  The ELF metadata would be neat. As an alternative, I think the fish shell grabs some information for command-argument completion from the manpages. Maybe you could do something similar to detect compliant commands and cache a table of commands somewhere.

              1. [Comment from banned user removed]

                1. 14

                  Rubbish.

                  Direct access to all the XUL/XPCOM/whatever messy internals from extensions was a huge disadvantage. Firefox developers couldn’t change anything in the browser because some damn extension would break. Also these extensions barely worked in multi-process mode.

                  A well defined, standardized extension API is a massive improvement. (And it makes me extremely happy as an addon developer — same code works in Chromium, Firefox and Edge!!)

                  Actual technological advantages were added to Firefox recently, with Stylo, OMTP, and (not in release yet) WebRender. (In the future, WebRender will even render fonts and vector graphics on the GPU!)

                  1. [Comment from banned user removed]

                    1. 10

                      Advantages for developers directly translate to advantages for users. Namely, performance, security and reliability.

                      nobody gives a shit

                      This is literally false – Firefox has gained market share significantly with the “Quantum” release.

                      Statistically, nobody gives a shit about powerful extensions. (IIRC Mozilla telemetry reported about 50% Firefox users having zero extensions!) Most people only care about performance.

                      And yet, Mozilla is constantly adding new APIs to WebExtensions to help angry ungrateful nerds get their unnecessary features back. (Most recently, tab hiding has landed, allowing implementations of Tab Groups and such.)

                      1. 4

                        Statistically, nobody gives a shit about powerful extensions. (IIRC Mozilla telemetry reported about 50% Firefox users having zero extensions!)

                        That is a big leap there: the other 50% are users too, not to mention those who do not report telemetry.

                        The addon changes did make life easier for some extension developers, because they get to use the same code for Chrome and Firefox. Not so much for others: extensions that shell out to the operating system or use binary components are now much harder to do - just like in Chrome.

                        While I appreciate the improved speed and the new shiny features, I hope they don’t lead down a path that drops support for many other capabilities, e.g. does supporting WebRender mean dropping support for targets that lack OpenGL 3?

                        And yet, Mozilla is constantly adding new APIs to WebExtensions to help angry ungrateful nerds

                        This is hardly fair. Many times those ungrateful nerds implemented extensions for features that were later made part of Firefox and put the browser ahead of the competition - ad blocking, video autoplay blocking, decent password managers, etc. Not to mention that the reason they are adding new APIs is that they removed the old ones :)

                        1. 3

                          (IIRC Mozilla telemetry reported about 50% Firefox users having zero extensions!)

                          Not debating the rest of your points, but I would assume that the people who do use more “powerful extensions” are more apt to turn off Mozilla’s telemetry (I have no data to back this up, just a thought)

                  1. 6

                    I wish the browser wars meant we got some more variety rather than more of the same. We are getting boxed in between two vendors (three if you count webkit/safari).

                    While I understand everyone wants their browser to be snappy, and speed perception drives user adoption, I have other priorities.

                    • I’d like the browser to help me with usability by using larger fonts or disabling some effects (gradients and low contrast are the new blink).
                    • Videos sometimes don’t play at all, or have choppy sound. But the native video players on my system can play the same stream just fine. Why can’t I just outsource playback to the OS?
                    • Input handling in the browser always defers to the web page. Sometimes I just want to scroll the page or paste into an input field - but the web page has defined some bindings that prevent me from doing it. I tried to hack around this with some WebKitGTK code, but even then I was not 100% successful (let’s face it, I want normal mode in my browser).

                    I’m savvy enough to have a long list of hacks to do some of this stuff, but it seems to be getting harder to do. I consider Firefox the more configurable of the two, but each release breaks something or adds some annoyance that breaks something else. Currently I’m seriously pondering switching from Firefox to Chromium because ALSA does not work with the new sandbox.

                    The wide scope of browser APIs means they are more like full operating systems than single applications. In fact, I think my laptop lacks the disk/RAM to build Chrome from source. WebKit is likely the most hackable of the bunch, but then again I have no experience with CEF. It seems likely that the major browsers will continue to converge until they become more or less the same, unless some other player steps up.

                    1. 10

                      Firefox is introducing support for decentralized protocols in FF 59. The white-listed protocols are:

                      • Dat Project (dat://)
                      • IPFS (dweb:// ipfs:// ipns://)
                      • Secure Scuttlebutt (ssb://)

                      I think that’s moving things in an interesting direction as opposed to doing more of the same.

                      1. 7

                        Hey! I made that patch! :-D

                        So basically the explanation is simple: there is a whitelist of protocols you can have your WebExtension take over.

                        If the protocol you want to control is not on that whitelist, such as a hypothetical “catgifs:” protocol, you need to prefix it as “web+catgifs” or “ext+catgifs”, depending on whether it will be used from the add-on or by redirection to another web page. This makes it inconvenient for lots of decentralization protocols, because in many other clients we are already using URLs such as “ssb:” and “dat:” (e.g. check out Beaker Browser). In essence this allows us to implement many new cool decentralization features as add-ons now that we can take over protocols. You could be browsing the normal web in Firefox and suddenly see a “dat:” link; normally you’d need to switch to a dat-enabled client, but now you can have an add-on display that content in the user agent you’re already using.

                        Still, there is another feature we need before we can really start implementing decentralization protocols as pure WebExtensions: TCP and UDP APIs like we had in Firefox OS. (As an example, Scuttlebutt uses UDP to find peers on the LAN and its own muxrpc TCP protocol to exchange data; Dat also uses UDP/TCP instead of HTTP.)
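
                        To make the UDP part concrete, this is roughly what Scuttlebutt’s LAN discovery looks like from the receiving side (the port number and announcement format are from memory and may be off) - exactly the kind of raw socket access a WebExtension has no API for today:

                            import socket

                            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                            sock.bind(("", 8008))  # port Scuttlebutt peers broadcast announcements on (assumed)

                            while True:
                                data, (addr, _port) = sock.recvfrom(1024)
                                # Announcements look roughly like: net:192.168.1.5:8008~shs:<pubkey>
                                print(f"peer announcement from {addr}: {data!r}")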

                        I have been building little experiments in Firefox for interfacing with Scuttlebutt which can be seen at:

                        https://viewer.scuttlebot.io/%25csKtp9VmxTjJoKy17O7GA6%2F3S8

                        https://viewer.scuttlebot.io/%25uBev5w8m8iZGVbQDo9fpr%2BCXLB

                        I hope to start a conversation in the add-ons list about TCP and UDP APIs for WebExtensions soon :-)

                        1. 2

                          Fantastic work! :)

                      2. 2

                        Well, on Windows you have a 3rd option: IE.

                      1. 3

                        I’m also very happy with the (relative) ease of use of OpenBSD.

                        I missed the existence of Void. Is there any real advantage over Debian besides no-systemd?

                        1. 8

                          To each their own poison. But I like Void because:

                          • It is a rolling distro, if you are into that kind of stuff.
                          • It has packages for OpenBSD program variants, e.g. netcat, ksh and doas.
                          • The default network setup uses dhcpcd hooks and wpa_supplicant, so you can avoid NetworkManager.
                          • It has a musl libc variant, but many packages are not available for that.
                          • $ fortune -o void

                          The tools for package cross-compilation and image building are pretty awesome too.

                          1. 3

                            While there are more packages for the glibc variant than the musl variant, I would not characterise this as “not many packages”. Musl is quite well supported and it’s really only a relatively small number of things which are missing.

                            1. 2

                              Thanks! I’ll try it next time OpenBSD isn’t suitable.

                            2. 6

                              Void has good support for ZFS, which I appreciate (unlike say Arch where there’s only unofficial support and where the integration is far from ideal). Void also has an option to use musl libc rather than glibc.

                              1. 5

                                Void has a great build system. It builds packages using user namespaces (or chroot on older kernels), so builds are isolated and can run without higher privileges. The build system is also quite hackable, and I’ve heard that it’s easy to add new packages.

                                1. 1

                                  I’ve never tried adding a package, but modifying a package in my local build repository was painless (specifically dwm and st).

                                2. 3

                                  Things I find enjoyable about Void:

                                  • Rolling release makes upgrades less harrowing (you catch small problems quickly and early)
                                  • High quality packages compared to other minimalist Linux distros
                                  • Truly minimalist. The fish shell package uses Python for a few things but does not have an explicit Python dependency. The system doesn’t even come with a crond (which is fine; the few jobs I have that need one, I just run from a script with a sleep).
                                  • Has a well-maintained musl libc version. I’m running musl Void on a media PC right now, and when I have nothing running but X, the entire system uses ~120 MB of RAM (which is fantastic because the system isn’t too powerful).

                                  That said, my go-to is FreeBSD (haven’t gotten a chance to try OpenBSD yet, but it’s high on my list).

                                  1. 1

                                    I’d use Void, but I really prefer rc.d. It’s why I like FreeBSD. It’s so great to use daemon_option= settings to do things like having a firewall for the client only, easily running multiple uwsgi applications, running multiple instances of tor with different configurations (for relays; it doesn’t really make sense for a client), using dnscrypt_proxy_resolver to set the resolver, setting general flags, etc.

                                    For so many services all one needs to do is to set a couple of basic options and it’s just nice to have that in a central point where it makes sense. It’s so much easier to see how configuration relates if it’s at one single point. I know it doesn’t make sense for all things, but when I have a server, running a few services working together it’s perfect. Also somehow for the desktop it feels nicer, because it can be used a bit like how GUI system management tools are used.

                                    In Linux land one has Alpine, but I am not sure how well it works on a desktop. Void and Alpine have a lot in common, even though Alpine seems more targeted at server and is used a lot for containers.

                                    For advantages: If you like runit, LibreSSL and simplicity you might like it more than Debian.

                                    However, I am using FreeBSD these days, because I’d consider it closer to Linux in other areas than OpenBSD is. These days there is nothing that prevents me from switching to OpenBSD or DragonFly, though. So it’s about choosing which advantages and disadvantages you prefer. OpenBSD is simpler, DragonFly is faster and has recent Intel drivers, etc.

                                    For security: on the desktop I think that, other than me doing something stupid, by far the biggest attack vector is a bug in the browser or another desktop client application, and I think neither OS will save me from that on its own. Now that’s not to say it’s meaningless, or that mitigations don’t work, or that it’s the same on servers; it’s more that this is my threat model for the system and use case.

                                  1. 4

                                    more audio and tactile outputs would be great - it would also improve accessibility to technology.

                                    having started wearing glasses this year due to age - my fonts are getting larger and larger on the high resolution screen that I own…

                                    1. 3

                                        Replying to myself - one of the issues I notice is that the focus on the visual means that much information is lost, as we end up capturing the written word as images rather than text.

                                        While OCR and image recognition systems help, when your p-score tells you it’s a giraffe and you know it’s a cat, there is likely to be information loss.

                                      1. 2

                                          For tactile outputs there are a couple of microfluidics-based prototypes. For braille there was BLITAB and a couple of others. There are also some tactile screens where the keyboard rises. The tech seems promising, but other than promotional articles and events I have yet to see one.

                                          My eyes would really appreciate a large e-ink display for work. Most of my time is spent reading or writing text anyway. The largest one I’ve found was a 13-inch screen.

                                      1. 4

                                        I love these, is there any way to automate applying these to a given Firefox Profile?

                                        It’d be so nice to have these set as part of a local Ansible run, for example

                                        1. 7

                                          You can find prefs.js inside the profile folder. You can just add entries like user_pref("media.eme.chromium-api.enabled", false); there — it is a text file you can edit.
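
                                            So a scripted run (Ansible or otherwise) can simply append those lines. A minimal Python sketch, with the profile path and pref names as examples only; run it while Firefox is closed so the file isn’t rewritten underneath you:

                                                from pathlib import Path

                                                PREFS = {
                                                    "media.eme.chromium-api.enabled": False,
                                                    "privacy.donottrackheader.enabled": True,
                                                }

                                                # Example profile directory - yours will differ.
                                                prefs_js = Path.home() / ".mozilla/firefox/abcd1234.default/prefs.js"

                                                with prefs_js.open("a") as f:
                                                    for name, value in PREFS.items():
                                                        f.write(f'user_pref("{name}", {str(value).lower()});\n')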

                                          1. 1

                                            Yeah, that would be cool, I’ve been trying to fully automate my desktop setup using nix.

                                            1. 1

                                                If marionette is enabled (or maybe WebDriver?) you can also alter settings at runtime. The official Python package for this is marionette_driver. I use my own code for the marionette bits, but I set up Firefox settings from shell scripts.
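
                                                A hedged sketch with marionette_driver, assuming Firefox was started with -marionette and is listening on the default port 2828:

                                                    from marionette_driver.marionette import Marionette

                                                    client = Marionette(host="127.0.0.1", port=2828)
                                                    client.start_session()
                                                    # Same pref names as in prefs.js, but applied to the running instance.
                                                    client.set_pref("media.eme.chromium-api.enabled", False)
                                                    print(client.get_pref("media.eme.chromium-api.enabled"))
                                                    client.delete_session()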

                                              1. 1

                                                As far as I know, all WebDriver support in Firefox is implemented by a proxy that connects to the Firefox instance itself via Marionette protocol.

                                                  And the WebDriver protocol is too cross-browser to support preferences. So if you want to randomize some options at runtime (to mess with fingerprinting, I guess) or to allow/block Javascript by a script (I actually use this), a native Marionette client is needed.

                                                1. 2

                                                  As far as I know, all WebDriver support in Firefox is implemented by a proxy

                                                    Yes, geckodriver is the proxy. WebDriver does support browser-specific options; for example, you can set profile preferences when starting up geckodriver, but I don’t know if WebDriver provides an API to do it after the browser is started.

                                                  I manage firefox instances using a little CLI https://github.com/equalsraf/ffcli/blob/master/ff/MANUAL.md and lots of shell script shenanigans.

                                                  or to allow/block Javascript by a script (I actually use this)

                                                  You mean suspend script execution? How do you do that using marionette?

                                                  1. 1

                                                    No, I just start with scripts disabled, and then I manually trigger preference modification (like your prefset) to reenable scripts if I want them enabled. And I generally have many Firefox instances under different UIDs, so the effect is formally local but actually affects only one site anyway. (And launch new instances using rofi — which is similar to dmenu, and I have a way to make some bookmark there be associated with scripts enabled immediately). I gave up on managing the ports when I start too many instances at once (race conditions are annoying), so now they just live in their own network namespaces.

                                            1. 1

                                              Now that I’m no longer so caught up in this pet project article of mine, I realize how BS it is. Deleted.

                                              1. 3

                                                  Can you delete the lobste.rs post too, then? I just tried to find it, and read the comments down to here for not much reason.

                                                1. 3

                                                  Which makes this whole thread completely useless as I have no idea what other posters are referring to.

                                                1. 3

                                                    I actually agree with this, but I also think it will not work for some people or work setups. I use a tiling window manager to assign windows to desktops and switch efficiently. But if your work involves repeatedly going through the same windows, I think a larger screen is preferable; it saves your fingers a lot of work.

                                                    As a side note, now that I use progressive lenses I find my home screen too big and want to switch to a smaller one :)

                                                  1. 1

                                                    Several methods for different purposes.

                                                      I keep a small pocket notebook for keeping track of daily tasks - one or two entries per day at most (too many entries usually mean I got very little done). I write these at the end of the day. The main goal is to be able to look back at the end of the month or later.

                                                    For work notes it depends a lot on what I am doing at the moment. For meetings/presentations I usually write stuff down in Vim (vim-pad and some other plugins)

                                                    When I’m working alone I sometimes like to think by writing down free text on paper, I have a larger notebook just for this.

                                                      For very quick one-line notes I have a small program that writes a rotating log file. I don’t use it much, to be honest. Most times I just jot it down in my pocket notebook.