Threads for raf

    1. 3

      Am I the only one who encodes this in the hostname of the remote, and then sets up the keys in .ssh/config? e.g.

      $ git remote -v
      origin  git@equalsraf-codeberg:equalsraf/filemark.git (fetch)
      

      and then

      Host equalsraf-codeberg
      	Hostname codeberg.org
      	IdentityFile ...
      
      1. 3

        I used to do this, but it broke some tools I used that needed the original hostname (e.g. to link you to a commit in your browser).

      2. 1

        Does this help with the email as well? Or just the ssh key.

        1. 3

          You mean the email used for the commit? No, it does not help with that.

          I usually config email per repo, and leave the global settings empty. Likewise for gpg key.
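
          For the record, a minimal sketch of the per-repo config I mean (the address and key id are placeholders):

          $ cd path/to/repo
          $ git config user.email raf@example.org
          $ git config user.signingkey ABCD1234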

    2. 2

      Actually we already do part of this reversal for mailing lists, when you start the subscription by sending an email and confirm by clicking a link, e.g. for Mailman:

      https://www.list.org/mailman-member/node13.html

      I am not convinced it completely avoids the spam box, the response can still land in there, but yes, it reduces the probability if your email server auto-allows emails from senders in your contact list.

      Concerning the challenge I think some mailing lists use unique subscription email addresses as an alternative to any token in the subscription email, or they do two round trips to place a unique token in the reply text. The second round trip is also an alternative if you do not have DKIM and so on.

      There is probably a deeper question about whether we need email verification at all, but that is beside the point :D

    3. 6
      • Walking a bit, I feel that 2020 just removed leg muscle mass
      • Working on a little programming language - looks ML-ish - still early days, wondering about code comments and ASTs
      • Online games, because why not
    4. 0

      Interesting topic but it reads so emotionally charged that it’s hard to believe there are no further personal reasons for the author to write this.

      Also it’s often stated that IP addresses are an extremely poor way to identify someone reliably, so why would it not be the case now?

      That’s especially true when most people are going to be using mobile data and not their home WiFi, so an attacker would need to do so much work just to get some even more unreliable data on whether someone has contracted covid or not. For what purpose anyway? It’s not like it’s the bubonic plague after all.

      The Bluetooth part I agree is a bit poor, especially the precision, but I guess if it is presented as not a perfect and final metric for social distancing then I don’t see the problem with it.

      Valid points, but the tone is completely off in my opinion.

      1. 1

        While I agree this is emotionally charged, there are good points in there about bluetooth and privacy design in general.

        Why do you say that “it’s often stated that IP addresses are an extremely poor way to identify someone reliably”? If anything this proved to be reliable enough (TM).

        1. 2

          Why do you say that “it’s often stated that IP addresses are an extremely poor way to identify someone reliably”?

          Because that has been tested in court on multiple occasions.

          1. 1

            True that they can fail to work as proof in court. And depending on your ISP (mobile or not) they may rotate frequently, leading to the wrong mobile (or not).

            They can still be used as correlation together with other data though. The concern here is not court use, but rather privacy disclosure. In court they are discarded because they are not enough to identify one single person, at least not without other proof. The article also suggests this by combining IP with user agent.

            Any service that can tie that ip/timeframe to user identity could collude with this Covid tracking service to reveal that information - this could be your ISP, or any other service that you used in a time window and can identify you.

            I think his point stands - compromising that service risks exposing the identity, even if not directly. It is not like there are no alternative designs to avoid this either.

        2. 1

          Exactly what @colonelpanic said.

          Anyway, IP addresses are quite precise indicators when you can go and talk privately with the ISP of that IP with a datestamp in hand.

          But an IP address alone means absolutely nothing: VPNs, restarting your router, changing cellular station and restarting your mobile data - all of that usually changes your IP address, making it not a 1-to-1 match.

          That’s why browser fingerprinting is much worse, as normally nobody changes those things often enough (unless you use extensions to prevent some of that).

      2. 0

        Also it’s often stated that IP addresses are an extremely poor way to identify someone reliably, so why would it not be the case now?

        The IP address is now tied to a mobile phone and not a family home computer.

        1. 1

          Mobile IP addresses get reused very often by the ISP so, as I said, unless you are in touch with people at the ISP you won’t get anything out of IP addresses nowadays.

    5. 4

      I did something along these lines but at a fraction of the cost; here is my eink display. It’s 7” and controlled by a Raspberry Pi Zero and shows me the latest news, weather for the week, and public transport info.

      1. 1

        I eagerly await the day when these things become affordable as a computer screen.

        This one looks interesting https://www.waveshare.com/product/displays/e-paper/12.48inch-e-paper-module-b.htm?___SID=U

        12” with three colors. Also, if you look closely at one of the photos, is that a Raspberry Pi under it?

      2. 1

        The WaveShare link seems to point at a regular screen, not an e-ink display.

        1. 1

          Whoops, sorry, updated! They have a lot of displays btw https://www.waveshare.com/product/displays/e-paper.htm

          1. 1

            I’m considering getting one of them, actually. I have several Raspberry Pis that I can use, but I’ve just been hesitant to buy one because I don’t know how the interface between the Pi and the display works. If it supports HDMI, that’s familiar to me; however, many of these seem to use SPI, which I don’t think I’ve ever used before. If I get one, I want to know that I can actually write a program that can communicate over this connection (and I want to know there are programming language options other than just Python, which most libraries I’ve found seem to be written in).

            1. 1

              Yeah it is SPI and for my usage I use Python with a little forked lib. I’m not sure if there are any eink displays with HDMI.

    6. 5

      In some cases recaptcha is not even about proof of humanity, it is just about spam prevention. A proof of wait/patience would be enough to prevent high-volume spam. Especially in cases where the user is already authenticated anyway.

      In other cases the websites have other means at their disposal and they just don’t use them. My previous mobile operator uses recaptcha on their services despite having my phone number, which they could use to call/sms me. This has led to absurd tech support calls where the person at the other end of the line has to wait 15 minutes for me to solve three captchas.

      Sometimes recaptcha is even used to filter operations that have zero spam impact, like logging in. Basically outsourcing DoS prevention to Google (to some extent).

      I’m not so sure about javascript as a deterrent though. It’s pretty easy to spin up a remote-controlled browser these days.

    7. 12

      I’m blind in one eye and have bad vision in the other. I can’t see in 3D (which is irritating at 3D movies because I feel like I shouldn’t have to pay for a dimension I’m not going to use).

      The vision in my other eye is correctable, but it’s been slowly getting worse over the years. I know that, eventually, I’m going to need some sort of assistive technology to use my computer. I already have to zoom stuff even when I’m wearing my glasses.

      Also, closed-captioning on videos is useful even if you’re not deaf/HoH. Some of us like to watch videos while other people are sleeping.

      1. 2

        Why wouldn’t you go see the 2D version of the movie?

        1. 9

          It was mostly a joke, but for a lot of those movies they have like 12 showtimes for the 3D version and sometimes zero for the 2D so often it’s a scheduling/availability thing.

          Plus my wife has two working eyes (though she must be blind because she married me).

        2. 4

          Why wouldn’t you go see the 2D version of the movie?

          Because your friends picked the movie, or because there is no 2D version available where you live.

          I too am monocular. At some point I had surgery on my good eye, which meant that for a short while I had to use a screen reader. I still keep some hacks around:

          • I like videos to have captions - I could use them as transcripts, and grep through them. It is not always easy to download those captions though.
          • popups are fun: when your screen reader mixes some content with ads, the results are hilarious
          • autoplay is the work of the devil when you are using audio to navigate the page, and even when you are not
          • Reading the web with a screen reader can be tricky, but what is really hard is writing something - filling in a long web form without typing stuff in the wrong place, etc.
          • image loading and zooming is beyond broken for me - I just want to be able to zoom some pictures on a site, but the fancy js image zoom widgets get in the way

          I have no expectations that hordes of developers/designers will agree on designing websites that fix all of this for everyone. I would expect state services to remain accessible; sadly this is not the case.

          I just wish my browser would help me more. In Firefox you can (could?) disable colors and fonts from css rules, which helps a little bit with contrast issues. These days I run web pages with a larger minimum font - I’m perfectly functional without it - but it makes me a lot less tired at the end of the day. In general zoom is broken on most pages.

          I wish you could treat some elements in a web page as you do in a tiling window manager - just keep them out of your way until you want to have a look at them.

    8. 5

      Please - the word you wanted is “lose” and not “loose”. “loose” is the opposite of “tight”, not of “gain”. (I realise you’re probably not a native English speaker and I wouldn’t complain, but it’s right there in the title and it reads wrong - because the words are pronounced differently).

      I too am concerned about the web browser monoculture. I personally continue to use Firefox; although some of the practices of Mozilla occasionally irk me, I still find it preferable to (and far easier to build than) Chrome. The question is, though, what can we actually do about it? Chrome is very successful and has a lot of resources behind it. But web renderers are far from trivial; it’s not like it’s easy to produce a quality feature-complete competitor. That’s why WebKit is doing so well - it’s packaged as a component, not a full browser. (Just as Firefox has Gecko, or whatever its current incarnation is called, in theory).

      So: what do we do? How do we avoid blinking?

      1. 12

        I think we lost when we allowed web standards to get so complex that they can’t be independently implemented without a billion dollar company funding a large team. I don’t think that this is solvable. The existing players are so far ahead that there’s really no catching up.

        1. 3

          Servo is not a billion dollar project.

          1. 6

            Mozilla’s annual revenue is half a billion dollars. Since it’s a non-profit, there’s no real valuation that I’m aware of, but just going off of typical P/E ratios, that would make them a multi-billion dollar company.

            1. 1

              Mozilla Corporation is a for-profit corporation owned by Mozilla Foundation, a nonprofit. That means the private part does have a value. Acquirers usually pay profit times 10 in straightforward sales of businesses. Using their 2016 financials, here are the numbers to look at:

              Revenue: $520 mil

              Development cost: $225 mil

              Marketing: $47 mil

              Administrative: $59.9 mil

              Net gains: $102 mil (if I’m reading it right, because it’s different from the ones I did in college)

              They’re worth somewhere between $1-5 billion if looking at operating profit or revenues with no consideration for up/down swings in the future. Also, there are two numbers there that look inflated: development cost and administrative. For the former, they use a lot of developers in high-wage areas. They could move a good chunk of development to places where good talent, esp. their real estate, is cheaper, to free up money for more developers and/or acquisitions. For administrative, that’s a big number that’s above their marketing spending. I think that should be the other way around. More money into marketing might equal a larger share of users.

      2. 6

        So: what do we do? How do we avoid blinking?

        It seems that Mozilla’s answer to that question is the Servo project. I guess we could start contributing.

        1. 6

          While I like Rust and Servo as a research project - Mozilla does not hold a good track record when it comes to providing a browser as a reusable component. It has been a long time since Gecko could be easily embedded in other browsers, and this does not seem to be a priority for Servo either.

          FWIW I think the main competitor to Blink is actually WebKit, in the sense that it is the easiest open source browser for someone to modify. I would prefer to see people put their effort there.

          1. 13

            GeckoView is an upcoming embedding API. It’s supposed to fix that and is already used in some Firefox products, most notably Focus.

            1. 2

              This needs to be on desktop platforms, too, though, not just Android. But I’m happy to see the progress.

            2. 2

              I haven’t seen any code using GeckoView on the Desktop, is it Android only or can it be used to build Desktop browsers?

              1. 6

                It runs where Gecko runs. Which is Linux/Windows/macOS on Intel, ARM, ARM64, etc.

                First iterations happened to be on mobile because we needed to cash in on the Quantum improvements on mobile. That’s not due to technical constraints.

                1. 3

                  Oh, that is awesome news. I was looking at the repo but could only find examples for Android, and it being mentioned as an Android component. I wish there was a sample for the desktop; something like QtGeckoView would make it quite popular.

                2. 1

                  Is it Java, though? Because, if so - ick. It would be much better to have a C, C++ or Rust API - something that doesn’t automatically add a large runtime overhead. I don’t foresee many desktop browsers being built on top of a Java API no matter how powerful/easy-to-use it is.

                  (Not that I think Java doesn’t have its place, I just don’t think it fits this niche particularly well, except for the obvious case of Android).

                  1. 1

                    No. On Android, we embed GeckoView within Java projects (obviously). This is mostly based on our Android Components.

            3. 1

              That also looks incredibly easy to use. That’s cool.

      3. 1

        Thanks for the feedback. I realized the mistake about that typo too late and unfortunately the URL is tied to it. Fixing it makes a new URL and I can’t edit the URL here. :-(

        I agree with you: building an engine as a component that is easy to embed and build upon is the reason why WebKit became the dominant force here. I wish Mozilla paid more attention to the embeddability of Gecko (which I’ve heard is a mess to build your product on top of). There is no easy way out of the current mess we’re in; people who are concerned about it can basically throw some effort and action towards Mozilla, strengthening the remaining engine before it is too late.

        1. 1

          Fixing it makes a new URL and I can’t edit the URL here.

          Can you add a redirect?

          1. 1

            I will look into crafting a redirect tomorrow as I don’t want to disrupt the little server today. This is not a Jekyll blog. I think that adding a redirect using .htaccess should work, but as the server is being accessed a lot right now, I am a bit afraid of breaking the post and potential readers reaching a broken URL.
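
            For reference, a minimal sketch of the .htaccess approach, assuming mod_alias is enabled and with made-up slugs standing in for the real ones:

            # permanent redirect from the typo'd slug to the fixed one
            Redirect 301 /posts/loosing-the-web /posts/losing-the-web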

    9. 4

      there’s an idea floating around to require a CORS dance for local addresses. https://wicg.github.io/cors-rfc1918/ As always, there’s a problem with backwards compatibility that makes many implementers shy away.

      1. 6

        How ridiculous.

        Here, we propose a mitigation against these kinds of attacks that would require internal devices to explicitly opt-in to requests from the public internet.

        Or, you know, you could change the web browsers so that they can’t make requests to private addresses from public addresses. If I’m on https://lobste.rs/, I don’t want someone to be able to link to http://192.168.1.1/sendMailTo=spam.target@example.com/ or http://127.0.0.1/deleteAllMyShit/. Those devices should be able to sit on my network and be confident that I’m not stupid enough to let people on my network willy-nilly. And my web browser should enforce the distinction between public and private addresses.

        CORS is an utterly stupid system that only serves a useful purpose in situations that should not exist in the first place. The idea that your browser will send off your cookies and other credentials in a request to example.com when that request was made from Javascript from a completely different domain like malicious.org is batshit crazy.

        1. 4

          so that they can’t make requests to private addresses from public addresses

          we only have private addresses because of NAT. There are still networks that have public IPv4 addresses for all devices, or have RFC 1918 addresses for IPv4, but public addresses for IPv6. This restriction you propose does not make that much sense.

          I don’t want someone to be able to link to … http://127.0.0.1/deleteAllMyShit/.

          This is how “native” OAuth apps work on the desktop. So this is actually used. Oh the horror indeed.

          CORS is an utterly stupid system

          Agreed.

          1. 1

            we only have private addresses because of NAT.

            Private IP addresses have nothing to do with NAT.

        2. 3

          CORS is essential for using APIs from the frontend. It also lets you do things like host your own copy of riot.im and still connect to matrix.org.

          1. 1

            Maybe a local daemon could be used to automatically log in to websites. Or maybe support message signing/encryption out of browser.

      2. 3

        To clarify: we can disallow and forbid all the things, turn the privacy crank up to 11 for all of our users. But most people won’t understand why websites are broken and will then use the other browser, because it just works for them.

        Whenever we want to improve privacy and security for all users, we need to make deliberate, meaningful change. Finding the middle ground is hard. But if we don’t, we do our users a disservice by effectively luring them into using a less privacy-protecting browser ;-)

        The alternative is, of course, education and user-defined configuration. We do that too. But lots of people are busy, have other priorities or are resistant to education. It’s not enough to just help the idealists ;)

      3. 2

        Is it not possible to make the browsers return an error at the same speed?

        1. 1

          this isn’t really possible, as far as I can tell - unless you made every error take 10 seconds and rejected anything that took more than 10 seconds, and that’s an unacceptable solution.

          1. 1

            What’s wrong with just holding quick fails for 10 seconds before returning, and failing anything that takes longer than 10 seconds to reply?

            1. 1

              it’s just such a long time to wait.

              1. 1

                A hard value of 10 seconds would probably be too much, and it would not work anyway. The main problem is that the attacker can distinguish between error types using time measurements (whether it’s 3ms or a static 10s). Instead what you want is to delay one error type to take a similar amount of time as the other - maybe you could pick a random delay based on previous errors of the same type.

                This kind of mitigation approach - at least for network times - is not that different from working on a really slow network. I don’t expect the speed-focused browsers like Firefox/Chrome to add this kind of thing. But maybe one of the more privacy-aware spin-offs could implement it.

    10. 6

      I fully agree with this mostly due to accessibility. I find that more and more websites are harder to read and navigate.

      However for some problems I don’t have solutions either, without bringing in some javascript or changing browser internals:

      • the whole endless scroll concept works well for things like real time chat interfaces, but I don’t think you can express it with html alone
      • a lot of things like deferred loading of images, which is done in javascript to speed up loading, could probably be done by the browser
      • if the browser would let me do with frames what my window manager does with windows …

      Also I suspect people stress over custom design so much because the default stylesheet for the browser actually looks like crap.

      1. 3

        I agree with this. A default stylesheet with better typography would do a lot to reduce the appeal of css frameworks.

        A lot of the reasonable and valid use-cases for javascript probably ought to be moved into html attributes and the browser – things like “on click make a POST request and replace this element with the response body if it succeeds”. Kind of a “pave the cowpaths” approach that would allow dynamic front-ends without running arbitrary untrusted code on the client.

        1. 1

          “on click make a POST request and replace this element with the response body if it succeeds”.

          You can actually do that with a target attribute on the form going toward an iframe. It isn’t exactly the same but you can make it work.
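
          Something like this rough sketch (the names are made up):

          <form action="/vote" method="post" target="result">
            <button>Upvote</button>
          </form>
          <!-- the POST response renders here instead of replacing the page -->
          <iframe name="result"></iframe>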

    11. 5

      I’m not sure the blame falls entirely on the shell on this one. The shell has no way to signal to an external program what the type of an argument is. You would need some kind of exec() with typed arguments for that, or as the author suggests some type of in-band convention, but for that you need apps to support it.

      For critical commands like rm/ls and shell expansion (the example of a file called -ls), can’t this be solved if those commands are builtins in the shell? If ls is a builtin and the arguments are typed, then it could avoid that corner case.
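
      For illustration, the corner case in question - the workarounds at the end are the usual in-band conventions, not builtins:

      $ touch ./-ls      # create a file literally named -ls
      $ ls *             # the glob expands to -ls, which ls parses as options
      $ ls -- *          # -- ends option parsing, so -ls is treated as a file
      $ rm ./-ls         # prefixing a path also avoids the ambiguity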

      1. 2

        I’m not sure the blame falls entirely on the shell on this one

        Absolutely. This convention would go across boundaries and mean changes to both the shell and the command line tools. Perhaps we could go further and embed metadata about unix programs in the ELF binary, like a machine-parseable description of the args and their types.

        1. 2

          The ELF metadata would be neat. As an alternative, I think the fish shell grabs some information for command argument completion from the manpages. Maybe you could do something similar to detect compliant commands and cache a table of commands somewhere.

    12. 6

      I wish the browser wars meant we got some more variety rather than more of the same. We are getting boxed in between two vendors (three if you count webkit/safari).

      While I understand everyone wants their browser to be snappy, and speed perception drives user adoption, I have other priorities.

      • I’d like the browser to help me with usability, using larger fonts or disabling some effects (gradients and low contrast are the new blink).
      • Videos sometimes don’t play at all, or have choppy sound. But the native video players in my system can play the same stream just fine. Why can’t I just outsource playback to the OS?
      • input handling in the browser always defers to the web page. Sometimes I just want to scroll the page or paste in an input field - but the webpage defined some bindings that prevent me from doing it. I tried to hack around this with some webkitgtk code, but even then I was not 100% successful (let’s face it, I want normal mode in my browser)

      I’m savvy enough to have a long list of hacks to do some of this stuff. But it seems to be getting harder to do. I consider Firefox the more configurable of the two, but each release breaks something or adds some annoyance that breaks something else. Currently I’m seriously pondering switching from Firefox to Chromium because ALSA does not work with the new sandbox.

      The wide scope of browser APIs means they are more like full operating systems than single applications. In fact I think my laptop lacks the disk/RAM to build Chrome from source. Webkit is likely the most hackable of the bunch, but then again I have no experience with CEF. It seems likely that the major browsers will continue to converge until they become more or less the same, unless some other player steps up.

      1. 10

        Firefox is introducing support for decentralized protocols in FF 59. The white-listed protocols are:

        • Dat Project (dat://)
        • IPFS (dweb:// ipfs:// ipns://)
        • Secure Scuttlebutt (ssb://)

        I think that’s moving things in an interesting direction as opposed to doing more of the same.

        1. 7

          Hey! I made that patch! :-D

          So basically the explanation is simple: there is a whitelist of protocols you can have your WebExtension take over.

          If the protocol you want to control is not on that whitelist, such as a hypothetical “catgifs:” protocol, you need to prefix it like “web+catgifs” or “ext+catgifs”, depending on whether it will be used from the add-on or by redirection to another web page. This makes it inconvenient to use with lots of decentralization protocols, because in many other clients we are already using urls such as “ssb:” and “dat:” (eg, check out beaker browser). In essence this allows us to implement many new cool decentralization features as add-ons now that we can take over protocols. You could be in Firefox browsing the normal web and suddenly see a “dat:” link; normally you’d need to switch to a dat-enabled app, but now you can have an add-on display that content in the user-agent you’re already using.
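
          For reference, a minimal sketch of the manifest.json side of this, using the hypothetical prefixed protocol from above (handler.html is a made-up page bundled with the add-on):

          "protocol_handlers": [
            {
              "protocol": "ext+catgifs",
              "name": "catgifs handler",
              "uriTemplate": "/handler.html#%s"
            }
          ]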

          Still, there is another feature that we need before we can start really implementing decentralization protocols as pure WebExtensions: TCP and UDP APIs like we had in Firefox OS (as an example, Scuttlebutt uses UDP to find peers in a LAN and its own muxrpc TCP protocol to exchange data; DAT also uses UDP/TCP instead of HTTP).

          I have been building little experiments in Firefox for interfacing with Scuttlebutt which can be seen at:

          https://viewer.scuttlebot.io/%25csKtp9VmxTjJoKy17O7GA6%2F3S8

          https://viewer.scuttlebot.io/%25uBev5w8m8iZGVbQDo9fpr%2BCXLB

          I hope to start a conversation in the add-ons list about TCP and UDP APIs for WebExtensions soon :-)

      2. 2

        Well, on Windows you have a 3rd option: IE.

      3. 14

        Rubbish.

        Direct access to all the XUL/XPCOM/whatever messy internals from extensions was a huge disadvantage. Firefox developers couldn’t change anything in the browser because some damn extension would break. Also these extensions barely worked in multi-process mode.

        A well defined, standardized extension API is a massive improvement. (And it makes me extremely happy as an addon developer — same code works in Chromium, Firefox and Edge!!)

        Actual technological advantages were added to Firefox recently, with Stylo, OMTP, and (not in release yet) WebRender. (In the future, WebRender will even render fonts and vector graphics on the GPU!)

        1. 10

          Advantages for developers directly translate to advantages for users. Namely, performance, security and reliability.

          nobody gives a shit

          This is literally false – Firefox has gained market share significantly with the “Quantum” release.

          Statistically, nobody gives a shit about powerful extensions. (IIRC Mozilla telemetry reported about 50% of Firefox users having zero extensions!) Most people only care about performance.

          And yet, Mozilla is constantly adding new APIs to WebExtensions to help angry ungrateful nerds get their unnecessary features back. (Most recently, tab hiding has landed, allowing implementations of Tab Groups and such.)

          1. 4

            Statistically, nobody gives a shit about powerful extensions. (IIRC Mozilla telemetry reported about 50% of Firefox users having zero extensions!)

            That is a big leap there, the other 50% are users too, not to mention those that do not report telemetry.

            The addon changes did make life easier for some extension developers, because they get to use the same code for Chrome and Firefox. Not so much for others - extensions that shell out to the operating system or use binary components are now much harder to do, just like in Chrome.

            While I appreciate the improved speed and the new shiny features, I hope they don’t lead down a path that drops support for many other capabilities, e.g. does supporting WebRender mean dropping support for targets that lack OpenGL 3?

            And yet, Mozilla is constantly adding new APIs to WebExtensions to help angry ungrateful nerds

            This is hardly fair. Many times those ungrateful nerds implemented extensions for features that were later made part of Firefox and put the browser ahead of the competition - adblocking, video autoplay blocking, decent password managers, etc. Not to mention the reason why they are adding new APIs is that they removed the old ones :)

          2. 3

            (IIRC Mozilla telemetry reported about 50% of Firefox users having zero extensions!)

            Not debating the rest of your points, but I would assume that the people who do use more “powerful extensions” are more apt to turn off Mozilla’s telemetry (I have no data to back this up, just a thought)

  1. 3

    I’m also very happy with the (relative) ease of use of OpenBSD.

    I missed the existence of Void. Is there any real advantage over Debian besides no-systemd?

    1. 8

      To each their own poison. But I like void because:

      • It is a rolling distro, if you are into that kind of stuff.
      • It has packages for OpenBSD program variants, e.g. netcat, ksh and doas.
      • the default network setup uses dhcpcd hooks and wpa_supplicant, so you can avoid NetworkManager (see the sketch below)
      • it has a musl variant, but many packages are not available for that
      • $ fortune -o void
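
      For the network bullet, a minimal sketch: under Void’s runit convention you enable those services by symlinking them into /var/service:

      $ sudo ln -s /etc/sv/dhcpcd /var/service/
      $ sudo ln -s /etc/sv/wpa_supplicant /var/service/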

      The tools for package cross compile and image building are pretty awesome too.

      1. 3

        While there are more packages for the glibc variant than the musl variant, I would not characterise this as “not many packages”. Musl is quite well supported and it’s really only a relatively small number of things which are missing.

      2. 2

        Thanks! Will try it next time OpenBSD isn’t suitable.

    2. 6

      Void has good support for ZFS, which I appreciate (unlike say Arch where there’s only unofficial support and where the integration is far from ideal). Void also has an option to use musl libc rather than glibc.

    3. 5

      Void has a great build system. It builds packages using user namespaces (or chroot on older kernels) so builds are isolated and can run without higher privileges. The build system is also quite hackable and I heard that it’s easy to add new packages.

      1. 1

        Never tried adding a package, but modifying a package in my local build repository was painless (specifically dwm and st).

    4. 3

      Things I find enjoyable about Void:

      • Rolling release makes upgrades less harrowing (you catch small problems quickly and early)
      • High quality packages compared to other minimalist Linux distros
      • Truly minimalist. The fish shell package uses Python for a few things but does not have an explicit Python dependency. The system doesn’t even come with a crond (which is fine - the few scripts I have that need one I just run in a loop with a sleep; see the sketch after this list).
      • Has a well maintained musl-libc version. I’m running musl void on a media PC right now, and when I have nothing running but X, the entire system uses ~120MB of RAM (which is fantastic because the system isn’t too powerful).
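
      The crond replacement mentioned in the list is really just this kind of thing (the task name is a stand-in):

      #!/bin/sh
      # poor man's cron: run the task roughly once an hour
      while true; do
          fetch-feeds    # placeholder for the actual script
          sleep 3600
      done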

      That said, my go-to is FreeBSD (haven’t gotten a chance to try OpenBSD yet, but it’s high on my list).

    5. 1

      I’d use void, but I really prefer rc.d. It’s why I like FreeBSD. It’s so great to use daemon_option= settings to do stuff like having a firewall for clients only, easily running multiple uwsgi applications, running multiple instances of tor with different configurations (for relays; it doesn’t really make sense for a client), using dnscrypt_proxy_resolver to set the resolver, setting general flags, etc.

      For so many services, all one needs to do is set a couple of basic options, and it’s just nice to have that in a central point where it makes sense. It’s so much easier to see how configuration relates if it’s at one single point. I know it doesn’t make sense for all things, but when I have a server running a few services working together it’s perfect. Also somehow for the desktop it feels nicer, because it can be used a bit like how GUI system management tools are used.

      In Linux land one has Alpine, but I am not sure how well it works on a desktop. Void and Alpine have a lot in common, even though Alpine seems more targeted at servers and is used a lot for containers.

      For advantages: If you like runit, LibreSSL and simplicity you might like it more than Debian.

      However I am using FreeBSD these days, because I’d consider it closer to Linux in other areas than OpenBSD is. These days there is nothing that prevents me from switching to OpenBSD or DragonFly though. So it’s about choosing which advantages/disadvantages you prefer. OpenBSD is simpler, DragonFly is faster and has recent Intel drivers, etc.

      For security: on the desktop I think that, other than me doing something stupid, by far the biggest attack vector is a bug in the browser or another desktop client application, and I think neither OS will save me from that on its own. Now that’s not to say it’s meaningless, or that mitigations don’t work, or that it’s the same on servers; it’s more that this is my threat model for the system and use case.

  2. 4

    More audio and tactile outputs would be great - they would also improve accessibility to technology.

    Having started wearing glasses this year due to age - my fonts are getting larger and larger on the high resolution screen that I own…

    1. 3

      Replying to myself - one of the issues I notice is that the focus on the visual means that much information is lost, as we end up capturing the written word as images rather than text.

      While OCR and image recognition systems help, when your p-score tells you it’s a giraffe and you know it’s a cat, there is likely to be information loss.

    2. 2

      For tactile outputs there are a couple of microfluidics based prototypes. For braille there was BLITAB and a couple others. There are also some tactile screens where the keyboard rises. The tech seems promising, but other than promotion articles and events I have yet to see one.

      My eyes would really appreciate a large size e-ink display for work. Most of my time is spent reading or writing text anyway. The largest one I’ve found was a 13in screen.

  3. 4

    I love these; is there any way to automate applying these to a given Firefox profile?

    It’d be so nice to have these set as part of a local Ansible run, for example

    1. 7

      You can find prefs.js inside the profile folder. You can just add entries like user_pref("media.eme.chromium-api.enabled", false); there — it is a text file you can edit.
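
      One caveat: Firefox rewrites prefs.js on shutdown, so edit it while the browser is closed — or put the same lines in a user.js file in the profile folder, which is read at every startup. E.g. (the second pref is just another example of the format):

      user_pref("media.eme.chromium-api.enabled", false);
      user_pref("browser.safebrowsing.malware.enabled", false);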

    2. 1

      Yeah, that would be cool, I’ve been trying to fully automate my desktop setup using nix.

    3. 1

      If marionette is enabled (or maybe webdriver?) you can also alter settings at runtime. The official Python package for this is marionette_driver. I use my own code for the marionette bits, but I set up Firefox settings from shell scripts.

      1. 1

        As far as I know, all WebDriver support in Firefox is implemented by a proxy that connects to the Firefox instance itself via Marionette protocol.

        And the WebDriver protocol is too cross-browser to support preferences. So if you want to randomize some options at runtime (to mess with fingerprinting, I guess) or to allow/block Javascript by a script (I actually use this), a native Marionette client is needed.

        1. 2

          As far as I know, all WebDriver support in Firefox is implemented by a proxy

          Yes, geckodriver is the proxy. WebDriver does support browser-specific options - for example you can set profile preferences when starting up geckodriver - but I don’t know if WebDriver provides an API to do it after the browser is started.
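
          For example, a minimal sketch of passing a pref at session creation via geckodriver’s WebDriver endpoint (the pref here is just an example):

          $ geckodriver --port 4444 &
          $ curl -H 'Content-Type: application/json' \
                -d '{"capabilities": {"alwaysMatch": {"moz:firefoxOptions":
                    {"prefs": {"media.eme.chromium-api.enabled": false}}}}}' \
                http://127.0.0.1:4444/session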

          I manage firefox instances using a little CLI https://github.com/equalsraf/ffcli/blob/master/ff/MANUAL.md and lots of shell script shenanigans.

          or to allow/block Javascript by a script (I actually use this)

          You mean suspend script execution? How do you do that using marionette?

          1. 1

            No, I just start with scripts disabled, and then I manually trigger preference modification (like your prefset) to re-enable scripts if I want them enabled. And I generally have many Firefox instances under different UIDs, so the effect is formally local but actually affects only one site anyway. (I launch new instances using rofi — which is similar to dmenu — and I have a way to make some bookmarks there be associated with scripts enabled immediately.) I gave up on managing the ports when I start too many instances at once (race conditions are annoying), so now they just live in their own network namespaces.

  4. 1

    Now that I’m no longer so caught up in this pet project article of mine, I realize how BS it is. Deleted.

    1. 3

      Can you delete the lobste.rs post too then? I just tried to find it, and read the comments down to here for not much reason.

    2. 3

      Which makes this whole thread completely useless as I have no idea what other posters are referring to.

  5. 3

    I actually agree with this, but I also think it will not work for some people or work setups. I use a tiling window manager to assign windows to desktops and switch efficiently. But if your work involves repeatedly going through the same windows, I think a larger screen is preferable; you save your fingers a lot of work.

    As a side note, now that I use progressive lenses I find my home screen too big and want to switch to a smaller one :)

  6. 1

    Several methods for different purposes.

    I keep a small pocket notebook for keeping track of daily tasks - one or two entries per day at most (too many entries usually means I got very little done). I write these at the end of the day. The main goal is to be able to look back at the end of the month or later.

    For work notes it depends a lot on what I am doing at the moment. For meetings/presentations I usually write stuff down in Vim (vim-pad and some other plugins)

    When I’m working alone I sometimes like to think by writing down free text on paper, I have a larger notebook just for this.

    For very quick one-line notes I have a small program that writes a rotating log file. I don’t use it much, to be honest. Most times I just jot things down in my pocket notebook.