1. 4

I chose YAML over JSON for a project recently where the client is supposed to edit the metadata, but I think this was probably a poor choice due to the fiddliness of getting whitespace consistent.

    1. 4

      YAML is a superset of JSON, so if you don’t use any of the more advanced features of YAML you can still use JSON for the templating, etc.
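
      For example, a quick sketch with Python and PyYAML (with the caveat, raised below, that not every YAML library accepts all of JSON):

          import json
          import yaml  # PyYAML

          doc = '{"title": "My page", "tags": ["a", "b"], "draft": false}'

          # The same text is valid JSON and valid YAML flow syntax, so a client can
          # hand-edit it as JSON while the tooling keeps parsing it with a YAML library.
          assert yaml.safe_load(doc) == json.loads(doc)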

      1. 3

        YAML is only a superset of JSON after a certain YAML revision. Most YAML libraries I’ve dealt with don’t actually implement that version of YAML, and so will reject JSON data due to syntax errors.

        Your mileage may vary; check your library documentation.

        1. 1

          If I have the client write JSON then it’s not obvious to me what benefit I get from using YAML at all.

      1. 7

        I am not sure what this article adds. It’s a rant, but there are no concrete examples of bad documentation, no examples of Apple documentation that is well done, and no examples of what you were concretely trying to achieve and how the Apple documentation fails there. This could be a nice analysis, but I didn’t learn much beyond the title (except that apparently Ember.js documentation is good and Rust policy requires documentation of every symbol).

        1. 6

          This Stack Overflow thread about sample Apple Watch code that has basically never worked is an excellent example: https://stackoverflow.com/questions/46086536/apple-sample-code-for-watchkit-extension-background-refresh

        1. 3

          Given the state of 3D printing technology, it seems reasonable that someone could take the old-school printing press route by 3D printing a plate of a document and using that to apply ink to paper.

          This is much slower, more inconvenient, and more costly, of course, but if you need to make lots of copies of something without recognizable handwriting or tracking, it’d work.

          I’m curious now as to whether there’s any existing software to convert PostScript into the data a 3D printer would need.
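
          I don’t know of an off-the-shelf tool, but here’s a hand-wavy sketch of the pipeline in Python: rasterise the PostScript with Ghostscript at a very low DPI, turn each inked pixel into a raised block in an OpenSCAD model (mirrored, since a plate prints reversed), then export an STL from OpenSCAD and slice it as usual. The device name, resolution and block sizes are guesses to tune, not a tested recipe.

              import subprocess

              def ps_to_scad(ps_file, scad_file, dpi=10):
                  # Render to plain-text PBM (P1): "1" = inked pixel, "0" = blank.
                  subprocess.run(["gs", "-sDEVICE=pbm", f"-r{dpi}", "-o", "plate.pbm", ps_file],
                                 check=True)
                  with open("plate.pbm") as f:
                      tokens = [t for line in f for t in line.split("#", 1)[0].split()]
                  assert tokens[0] == "P1"
                  width, height = int(tokens[1]), int(tokens[2])
                  bits = "".join(tokens[3:])  # P1 pixels may or may not be space-separated

                  with open(scad_file, "w") as out:
                      out.write(f"cube([{width}, {height}, 1]);\n")  # base plate, 1 unit thick
                      for i, bit in enumerate(bits):
                          if bit == "1":  # inked pixel -> raised block, mirrored left-right
                              x, y = i % width, i // width
                              out.write(f"translate([{width - 1 - x}, {height - 1 - y}, 1]) "
                                        "cube([1, 1, 1]);\n")

              ps_to_scad("page.ps", "plate.scad")  # then open in OpenSCAD and export an STL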

          1. 4

            You could get a plotter relatively cheaply, with native postscript support and no tracking dots!

            1. 2

              But now you have a physical block to dispose of, and ink and rollers. And you’ll likely add fingerprints while printing.

              The idea is neat, though. You can 3d-print a block which uses a nice dithering pattern, and print that by hand.

            1. 22

              I really liked the part about the monoids: it cemented the power of the abstraction in a way I hadn’t seen before. In order to show that an algorithm was safely parallelizable, he just needed to find a monoid. That’s super cool.

              Not sure how much I believe his conclusion, though. He linked the wc code, and it doesn’t really look optimized at all.

              1. 3

                Yeah the monoid abstraction is one of the single most powerful (and simple!) abstractions that’s helped me even in languages outside of Haskell. You don’t need a fancy type system to make use of it either - if you can prove something satisfies the laws then you as the developer can use that knowledge to make engineering decisions.
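
                To make that concrete, here’s a rough sketch in Python (not the article’s Haskell) of the word-count monoid: each chunk of text is summarised independently, and an associative combine repairs words cut at chunk boundaries, so the summaries can be computed in parallel and reduced in any grouping. The names and the chunking are mine, purely for illustration.

                    from dataclasses import dataclass
                    from functools import reduce

                    @dataclass(frozen=True)
                    class Chunk:
                        """Summary of one piece of text for parallel word counting."""
                        left_in_word: bool   # chunk starts mid-word (first char not whitespace)
                        words: int           # words counted within the chunk
                        right_in_word: bool  # chunk ends mid-word (last char not whitespace)

                    EMPTY = None  # identity element of the monoid

                    def summarize(text):
                        if not text:
                            return EMPTY
                        return Chunk(not text[0].isspace(), len(text.split()), not text[-1].isspace())

                    def combine(a, b):
                        """Associative: merge adjacent summaries, fusing a word split across the cut."""
                        if a is EMPTY:
                            return b
                        if b is EMPTY:
                            return a
                        overlap = 1 if a.right_in_word and b.left_in_word else 0
                        return Chunk(a.left_in_word, a.words + b.words - overlap, b.right_in_word)

                    text = "the quick  brown fox"
                    pieces = [text[:7], text[7:]]  # the cut splits "quick"; combine() avoids double counting
                    total = reduce(combine, map(summarize, pieces), EMPTY)
                    assert total.words == len(text.split())  # 4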

              1. 7

                See also: spoon theory.

                I’m autistic and have ADHD. I have many friends who are neurodivergent, physically disabled, deal with chronic pain, mental illness, etc. The shared resource pool of willpower and cognitive power is something we are extremely aware of as a major factor in our lives. It absolutely applies (to varying degrees) to everyone though.

                Good UX is an accessibility feature. Which sounds really obvious when you state it like that come to think of it.

                1. 7

                  Since this is a rather practical problem, I think this would be better discussed as a GitHub pull request? Personally, I’d have no issue, and I guess if you come with the patch, most wouldn’t either.

                  1. 16

                    In a reply to the first GitHub issue opened about this topic, it was stated that this should be a meta thread on the site rather than a GitHub issue.

                    1. 1

                      Oh, never mind then.

                  1. 3

                    On a side note, why is there no volume safety in PulseAudio?

                    Both the master and application volumes might be set very high for a number of reasons (e.g. after listening to a very quiet recording).

                    Something should measure the real output volume level and prevent sudden/loud noises.

                    1. 2

                      There’s no way to know what the actual physical volume will be coming out of the device plugged in. One pair of headphones might sound just fine at 50% volume and another might need 10% on the same computer. Usually speakers need the volume set higher as well. Some audio cards are smart about this and can detect speakers vs headphones to an extent, but it’s not an exact science by any means.

                      That said, 100% volume is almost never the right answer, and I’d love it if I could get PulseAudio to just never go above a certain point.
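
                      There’s no such knob in PulseAudio itself as far as I know, but a crude user-space approximation is to poll the default sink and clamp it whenever it goes above a ceiling. A rough sketch, assuming a pactl new enough to have get-sink-volume (older versions would need to parse pactl list sinks instead); the 70% cap is arbitrary:

                          import re
                          import subprocess
                          import time

                          CAP = 70  # percent

                          while True:
                              out = subprocess.run(
                                  ["pactl", "get-sink-volume", "@DEFAULT_SINK@"],
                                  capture_output=True, text=True, check=True,
                              ).stdout
                              levels = [int(p) for p in re.findall(r"(\d+)%", out)]
                              if levels and max(levels) > CAP:
                                  subprocess.run(
                                      ["pactl", "set-sink-volume", "@DEFAULT_SINK@", f"{CAP}%"],
                                      check=True,
                                  )
                              time.sleep(0.5)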

                      1. 1

                        Of course the physical sound volume depends on many factors; e.g. you can very well have an external amplifier. I wrote “output volume level”: the level on the sound card output, after all the internal mixers, filters and amplifiers.

                        1. 1

                          That said, 100% volume is almost never the right answer, and I’d love it if I could get PulseAudio to just never go above a certain point.

                          Be careful what you wish for, because I can think of a ton of April Fools’ jokes I could pull with a feature like that… But I can also think of various malicious uses for a feature like this (for example, ransomware asking for $5 to get your sound back after capping it at 10%).

                      1. 12

                        Each clock cycle uses a little bit of power, and if all I’m doing is typing out Latex documents, then I don’t need that many clock cycles. I don’t need one thousand four hundred of them to be precise.

                        I recall there being some paper or article showing that doing the work quickly at a higher frequency/power draw and then dropping back into a low-frequency/sleep state costs less energy than doing the work slowly at a lower frequency the whole time. Basically the CPU would spend more time in sleep states with an ‘on demand’ type governor (run at low freq, but elevate to high freq when utilization is high) vs a governor that always ran at the highest p-state (lowest freq) all the time. I’m having trouble finding the specific paper/article, though; I’ll keep searching.

                        1. 15

                          Search for “race to idle”

                          1. 8
                          2. 4

                            I remember reading the same thing, but I think it had the added context of being on mobile. Having the CPU awake also meant having other hardware like the radios awake, because it was processing notifications and stuff. In this case, the rest of the hardware is staying awake regardless, so I think it’s really just reducing the number of wasted CPU cycles.

                            I’d be interested if you or someone else could find the original source for this again to fact check!

                            1. 3

                              The counterbalance here is the increasing cost for each 100MHz as frequencies get higher. This is old, but https://images.anandtech.com/doci/9330/a53-power-curve.png shows the measured shape of a real curve. This StackExchange response helps explain why it isn’t flat.

                              So factors around race-to-idle include how that power-frequency curve looks, how much stuff you can turn off when idle (often not just the cores; see artemis’s comment) and how CPU-bound you are (2x freq doesn’t guarantee 2x speed because you spend some time waiting on main memory/storage/network).

                              Some of that’s workload-dependent, and the frequency governor doesn’t always know the workload when it picks a frequency. Plus you’ve got other goals (like try to be snappy when it affects user-facing latency, but worry less about background work) and other limits (power delivery, thermals, min and max freq the silicon can do). So optimizing battery life ends up really, uh, “fun”!

                              (Less related to race-to-idle, but the shape of the power curve also complicates things at the low end; you can’t keep a big chip running at all at 1mW. So modern phones (and a chip Intel is planning) can switch to smaller cores that can draw less power at low load. Also, since they’re tiny you can spam the chip with more of them, so e.g. Intel’s planning one large core and four small. Fun times.)
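
                              To put rough numbers on that trade-off, here’s a toy model (all constants invented): dynamic CPU power scales roughly with f·V², but the platform also burns a fixed “awake” power while the job runs and a much smaller “sleep” power once it’s done; that fixed cost is what race-to-idle exploits.

                                  # Energy to finish a fixed batch of work, then sleep until a deadline.
                                  def energy(f_ghz, volts, cycles, deadline_s, c=1.0, p_awake=2.0, p_sleep=0.2):
                                      t_busy = cycles / (f_ghz * 1e9)         # seconds spent computing
                                      p_dyn = c * f_ghz * volts ** 2          # CPU dynamic power (arbitrary units)
                                      busy = (p_dyn + p_awake) * t_busy       # energy while working
                                      idle = p_sleep * (deadline_s - t_busy)  # energy while asleep afterwards
                                      return busy + idle

                                  work = 2e9    # cycles to retire
                                  deadline = 4  # seconds

                                  print("race to idle: ", energy(2.0, 1.0, work, deadline))  # fast, high V, long sleep
                                  print("slow & steady:", energy(0.5, 0.6, work, deadline))  # slow, low V, barely any sleep

                              With these made-up constants, racing to idle wins because the fixed awake power dominates; flip them (tiny awake power, a steep voltage/frequency curve, a memory-bound workload) and slow-and-steady can win instead, which is the workload and platform dependence described above.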

                              1. 1

                                Ooh, today somebody happened to post a couple pretty charts of recent high-end Intel desktop chips’ power/frequency curves, and charted speed per watt as a bonus. They also fit a logarithmic curve to it, modeling power needs as growing exponentially to hit a given level of perf, and it looks like it worked reasonably well.

                              2. 1

                                Yes, I remember reading the same thing. But maybe a CPU this old isn’t as efficient at transitioning in and out of low-power states? Just a guess, assuming his claim of +1 hour is true.

                                1. 1

                                  Maybe it’s not linear. That study’s comparison between ‘low freq’ and ‘high freq’ could have been something like 40% (of the available clock range) vs 90%? And maybe at 1% the CPU power draw is so much lower that it’s even better than race-to-idle (but perhaps considered an unlikely/uncommon configuration).

                                  1. 1

                                    The power consumption of a CMOS circuit scales with the square of the operating voltage, though, so intuitively I would expect 100 ms at 0.5 V to be more energy-efficient than 50 ms at 1.0 V. Chips are extremely complex devices, though, and I’m probably ignorant of a power-saving strategy or physical effect that side-steps this. Please let me know when you find that article - I’m curious to see which of my assumptions have been violated.

                                    1. 1

                                      I found it and put the link in another comment.

                                    2. 1

                                      This makes a lot of sense and mirrors my experience with an X220 on OpenBSD. I got about one more hour of battery life (about 5 hours -> 6 hours ish) just by allowing it to run at a higher frequency but still spend 95% of its time at a lower one.

                                      Also, while the tools built into OpenBSD do a good job of controlling power usage, I found I was getting even better battery life on a standard Arch Linux install with no configuration of powertop or anything else.

                                    1. 47

                                      keep your existing email account

                                      make a new email account with whatever service provider you want

                                      sign up for new accounts with your new email

                                      whenever you have to log in to an existing service with your current email, if you have time, switch it to your new email

                                      repeat until in a couple years everything is eventually switched over to your new one

                                      1. 20

                                        This. No need to delete your gmail account.

                                        I moved to Fastmail a year ago, and have been happy with it.

                                        Every time I get an email on Gmail, I spend a few minutes updating the email address in whatever service sent it.

                                        The reason for not deleting is: there are emails you might only get once a year, like from the tax guys or the MOT/TV license people (here in the UK). You might have to react quickly, and sometimes it’s just easier to reply from Gmail and update the address later.

                                        1. 6

                                          Not to mention missed opportunities with people that only know your Gmail account.

                                          1. 2

                                            Absolutely agree with this. I switched to Fastmail with my own domain something like 5 years ago and I still have my Gmail address. I have it set to forward to my “new” address as well, so that I don’t even have to log in to Gmail. When I notice that an email I care about originally came to my Gmail address, I update it (or tell the person who sent it).

                                          2. 7

                                            This is exactly what I did, though in addition I forwarded my Gmail to my new address.

                                            1. 7

                                              One missed step: Set up forwarding rule from old account to new account.

                                              1. 7

                                                Great advice. I was going to say this, but was pleasantly surprised to find it was already the top response. I will add: Your old account is still attack surface for anything that’s linked to it. Don’t get lax on the security just because you no longer use it every day.

                                                With gmail, if you delete your account, nobody else can ever register that username. This is a very important precaution since it prevents people from impersonating you. It is not necessarily the case with other email services. So, if you are applying this advice to migrate away from a mail service that isn’t gmail, look into whether it has that protection. If not, strongly consider never deleting the account.

                                                1. 3

                                                  Excellent advice. I also used this opportunity to migrate to a password manager, and ensure that I have updated, unique passwords everywhere.

                                                  1. 1

                                                    Same that I did, I moved to protonmail and just check my gmail once in a blue moon at this point in case someone forgot I had updated it. I’ve had my gmail since the early invite-only beta days and it gets bombarded with spam and garbage almost constantly as well as a lot of people using my email address to sign up for things in the states that apparently don’t do email verification…

                                                    I also used to get emails addressed to someone working at NVidia, got a medical insurance claim form at one point I think, as well as an invite to a wedding…

                                                    1. 2

                                                      Did you go to the wedding?

                                                      1. 1

                                                        No. But I did reply to the invite saying I wouldn’t be able to make it.

                                                    2. 1

                                                      I’ve been doing this, along with having Gmail forward all my email to my new account. I did that so I’d have copies of all my emails. I also did a dump and then import of all my previous emails.

                                                      Honestly, it’s been fine. I’ve also unsubscribed from a lot of things and deleted a few accounts.

                                                    1. 7

                                                      But—think about this: I don’t have to take on cloud hosting! I don’t need to scale the app! This is a huge relief. URGENT QUESTION: Why are we trying to even do this?

                                                      Erm, yes you do. Unless you can afford to run your computer 24/7 so someone can see the site, or you have maybe 50 other people who use Dat and are willing to seed your site.

                                                      The other problem is that Dat is simply a web-only technology. That is, unless development of libdat has recently picked up from being totally dead. Although you can’t really blame them, because the last time I looked at the source code there wasn’t a ‘libdat’ protocol at all. A major chunk of the protocol is built on the back of two Node.js-only libraries that handle DHT swarm communication. Much of it was completely unspecified other than in those two implementations. The amount of ‘dat’ on top of that was simply a port number and a couple of async libraries.

                                                      1. 2

                                                        Agreed that to reliably host a Dat archive, you need to maintain a server, or use a cloud service like https://hashbase.io/.

                                                        The Dat protocol has recently been documented in much more detail, which can be found at https://www.datprotocol.com/ - particularly the “How Dat Works” guide is excellent: https://datprotocol.github.io/how-dat-works/

                                                        Alternate implementations, in Rust (https://datrs.yoshuawuyts.com/) and C++ (https://datcxx.github.io/), are currently in progress. These are not fully usable yet - the Rust implementation is blocked on the standardization of async APIs in Rust (https://areweasyncyet.rs/). The lead developer of the Rust implementation is part of the Rust async working group, so this should all be coming along soon. Currently, it’s possible to read the Hypercore data structure, but not replicate it across the network, as far as I’m aware.

                                                        1. 1

                                                          As the number of users scales, theoretically the number of users seeding the site also scales, yes?

                                                          This has certainly proven to be true for torrents, where I’ve been able to download even seemingly-obscure torrents at any hour of the day without issue.

                                                          1. 5

                                                            This has certainly proven to be true for torrents, where I’ve been able to download even seemingly-obscure torrents at any hour of the day without issue.

                                                            I have several torrents that get only tens of kilobytes per second, with a single peer, and I have encountered numerous others that have several leechers and no listed seeders. For an example of this: a while back ctrix handed out USBs of previously unreleased music, and music that was (used to be?) on his website, at gigs he did. That USB was put on filesharing sites; I’ve seen a magnet link for it. However, the torrent is as dead as a doornail.

                                                            The author says that dat “truly RESISTS centralization” when it’s the opposite: it openly encourages such centralization because of the ‘one writer’ policy. Common sense says that if you can only make a website from one machine (i.e. moving or losing machines causes the site to become immutable), then you’ll do it from a 24/7 hashbase server or something similar.

                                                            Ultimately the question is: Do you want to store a copy of every single site you’ve been to?

                                                            Most users don’t.

                                                            As can be seen with Torrenting, it rests in the hands of people who can afford Terabyte hard disks, fast internet, and seed boxes. Because the ranking of what is seeded is popularity, the internet equivalent of sites like CNN and such will get extremely high bandwidth.

                                                            As the number of users scales, theoretically the number of users seeding the site also scales, yes?

                                                            Exactly! So those who can afford servers will be more popular (or, conversely, those who are more popular will have less need to pay for servers), because their site is available 24/7 and they have to spend less on scaling.

                                                            At the risk of sounding over-critical:

                                                            In other words, those who aren’t able to have seedboxes and fast internet, won’t be affected or touched by Dat, and those who already can have seedboxes and fast internet, will be better off because they suddenly don’t have to pay as much for those things. This is like the thousands of other efforts by the middle class to aid the ‘poor’ and give the ‘power to the people’ – because they have no idea what poverty really looks like, because they have no experience of what being poor is like, and because they haven’t thought about the impacts of what they do past “this will be really cool and might work”.

                                                            1. 3

                                                              They were not that obscure then ;)

                                                              I did try to download some actually obscure torrents, and sometimes it took several months, if I ever managed to complete them at all.

                                                          1. 3

                                                            The author’s claims about poor documentation are, amusingly, bolstered by the problems with his other arguments. For instance, IPNS (despite the name) is really a way to support the kind of mutability he wants (and has nothing much to do with names), and despite the research he’s done he somehow didn’t manage to understand that.

                                                            IPFS really does ‘just work’ in the sense that a sanely-structured website (one that uses no leading ‘/’ & is totally static) can just be given to a single command & become immediately available. Newcomers like OP have an extremely hard time getting anybody to tell them what that one command is.

                                                            1. 3

                                                              You’re right that IPNS is the built-in way of handling mutable references. The author is right that it’s unusable.

                                                              Not only is IPNS slow and unreliable, but entries also have a TTL of 1 day. This requires entries to be re-published at least that often, or else the name becomes unresolvable (which is bad news if it’s hard-coded anywhere, like a DNS record).

                                                              In the short term this is annoying, since it places a constant maintenance burden on the publisher. Keep in mind that we’re talking about static sites being served in a decentralised way from a multitude of unreliable peers, which is otherwise “fire and forget”. Even though this might only require a simple cron job, the site owner might need to set up reliable, single-point-of-failure infrastructure like a VPS just to run that cron job. Perhaps someone could offer this as a service, but that (a) defeats the point of IPNS being distributed and (b) requires handing over private keys. Also worth noting that this is orders of magnitude more maintenance than DNS (my domain name is renewed once a year, not once a day).

                                                              In the long term this is a race condition waiting to happen. After publishing an update, we might find it gets overwritten by a re-publishing of the previous entry that was happening concurrently. Considering how slow IPNS can be, I can imagine this being quite likely. Avoiding this would require some coordination layer between the updating mechanism and the refreshing mechanism, like a lock file. Whilst the solution is pretty trivial (again, things become trivial if we don’t mind introducing single points of failure), it seems silly to me considering that IPNS itself is meant to be a coordination layer.

                                                              a sanely-structured website (one that uses no leading ‘/’ & is totally static)

                                                              That’s a No True Scotsman fallacy, since it implies that using absolute URLs is ‘insane’ rather than ‘a perfectly sensible approach for many circumstances’. One of those circumstances was the author’s site, which is why they encountered problems when using IPFS and needed to write a script to relativise them. It’s absolutely right for the author to document the issues they faced, since I imagine many other people trying to do the same thing will hit the same problem. I know of another person who encountered exactly this issue, and had to write a script to relativise their site’s URLs before it would work on IPFS: me!

                                                              If you want a technology like IPFS to be adopted, I would recommend finding out what problems people encounter with it, and either fixing those within the project (if possible) or else pre-empting the issue by pointing out to new users that they might encounter this problem and providing help (like relativising scripts) for those who do. This is what the author did.

                                                              If you want a technology like IPFS to be adopted, I would not recommend that commonly-encountered problems be brushed aside without any help given, or to invent your own definition of “sane structure” and blame prospective users for not following it. That’s an effective way to alienate people.

                                                              1. 1

                                                                entries also have a TTL of 1 day

                                                                This is new information to me, and indeed would make IPNS unusable.

                                                                That’s a No True Scotsman fallacy, since it implies that using absolute URLs is ‘insane’ rather than ‘a perfectly sensible approach for many circumstances’.

                                                                Using absolute paths is a bad practice because it introduces exactly the kind of fragility the author experiences. Literally any rehosting of the HTML within a subdirectory would break such absolute paths (including trying to view the files on localhost without a server). In other words: it doesn’t make sense to blame IPFS for the author’s decision to do linking in an extremely fragile way, any more than it makes sense to blame the developers of xfs for the same when trying to view the pages from the filesystem.

                                                                There are cases when absolute paths are necessary, & where dealing with that fragility makes sense. And, if you choose to do it that way, fine. It’s absurd to make a decision like that and then blame a static file serving tool for failing to automatically work around it.

                                                                1. 1

                                                                  Using absolute paths is a bad practice because it introduces exactly the kind of fragility the author experiences.

                                                                  If you mean “a bad practice when planning to host a site on IPFS” then I agree; if you mean “a bad practice when making Web sites” then this is the same No True Scotsman fallacy as before.

                                                                  it doesn’t make sense to blame IPFS for the author’s decision to do linking in an extremely fragile way

                                                                  Who’s “blaming” IPFS? I’m certainly not. It could be construed that the author is “blaming” IPFS, but from my reading they’re just pointing out a common pitfall that they hit which the project didn’t warn them about or help them to work around (and that less-savvy developers might not be able to work around it like they did, or like I did).

                                                                  If there is anything to “blame”, it would be attempting to switch out the storage/protocol underlying an existing site from HTTP to IPFS and expecting it to work with no changes. Yet I don’t think the author had such an expectation, and neither did I. We both hit a problem, found a way to work around it, and documented those workarounds (here’s my own blog post on the subject). It would be nice if the IPFS project pointed out the problem and provided a solution (e.g. “if you want to upload a Web site to IPFS, you may find that absolute links are broken; here is a script which will update those links, which you can run before ipfs add to avoid the problem”).
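
                                                                  For what it’s worth, a naive sketch of such a relativising pass (regex-based, so it will miss edge cases like URLs inside CSS or inline JS; the directory name is hypothetical):

                                                                      import os
                                                                      import re

                                                                      SITE = "public"  # hypothetical build output directory

                                                                      for dirpath, _dirs, files in os.walk(SITE):
                                                                          for name in files:
                                                                              if not name.endswith(".html"):
                                                                                  continue
                                                                              path = os.path.join(dirpath, name)
                                                                              rel = os.path.relpath(dirpath, SITE)
                                                                              depth = 0 if rel == "." else rel.count(os.sep) + 1
                                                                              prefix = "../" * depth or "./"
                                                                              with open(path, encoding="utf-8") as f:
                                                                                  html = f.read()
                                                                              # href="/css/site.css" -> href="../css/site.css" (depth-dependent)
                                                                              html = re.sub(r'(href|src)="/([^"/][^"]*)"', rf'\1="{prefix}\2"', html)
                                                                              with open(path, "w", encoding="utf-8") as f:
                                                                                  f.write(html)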

                                                                  It’s absurd to make a decision like that and then blame a static file serving tool for failing to automatically work around it.

                                                                  Nobody is doing that. This is a straw man.

                                                                  1. 1

                                                                    The author complained that a website written in a fragile and non-portable way needed to be modified in order to work on IPFS. The same website would also need to be modified in the same way to support most rehosting situations aside from being at the top level of a different domain. In other words, the author’s complaint is that they made a poor decision when writing their website, not anything specifically to do with IPFS.

                                                                    IPFS is a filesystem (albeit a distributed network filesystem). When you put a directory on a filesystem (instead of putting the contents of that directory onto the root of the filesystem), that directory has a name, and that name is part of the absolute path. If you fail to put the name of the directory you’re referencing in an absolute path, regardless of what filesystem you are using, your absolute path will be wrong and refer to a different (probably nonexistent) file.

                                                                    Heavy use of absolute paths is fragile across not just web development but development in general. Their use in older windows programs is the cause of much consternation among folks who have a bigger secondary disk than their primary disk (a fairly common situation that elaborate hacks exist to work around). All over the unix build & command line world, mechanisms exist specifically to make it straightforward to avoid absolute paths. And, the virtual directory ‘..’ is available for use in URLs specifically so that absolute paths can be avoided in them.

                                                                    Absolute paths should be avoided, in general. They shouldn’t be used in websites. They shouldn’t be used in software. They are fragile & break normal behavior with regard to directory encapsulation. In cases where they must be used, they should have their prefix configurable so that moving the directory doesn’t break all links.

                                                              2. 3

                                                                What is that one command you mention, if you don’t mind me asking?

                                                                1. 2

                                                                  It’s something like ipfs add -r -w $path for a completely static subdirectory (which gives you a single hash you can supply to IPFS).

                                                                  With regard to IPNS, you need to have initialized IPNS first & then updating the hash involves some other command. I’ll look it up.

                                                                  (Basically, how IPNS works is that it’s a static hash that mutably points to some other static hash for a whole directory. So, you have some IPNS hash that you own, and you periodically point it at the output of running ipfs add -r -w on a modified directory.)
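
                                                                  For example, a minimal sketch of that periodic re-publish step, wrapped in Python so it can be run from cron; the flags are as I remember them (-Q prints only the final hash), so check them against your ipfs version, and the site path is hypothetical:

                                                                      import subprocess

                                                                      def publish_site(path):
                                                                          # Add the site directory (recursive, wrapped) and capture the root hash.
                                                                          root = subprocess.run(
                                                                              ["ipfs", "add", "-r", "-w", "-Q", path],
                                                                              check=True, capture_output=True, text=True,
                                                                          ).stdout.strip()
                                                                          # Point the node's IPNS name (its default key) at the new root.
                                                                          subprocess.run(["ipfs", "name", "publish", f"/ipfs/{root}"], check=True)
                                                                          return root

                                                                      if __name__ == "__main__":
                                                                          print(publish_site("public/"))  # run often enough to keep the IPNS record fresh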

                                                                2. 2

                                                                  Is the author’s impression wrong that IPNS is unusably slow, to the extent that updating normal DNS on a site update is more workable?

                                                                  Do you happen to have an example of such a website that’s hosted on IPFS?

                                                                  1. 3

                                                                    Is the author’s impression wrong that IPNS is unusably slow, to the extent that updating normal DNS on a site update is more workable?

                                                                    IPNS is not, in my experience, any slower than IPFS.

                                                                    IPFS has some performance issues (running a daemon is effectively a slow fork-bomb) and there are difficult-to-avoid propagation issues in any totally-distributed network (with respect to which, IPFS is in my experience better than SSB).

                                                                    Do you happen to have an example of such a website that’s hosted on IPFS?

                                                                    The obvious example is ipfs.io. There are others, & since I’m sure some lobste.rs users are running them, they’ll probably comment with their own. (I don’t run one because if I did, it wouldn’t get enough traffic to remain persistent if my machine went down.)

                                                                    1. 3

                                                                      IPNS is not, in my experience, any slower than IPFS.

                                                                      Please note there’s an issue on go-ipfs with the very title “IPNS very slow”, as of yet still open, as far as I can see. Personally, I haven’t experimented with IPFS/IPNS recently; I’d be very happy to learn and see some confirmation somewhere if things have improved since a year ago (which roughly corresponds to the time I last experimented with them).

                                                                      1. 1

                                                                        I haven’t used it in a couple years. It might have gotten worse since I last used it, or it might not have. I know the forkbomb issue has gotten worse (to the point where I don’t even run the ipfs daemon anymore because I was finding myself needing to kill & restart it every 12 hours so I didn’t run out of ram).

                                                                        The open issue you linked to appears to be a case of what I mentioned earlier – which is to say, decentralized replication can be slow when you have few peers. IPNS is basically an extra layer of abstraction over IPFS, so if you’ve literally just added something, whatever replication delays occur with IPFS will be doubled over IPNS. This is like if you’ve put an archive of torrents up on bittorrent & both the archive and each torrent inside it are seeded only by you – until you have a certain number of peers, you’re gonna be slow, and a naive DHT won’t prioritize nearby nodes the way it ought to.

                                                                        In other words, this is a side effect of the main design problem with IPFS in general: that no balancing mechanism exists to ensure peers cache & pin content, so content availability is extremely sensitive to popularity in the short term.

                                                                1. 1

                                                                  Raspberry Pis have 100Mbit networking.

                                                                  If you need a decent CPU as well I’d get an Odroid XU4 (those even have gigabit iirc).

                                                                  1. 4

                                                                    I read the question as “I want my 100mbps ethernet link to be the bottleneck, rather than the CPU running the VPN software being the bottleneck”.

                                                                    So, most or all models of raspberry pi don’t check the box, right?

                                                                    1. 3

                                                                      Anecdotally, I pull 100mbps through wireguard using a Raspberry Pi 3 without issue. As long as you aren’t relying on it for complicated firewall rules you should be fine.

                                                                      1. 2

                                                                        That’s good to know. I have an old raspberry pi at the moment that can’t crack 20mbps through WireGuard. Hence my question :)

                                                                        1. 2

                                                                          Older Raspberry Pis are really slow. Anything that’s not a Pi 3 (or better) won’t work (as you’ve found).

                                                                          1. 1

                                                                            The Pi 3 is not very good either (on top of the terrible I/O, it doesn’t even support AES instructions!). Try the ROCK64.

                                                                            1. 1

                                                                              I guess wireguard doesn’t use AES.

                                                                              1. 1

                                                                                According to this article/post strongly but tangentially related to our topic here, no, wireguard does not use AES.

                                                                          2. 1

                                                                            Wow. Thank you for this data point.

                                                                      1. 1

                                                                        These days I just use dns-based blocklists. Just run dnsmasq with an adblock blocklist locally, or on your home network with a raspberry pi.

                                                                        1. 5

                                                                          You say that, and it’s fine for some sites, but a lot of them have anti-adblock scripts baked in alongside the site logic. The only way you’re going to work around that is with redirect rules, like what uBlock Origin does. It also isn’t possible to do annoyance removal, like getting rid of fixed banners, using DNS.

                                                                          1. 3

                                                                            For the sites that it doesn’t work for, I close the tab and move on. It wasn’t worth my time anyway.

                                                                            1. 1

                                                                              To me, attempting to get blanket web-wide annoyance removal feels like freeloading. That’s not why I block ads. It’s my prerogative to avoid privacy invasion, malware vectors, and resource waste; if the site owner goes to lengths to make it hard to get the content without those, that’s their prerogative, and I just walk away. I’m not going to try to grab something they don’t want me to have. (The upshot is that I don’t necessarily even use an ad-blocker, I simply read most of the web with cookies and Javascript disabled. If a page doesn’t work that way, too bad, I just move on.)

                                                                              1. 1

                                                                                I figure that living in an information desert of my own making is not a very effective form of collective action. There simply aren’t enough ascetics to make it worth an author’s time testing their site with JavaScript turned off. And if it isn’t tested, then it doesn’t work. If even Lobsters, a small-scale social site that you totally could’ve boycotted, can get you to enable JavaScript, then it’s a lost cause. Forget about getting sites with actual captive audiences to do it.

                                                                                People need to encourage web authors to stop relying on ad networks for their income, and they need to do it without becoming “very intelligent”. An ad blocker that actually works, like uBlock Origin, is the only way I know of to do that; it allows a small number of people (the filter list authors) to sabotage the ad networks at scale, in a targeted way.

                                                                                1. 1

                                                                                  Thank you for bringing up Mr. Gotcha on your own initiative, because that sure feels like what you’re doing to me here. “You advocate for browsing with Javascript off. Yet you still turn it on in some places yourself.”

                                                                                  That’s also my objection to the line of argument advanced in the other article you linked: “JavaScript is here. It is not going away. It does good, useful things. It can also do bad, useless, even frustrating things. But that’s the nature of any tool.” I’m sorry, but the good-and-useful Javascript I download daily is measured in kilobytes; the amount of ad-tech Javascript I would be downloading if I didn’t opt out would be measured in at least megabytes. That’s not “just like I can show you a lot of ugly houses”; it inverts the argument to “sure, 99.9% of houses are ugly but pretty ones do exist as well, you know”. Beyond that, it’s a complete misperception of the problem to advocate for “develop[ing] best practices and get[ting] people to learn how to design within the limits”. The problem would not go away if webdevs stopped relying on Javascript, because the problem is not webdevs using Javascript, the problem is ad-tech. (And that, to respond to Mr. Gotcha, is why I enable JS in some places, even if I mostly keep it off.)

                                                                                  In that respect I don’t personally see how “if you insist on shovelling ads at me then I’ll just walk away” is a lesser signal of objection than “then I’ll crowdsource my circumvention to get your content anyway”. But neither seems to me like a signal that’s likely to be received by anyone in practice anyway, and I think you operate under an illusion if you are convinced otherwise. I currently don’t see any particularly effective avenue for collective action in this matter, and I perceive confirmation of that belief in the objectively measurable fact that page weights are inexorably going up despite the age and popularity of the “the web is getting bloated” genre. All webbie/techie people agree that this has to stop, and have been agreeing for years, yet it keeps not happening, and instead keeps getting worse. Maybe because business incentives keep pointing the other way and defectors keep being too few to affect that.

                                                                                  Until and unless that changes, all I can do is find some way of dealing with the situation as it concerns me. And in that respect I find it absurd to have it suggested that I’m placing myself in any sort of “information desert of my own making”. Have you tried doing what I do? You would soon figure out that the web is infinite. Even if I never read another JS-requiring page in my life, there is more of it than I can hope to read in a thousand lifetimes. Nor have I ever missed out on any news that I didn’t get from somewhere else just as well. The JS-enabled web might be a bigger infinity than the non-JS-enabled web (I am not even sure of that, but let’s say it is), but one infinity’s as good as another to this here finite being, thank you.

                                                                                  1. 2

                                                                                    But neither seems to me like a signal that’s likely to be received by anyone in practice anyway.

                                                                                    I, personally, can handle a script blocker and build my own custom blocking list just fine. I can’t recommend something that complex to people who don’t even really know what JavaScript is, but I can recommend uBlock Origin to almost anyone. They can install it and forget about it, and it makes their browser faster and more secure, while still allowing access to their existing content, because websites are not fungible. Ad networks are huge distributors of malware, and I don’t mean that in the “adtech is malware” sense, I mean it in the “this ad pretends to be an operating system dialog and if you do what it says you’ll install a program that steals your credit card and sells it on the black market.” I find it very easy to convince people to install ad blockers after something like that happens, which it inevitably does if they’re tech-illiterate enough to have not already done something like this themselves.

                                                                                    uBlock Origin is one of the top add-ons in Chrome and Firefox’s stores. Both sites indicate millions of users. Ad blocker usage is estimated at around 20% in the United States, 30% in Germany, and similar levels in other countries, while the percentage of people who browse without JavaScript is around 1%. I can show you sites with anti-adblock JavaScript that doesn’t run when JavaScript is turned off entirely, and so can be defeated by using NoScript, indicating that they’re more concerned about ad blockers than script blockers. Websites that switched to paywalls cite lack of profitability from ads, caused by a combination of ad blockers and plain-old banner blindness.

                                                                                    Don’t be fatalistic. The current crop of ad networks is not a sustainable business model. It’s a bubble. It will burst, and the ad blockers are really just symptomatic of the fact that no one with any sense trusts the content of a banner ad anyway.

                                                                                    1. 1

                                                                                      Oh, absolutely. For tech-illiterate relatives for whom I’m effectively their IT support, I don’t tell them to do what I do. Some of them were completely unable to use a computer before tablets with a touchscreen UI came out – and still barely can, like having a hard time even telling text inputs and buttons apart. Expecting them to do what I do would be a complete impossibility.

                                                                                      I run a more complex setup with minimal use of ad blocking myself, because I can, and therefore feel obligated by my knowledge. And to be clear, for the same reason, I would prefer if it were possible for the tech-illiterate people in my life to do what I do – but I know it simply isn’t. So I don’t feel the same way about those people using crowdsourced annoyance removal as I’d feel about using it myself: I’m capable of using the web while protecting myself even without it; they aren’t.

                                                                                      It’s a bubble.

                                                                                      I’m well aware. It’s just proven to be a frustratingly robust one, quelling several smaller external shifts in circumstances that could have served as seeds of its destruction – partly why I’m pessimistic about any attempt at accelerating its demise from the outside. Of course it won’t go on forever, simply because it is a bubble. But it’s looking like it’ll have to play itself out all the way. I hope that’s soon, not least because the longer it goes, the uglier the finale will be.

                                                                                      And of course I would love for reality to prove me overly pessimistic on any of this.

                                                                            2. 2

                                                                              I use /etc/hosts as a block list, but it’s a constant arms race with new domains popping up. I use block lists like http://someonewhocares.org/hosts/hosts and https://www.remembertheusers.com/files/hosts-fb but I don’t want to blindly trust such third parties to redirect arbitrary domains in arbitrary ways.

                                                                              Since I use NixOS, I’ve added a little script to my configuration.nix file which, when I build/upgrade the system, downloads the latest version of these lists, pulls the source domain out of each entry, and writes an /etc/hosts that sends them all to 127.0.0.1. That way I don’t have to manually keep track of domains, but I also don’t have to worry about phishing, since the worst that can happen is that legitimate URLs (e.g. a bank’s) get redirected to 127.0.0.1 and error out.
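
                                                                              A rough standalone version of that idea (outside Nix): fetch the upstream hosts lists, keep only the domain column, and emit 127.0.0.1 entries, so a bad upstream entry can at worst black-hole a domain, never redirect it elsewhere.

                                                                                  import urllib.request

                                                                                  SOURCES = [
                                                                                      "http://someonewhocares.org/hosts/hosts",
                                                                                      "https://www.remembertheusers.com/files/hosts-fb",
                                                                                  ]

                                                                                  def domains(hosts_text):
                                                                                      for line in hosts_text.splitlines():
                                                                                          line = line.split("#", 1)[0].strip()  # drop comments
                                                                                          parts = line.split()
                                                                                          if len(parts) >= 2:                   # "<ip> <domain> [...]"
                                                                                              yield from parts[1:]

                                                                                  entries = set()
                                                                                  for url in SOURCES:
                                                                                      with urllib.request.urlopen(url) as resp:
                                                                                          entries.update(domains(resp.read().decode("utf-8", "replace")))

                                                                                  with open("blocklist-hosts", "w") as out:     # merge into /etc/hosts yourself
                                                                                      for domain in sorted(entries):
                                                                                          if domain not in ("localhost", "localhost.localdomain", "broadcasthost"):
                                                                                              out.write(f"127.0.0.1 {domain}\n")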

                                                                              1. 2

                                                                                For anyone interested in implementing this without Pi-hole, I have a couple of scripts on GitHub which might help. I adapted them from the Pi-hole project a while back when I wanted to do something a bit less fully-featured. They can combine multiple upstream lists, and generate configurations for /etc/hosts, dnsmasq, or zone files.