Threads for eloy

    1. 0

      I totally agree with this thesis.

      You should be using some kind of VPN or tunnel when you’re away from home anyway, and if you are, it just doesn’t matter.

      1. 11

        did you even read the article?

        1. 4

          As a matter of fact I did.

          The author argues that running a VPN isn’t a great plan, because of commercial VPN providers, malware, business interests, etc.

          But I run my own VPN server on commodity infrastructure.

          And tbh, I consider that to be Way Good Enough for most people’s purposes.
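
          In practice that can be as small as a single WireGuard config on a cheap box. A minimal sketch, with placeholder keys and addresses (not a recommendation of any particular setup):

          ```ini
          # /etc/wireguard/wg0.conf on the server (all values are placeholders)
          [Interface]
          Address    = 10.0.0.1/24
          ListenPort = 51820
          PrivateKey = <server-private-key>

          [Peer]
          # one [Peer] section per device (laptop, phone, ...)
          PublicKey  = <laptop-public-key>
          AllowedIPs = 10.0.0.2/32
          ```

          Bring it up with wg-quick up wg0 and point each device’s client config at the server’s public address.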

        2. 3

          The article says you shouldn’t use a commercial or free VPN provider; I assume what feoh means is running a VPN server they set up themselves.

          1. 9

            Author of the article here: I have to admit that I edited the first sentence of the article after reading the comments here, to better clarify that I’m talking about not using a VPN.

    2. 3

      I’m not really recommending this for anything serious, but it might be the ESP32: https://hackaday.com/2021/07/21/its-linux-but-on-an-esp32/

      Due to the external hardware, this one is probably not the cheapest, but it is very impressive nonetheless: https://hackaday.com/2012/03/28/building-the-worst-linux-pc-ever/ (although it does not meet your hardware requirements)

    3. 4

      Hardware produced specifically for Linux distros will remain a niche. However, Linux on already existing phones might become a big success. It’s far more maintainable than Android ROMs, which keep trying to hack ancient vendor blobs onto newer versions.

      1. 3

        I thought Project Treble was supposed to help with that. Did it?

        1. 3

          Yes, but only for the newest phone models. Billions of phones were produced in the last 10 years.

    4. 4
      • Apparently, HDMI / Ethernet is actually standardized, but since it can only do 100BASE-T, it can’t be what those switches are doing.

      • I guess the reason for this is that HDMI cables are easily available, come in the lengths that they want, and are tested for the data throughput that they need. The more I think about it, the more obvious it seems that HDMI cables would work great for network patching, except that the HDMI Forum might try to sue you for it.

      1. 1

        Apparently, HDMI / Ethernet is actually standardized, but since it can only do 100BASE-T, it can’t be what those switches are doing.

        Yes, I knew about this; I’ll add it to the article to avoid any confusion.

        The more I think about it, the more obvious it seems that HDMI cables would work great for network patching

        I don’t really agree. Ethernet cables have a clip so they don’t fall off easily. DisplayPort or DVI would have been better choices.

        1. 3

          From a pure cable perspective, why is DisplayPort better than HDMI? Their cables have a similar profile unless I’m missing something.

          1. 4

            Because full-sized DisplayPort cables have a clip that keeps them from falling off.

            1. 2

              Huh, looks like you’re right. TIL. Can’t believe I hadn’t noticed that before. DisplayPort cables tend to be plug-in-and-forget-about-it-for-months for me.

            2. 2

              God I hate those clips so much. Every monitor I have has the ports hidden behind a plastic “lip” that I guess is supposed to shield them from dust or something, and just makes the tiny buttons on most DP cables nigh on impossible to press with the apparently-oversized meat-sausages I call fingers :(

    5. 7

      My personal opinion is that support for ARMv6+VFPv2 should be maintained in distributions like Debian and Fedora.

      My personal opinion is exactly the opposite. Raspberry Pi users are imposing an unreasonable burden on ARM distribution maintainers. For better or worse, the entire ecosystem standardized on ARMv7, except Raspberry Pi. The correct answer is to stop buying Raspberry Pi.
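
      To make the gap concrete, the two 32-bit hard-float baselines look roughly like this as GCC flags (close to what Debian armhf and Raspbian target; quoted from memory, so treat as approximate):

      ```sh
      # Debian armhf baseline: ARMv7-A with VFPv3-D16
      gcc -march=armv7-a -mfpu=vfpv3-d16 -mfloat-abi=hard ...

      # Raspbian's Pi-1-compatible rebuild: ARMv6 with VFPv2
      gcc -march=armv6 -mfpu=vfp -mfloat-abi=hard ...
      ```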

      1. 3

        I faced this issue directly, being as I was a distribution developer working on ARM at the time. I feel your pain.

        However, they made the choices they made for cost reasons, and the market has spoken. I can’t argue with that.

      2. 2

        It could be worse. At least the Pi is ARMv6T2. Most AArch32 software defaults to Thumb-2 now, and the Pi is just new enough to support it. I maintain some Arm assembly code that has two variants: one for ARMv6T2 and newer, and one for everything else. I can probably throw the older one away now; it was added because an ultra-low-budget handset maker shipped ARMv5 Android devices and got a huge market share in India or China about 10 years ago, and a user of my library really, really cared about those users.

        1. 1

          shipped ARMv5 Android devices and got a huge market share in India or China

          Interesting, do you know which phone models? The oldest Android phones I could find are ARMv6.

          1. 1

            No idea, sorry. I never saw them, I just got the bug reports. Apparently they’re all gone (broken / unsupported) now. It was always a configuration that Google said was unsupported, but one handset manufacturer had a custom AOSP port that broke the rules (I think they also had their own app store).

      3. 2

        I also agree with you that the correct answer is to stop buying Raspberry Pi, especially their ARMv6 products. But for most beginners in electronics, it seems like “Raspberry Pi” equals “single-board computer”. They aren’t going to stop buying them.

        I don’t love MIPS64 or i686 either, but the reality is that the hardware exists and continues to be used. Maintainers should deal with that, IMHO.

        1. 3

          I am just tired of getting issues like https://github.com/rust-embedded/cross/issues/426. And that is just one tiny corner; what a horror that this is being replicated 100x for every toolchain out there.

    6. 6

      big endian does not exist for ARMv7

      That’s wrong; it’s switchable at runtime via the SETEND instruction on 32-bit Arm (including from user space!).

      The target triplet for 32-bit Arm (big endian) with hardware floating point is armeb-linux-gnueabihf.
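
      A minimal sketch of what that looks like from C (32-bit Arm only, GCC inline assembly; untested here, and newer cores/kernels may trap or deprecate SETEND):

      ```c
      /* Build with a 32-bit Arm compiler, e.g. arm-linux-gnueabihf-gcc. */
      #include <stdio.h>
      #include <stdint.h>

      int main(void) {
          uint32_t word = 0x11223344, le, be;

          le = word;              /* normal little-endian load */
          __asm__ volatile(
              "setend be\n\t"     /* data accesses are now big-endian */
              "ldr   %0, %1\n\t"  /* same word, loaded byte-swapped */
              "setend le\n\t"     /* switch back before returning to C */
              : "=r"(be)
              : "m"(word));

          printf("le: %08lx  be: %08lx\n", (unsigned long)le, (unsigned long)be);
          return 0;
      }
      ```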

      1. 1

        Thanks, I did not know that! Added it to the article, including the things you mentioned in your other comment.

    7. 7

      Huh, isn’t the whole idea behind ChatOps to execute code remotely in Slack? ;-)

      1. 7

        This is actually a business decision, so Slack can compete with Terraform, Ansible et al.

    8. 20

      This is technically 100% true but completely misses the point (ahem). Yes, Android and iOS sandbox their apps and have a better security model to prevent applications from accessing each other’s data. But the entire reason this is necessary is because you’re essentially running untrusted, user-hostile applications on your device. To me that’s pure madness and a never-ending arms race which, indeed, requires very high levels of security. Besides, many apps ask for way too many permissions (the typical example being a flashlight app requiring access to your contact list and the network), and a majority of users are happy to just click OK anyway, because it’s entirely unclear what they are saying OK to.

      Now, there’s still something to be said for sandboxing even trusted applications, to prevent them from accessing your data after they’ve been exploited through a vulnerability. For this, I’d love to see something like OpenBSD’s pledge on Linux.
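
      For reference, pledge(2) is close to a one-liner; a minimal sketch (OpenBSD-only; on Linux the nearest analogues are seccomp and Landlock):

      ```c
      #include <stdio.h>
      #include <unistd.h>
      #include <err.h>

      int main(void) {
          /* From here on, only syscalls covered by the "stdio" promise are
           * allowed; anything else gets the process killed by the kernel. */
          if (pledge("stdio", NULL) == -1)
              err(1, "pledge");

          puts("plain stdio still works");
          /* fopen("/etc/passwd", "r") would now abort the process, because
           * the "rpath" promise was not made. */
          return 0;
      }
      ```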

      I, for one, am very happy that there are new developments outside the Android/iOS duoculture.

      The non-software parts of the article do make some sense, because we still cannot trust hardware manufacturers, but this just supports my initial point: if you can trust the software (or firmware) not to be actively spying on you, a lot of these security measures are unnecessary. Kill switches then just become an additional measure to know for a fact that your device isn’t accidentally recording, and to protect against external actors trying to track you via your Bluetooth/Wi-Fi MAC address.

      1. 10

        But the entire reason this is necessary is because you’re essentially running untrusted, user-hostile applications on your device. To me that’s pure madness and a never-ending arms race which, indeed, requires very high levels of security.

        Applications do not need to be user-hostile to be considered untrusted. They just need to not be formally proven.

        Once you notice this, you can see the value of mitigations (as done by OpenBSD: pledge/unveil/W^X/layout randomization), sandboxing (as done by Android and Docker), and systems designed to enable actual least privilege, which pretty much implies capabilities (not to be confused with POSIX capabilities) and a pure microkernel, multi-server design.

        This is why Google is working on Fuchsia (as they’ve hit the limits on what can be done with Linux), and Huawei on HarmonyOS.

        1. 8

          This is why Google is working on Fuchsia (as they’ve hit the limits on what can be done with Linux), and Huawei on HarmonyOS.

          I wonder how many of those limits are related to the GPLv2 more than to technical reasons…

          1. 8

            If it was about the license, they’d just save effort by reusing BSD/MIT-licensed code.

            1. 3

              Yes, I suspect porting the Android userland over to a BSD-derived kernel would be a vastly easier task than writing a whole new OS. FreeBSD already has a Linux-compatible ABI that supposedly works fairly well, if that’s even necessary.

        2. 5

          They just need to not be formally proven.

          Just insecure. Formally proven apps can be insecure if what’s proven doesn’t block the attack vector. An easy example is formal correctness or memory safety not stopping information leaks from shared resources. And even if the software does, the steady stream of hardware-based leaks means verified software is no guarantee.

          Since we probably won’t see those brought under control, anything on complex, insecure hardware must be considered compromised, with security measures just limiting the damage of this versus that component, attacker, or whatever.

          Edit: Your other comment mentioned seL4 is proven to do separation. It’s proven to do so under a number of assumptions. Some are false. So, it can’t do separation in those cases.

          1. 5

            Just insecure.

            Right. My intent was to say that a non-formally-proven app should always be assumed insecure, and sandboxed accordingly.

            Which isn’t to say we should disable the sandboxing once an app is formally proven; we should be sandboxing everything that can be sandboxed. Working with capabilities is the only way forward that I see.

            1. 4

              That all sounds much better. Layer by layer, component by component, prevent what we can, and catch the rest. :)

        3. 3

          This is why Google is working on Fuchsia (as they’ve hit the limits on what can be done with Linux)

          Fuchsia smells like a senior-developer retention program to me.

          Given Google’s focus, even if that project were serious, what’s the chance it would still be alive 3 years after shipping?

          1. 8

            Fuchsia has been in active development for several years now, and the Android runtime has been running on it for a few of those years.

            I very much doubt that it isn’t the operating system Google plans to use as the base for pretty much everything in the not-so-distant future.

      2. 6

        But the entire reason this is necessary is because you’re essentially running untrusted, user-hostile applications on your device.

        Suppose you’re wrong, just once, and some FOSS developer betrays your trust. Maybe not even that: maybe the download server for one of their dependencies gets trojanised and it’s built into a binary you’re running.

        Problem 1: If the app wasn’t sandboxed you have now been comprehensively owned. Every secret on your account is now void.
        Problem 2: There’s no reason you would notice if problem 1 occurs.

        I too am pleased to see developments outside Android/iOS but I can’t get on board with the notion that we should lower our guard because software hasn’t come from a corporation with corporate interests.

        1. 3

          Fair point!

          Now as far as I can tell, Android is Linux, so it shouldn’t be fundamentally impossible to port the Android security model / sandboxing system to Librem, even though this post seems to imply that Android is somehow completely different from it and inherently more secure.

          1. 2

            As far as I can tell, there’s so little “Linux” in Android that you really would need to do a lot of hard work to get a similar system running.

            And all the different little Linux-based phones would have to set aside their differences concerning distros and whatnot and agree on how the kernel should be patched to enable whatever features this new userland requires.

            Sadly many of these Linux-based phones run old Android kernels because the hardware manufacturers never open-sourced their drivers.

            It’s a complete world of pain and grief, which no one would invest in because the market leaders are so huge.

            Despite that I’m still a Sailfish user.

            1. 2

              Sadly many of these Linux-based phones run old Android kernels because the hardware manufacturers never open-sourced their drivers.

              Somewhat true. They open-sourced their kernel forks, but not the userland drivers.

      3. 1

        100% agree on this.

        I’d rather use unsandboxed applications from people I trust than sandboxed applications from people I don’t trust. It’s the same for distribution repos vs. Flatpak.

    9. 5

      No system is perfect, and I will choose a vulnerable one over a malicious one, obviously.

      1. 2

        Then you should use GrapheneOS, Replicant, or LineageOS on regular hardware. I wouldn’t call them malicious, but they are far more secure than PureOS or postmarketOS.

    10. 12

      I don’t like systemd-resolved; it has caused me a lot of trouble with captive portals.

      1. 15

        It’s also great fun when you install something like dnsmasq yourself (because you didn’t know that systemd provides its own) and then have to wonder why it doesn’t work, or why it’s spinning one of your CPU cores at 100% – been there, done that. The downside of systemd putting its feet everywhere is that it’s much easier to step on its toes at every opportunity.

      2. 9

        I wanted to set up unbound on Fedora recently. I followed the docs and stuff didn’t work: NetworkManager and dnssec-trigger conflicted, and resolv.conf was flaky on a supposedly static setup. I removed the integration with the Lennartware, disabled NM’s management of resolv.conf (it can still be done), and did things manually the old way. It has worked like a charm since then.

        I’m no fan of the direction Linux is heading. The vision would be acceptable to me, but the route taken is duct tape and untested, low-quality code. This way the road will be long and bumpy, and I’d prefer to stay behind.

        edit: I took a glimpse inside the dnssec-trigger scripts. I demand cleaner and more thoroughly tested code from my junior colleagues. (It did not work according to the poor docs.)
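
        In case it saves someone the digging, these are the two pieces I believe are involved in the manual route (Fedora paths; double-check against your version):

        ```ini
        # /etc/NetworkManager/NetworkManager.conf: stop NM rewriting resolv.conf
        [main]
        dns=none

        # /etc/resolv.conf is now static; point it at the local unbound:
        #   nameserver 127.0.0.1
        ```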

        1. 9

          NM has a long history of forcefully taking over interfaces and network config. For most folks who just want to “get online” it’s OK, and actually pretty easy to use because of it. But the moment you try to do anything more than that, you’re better off completely removing it. At least systemd-networkd will allow you to do some fairly complex network configurations.

          1. 3

            systemd-networkd

            I should probably have gone that way. For some reason it is using NM currently, and it had been working since install (only a DHCP config was needed), but when I started to customize it, it really bit me. Next time I’ll check that out, because it seems far better now that I’ve read about it.

            My only excuse is that I hadn’t used Fedora in 10+ years (only Windows, OpenBSD, and other people’s Linuxes, which had their networking working out of the box or handled by another crew).

      1. 6

        My morning work ritual is to open my computer, scan for urgent email, make tea, and open Lobste.rs. Drinking my morning tea and checking the top and recent stories helps me get my brain into “tech” mode instead of “home” mode. And unlike Hacker News or Reddit, Lobste.rs doesn’t start my day with outrage or a firehose of content. Just enough intelligent articles and comments to get excited about technology and to consume with that one cup of tea.

      2. 5

        I turn off my network connection.

    11. 5

      Would it make sense to use a concat(sha1, sha256) hash algorithm? This wouldn’t change the prefixes while improving the strength of the algorithm (by including SHA-256 in the hash).

      1. 10

        No, because trees are hashes-of-hashes, so if you change anything at all about them, their hashes will change, and therefore commit hashes will change.
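
        A toy illustration of that propagation (nothing here is git’s real object format, and djb2 stands in for the real hash functions):

        ```c
        #include <stdio.h>

        /* djb2: a toy stand-in for SHA-1/SHA-256. */
        static unsigned long h(const char *s) {
            unsigned long v = 5381;
            while (*s) v = v * 33 + (unsigned char)*s++;
            return v;
        }

        int main(void) {
            char tree[128], commit[128];

            /* Old scheme: the tree's bytes contain the blob's hash. */
            snprintf(tree, sizeof tree, "100644 blob %lx README", h("hello\n"));
            snprintf(commit, sizeof commit, "tree %lx", h(tree));
            printf("old commit id: %lx\n", h(commit));

            /* New scheme: identical blob content, but the tree now stores
             * concat(hash1, hash2), so the tree's bytes differ... */
            snprintf(tree, sizeof tree, "100644 blob %lx%lx README",
                     h("hello\n"), h("2:hello\n") /* stand-in second hash */);
            snprintf(commit, sizeof commit, "tree %lx", h(tree));
            /* ...and therefore the commit id differs too, with no change
             * to any file content. */
            printf("new commit id: %lx\n", h(commit));
            return 0;
        }
        ```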

      2. 2

        I don’t think that a different prefix is a problem anyway. The problem is backwards compatibility and doing a large overhaul of the entire source code, which contains the hashes as hardcoded arrays.

    12. 2

      Don’t minimize that HTML

      Okay, sure

      Minify your SVGs

      Huh?

    13. 2
      1. The fact that the web needs “saving” every few years or so seems like a serious design flaw.

      2. Why does an informational site need to set cookies? Just show me the text.

      1. 1

        seems like a serious design flaw

        Yes. It was not built with any security in mind. A lot has been added later, such as TLS and all kinds of HTTP headers. It’s improving slowly but steadily, I think.

        1. 2

          I don’t think baking in encryption etc. from the beginning would have changed anything, all else being equal.

          All and every HTTP call could be via HTTPS, and we’d still have to deal with intrusive tracking implemented to more effectively serve ads.

          1. 1

            I don’t think baking in encryption etc. from the beginning would have changed anything, all else being equal.

            It would have prevented a lot of MitMs. But I agree, apart from that it doesn’t fix a lot of problems.

    14. 3

      Facebook is on the list of supporters. Strong whiff of astroturf.

      Edit: apparently the publisher is this outfit: https://webfoundation.org/about/. I retract my accusation of astroturfing and substitute “like the UN: well-meaning and without any real power”.

      1. 4

        Indeed, it feels like bluewashing to me, at least for the commercial organisations. Some of these companies are known for the internet.org project in Africa, which offered only restricted internet access to some services, thereby violating net neutrality. I can totally understand that given their business model, but combined with their commitment to this initiative it’s hypocritical imho.

    15. 8

      I tried to document some things I learned about filesystems on Linux and macOS: fsdoc.pdf. It’s not complete and a bit shitty, but if you have anything more that is interesting, do leave me a pull request :)

      1. 1

        Cool write-up! While scrolling, I read:

        For this reason, adoption of acl is not widespread.

        I don’t think that’s true; SELinux and AppArmor have this and are enabled by default on a number of distributions.

    16. 3

      I’m not an IRC expert, but I assume that I can connect from, for example, multiple machines running irssi and get all my messages simultaneously? You mention that it only supports TLS, but is that the connection between my irssi client and pounce, between pounce and the upstream server, or both?

      1. 3

        Yes. You’ll want to have each irssi set a different username to indicate to pounce that they should both be getting their own copies of messages (see “Client Configuration” in the manual page).

        It is TLS-only in all directions.
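
        Untested sketch of what that might look like from two irssi instances (flag spellings vary between irssi versions, and the host is a placeholder):

        ```
        # on the laptop
        /set user_name laptop
        /connect -tls pounce.example.org 6697

        # on the desktop: the same, but with its own username
        /set user_name desktop
        /connect -tls pounce.example.org 6697
        ```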

    17. 14

      I don’t host my own email, because I think it’s too much of a risk. Email should always work, period. With a self-hosted environment I can’t ensure that.

      1. 19

        That’s exactly why I do self-host. If you rely on somebody like Google, you’re at their mercy as far as what actually gets through or not.

        1. 8

          You’re always going to be at the mercy of third parties when running your own email. If your IP ends up on a blocklist you’re doomed: every provider will blackhole your email. You’re one person; you’ll struggle massively to get it lifted, if you can at all, and meanwhile your email is being blackholed. If Google ends up on a blocklist, they have huge leverage and will have it fixed instantly.

          Email operates on trust, and it’s really hard to gain trust when you’re one person with no history. Especially when you don’t even own the IP space, so you’re relying on the reputation of the ISP’s other, untrustworthy customers.

          1. 9

            That’s my point. Google and other providers are silently blocking incoming emails. I’d rather be in charge of what gets through to me. Of course you’re always at the mercy of third parties regardless, but self-hosting makes it one less.

            By the way, I have a side project that sends several thousand emails every day. I’ve had to deal with blocklists a few times, but it’s really not that bad. It’s also trivial to switch outgoing IP addresses.

            1. 6

              I agree. I’ve recently noticed that Google is being way too aggressive in dropping mail, including from some mailing lists, not to mention private domains.

              As for your second point: apparently I actually have had my domain name itself blocked by Gmail, presumably due to sending myself some lists of domains through crontab, so I’ve had to switch my domain for outgoing mail for now.

      2. 7

        When self-hosting, you at least have access to logs. You can see, if other side greylisted you or accepted mail immediately. Mail service providers are hiding all kind information, both about incoming and outgoing connections. I have self hosted my email long-long time, over 15 years. Sometimes there is little bit trouble, but nothing too serious. Most practical advice: don’t use well known cheap VPS providers. Those IP-s are bad neighbourhood, most problems with delivery are going from that.