1.  

    Can’t help but think that this follows Betteridge’s Law of Headlines…

    1. 1

      If I’m reading this right, the oft-repeated nightmare that quantum computers would render crypto ineffective was possibly a little overstated?

      1. 6

        I’m not sure this was ever really a question.

        It was always clear that Shor’s algorithm only applied to very specific problems. Unfortunately it turned out these were the exact problems that were used in pretty much all mainstream public key cryptography. But there always were alternatives.

        One likely quantum safe cryptosystem is McEliece, which was developed in the 70s. It is not very practical due to very large keys, so it’s likely not gonna be the one that your future browser will use.

        1. 2

          Wikipedia has a good summary. As someone with very, very limited understanding of the mathematics of cryptography, I take the tl;dr to be: current symmetric encryption and hash algorithms are probably fine but will have to double their key sizes (Grover’s algorithm roughly halves the effective security level, so AES-128 ends up with about 64-bit security while AES-256 stays comfortable); current public-key algorithms are broken, but there are replacements waiting in the wings.

        1. 4

          Refresher for everyone after the recent FB outage?

          I wonder if it would have been worse had they taken down the AS for all of FB, instead of just the DNS services.

          1. 6

            From my reading of the changes, this seems more like a problem with how JodaTime deals with them than with the changes themselves. If this had been done without warning I would understand the author’s complaints, but AIUI these changes have been contemplated and publicised for a while now.

            1. 9

              Yep!

              Technically, the data has moved, not been deleted. But the file containing the moved data is never normally used by downstream systems, thus to all intents and purposes it has been deleted.

              So the data is still there, but he’s just going to act like it’s completely vanished because the place it’s moved to is … a file he previously didn’t use.

            1. 8

              I recently started running my own email server for receiving email, and sending email through Sendgrid. Sendgrid has a free plan that allows for 100 emails per day. Here’s the related Lobsters post in case you are interested: https://lobste.rs/s/s10jr0/running_my_own_email_server

              1. 2

                How good a sender reputation does Sendgrid have? Have you had any instances where mail bounced or didn’t get delivered?

                1. 1

                  I haven’t had any problems. Sendgrid has a fairly good sender reputation as far as I know. It’s one of the larger email handling companies.

                  1. 1

                    In my experience if you’re on free tier, you get an IP with a bad reputation. You have to pay to get a good IP.

                2. 2

                  I’m nervous about things like SendGrid. If I send a mail from my server to yours, then I can read it and so can anyone with access to your server. If I send through sendgrid, they’re able to see the plain text of every email that I send. I find it quite hard to believe that they’d offer this as a free service if they weren’t data mining that.

                  1. 3

                    They’re offering a free service because it costs them very little and is useful for getting people to buy their commercial offering. Letting people test for free is an excellent sales method.

                    As for security:

                    • if you’re sending to a mailing list, you weren’t going to encrypt

                    • if you’re sending private email, you need to encrypt the payload end-to-end

                    • if you’re sending really secret messages, you want to avoid traffic analysis, too, so email is not for you

                    The concerning thing about SendGrid et al is that they continue to devalue individual email servers and make it easier for the NSA or agency of your choice to do mass surveillance.

                    1. 2

                      Yes, I also consider E-mail “wide open,” so deliverability matters more to me than security, even though both are nice.

                    2. 1

                      I just started using SendGrid, and I’m a little wary, but I think they offer the service as a loss leader, more than a source of revenue. They did send a nag email every day for a couple of weeks after I signed up, but that seems to have stopped.

                      That being said, I’ve always considered email definitely not secure, and open to be read by anyone, so them being able to see the plain-text is not really a concern for me.

                  1. 18

                    This is one of the things I wanted to write in response to https://lobste.rs/s/ezqjv5/i_m_not_sure_unix_won but haven’t really been able to come up with a coherent response. Anyone who believes that “in the good old days” UNIX was a monolithic system where programs could be easily run on different UNIXes wasn’t there. Hell, even if you stuck with one vendor (Sun), you would have a hell of a time upgrading from SunOS to Solaris, not to mention, HP-UX, AIX, SCO UNIX (eugh), IRIX and many others. Each had their “quirks” and required a massive porting effort.

                    1. 3

                      Hi, author of that original post. You’re definitely not wrong, unfortunately. My concern in that original post was that Linux is heading in the same direction of doing its own thing, rather than following POSIX or the Unix way. We had a chance to do it better, with hindsight this time.

                      (Whether the Unix-way ever truly existed is another point I’m willing to concede!)

                      Having had time to think about it more, Linux does deserve more credit than I gave for it. By and large, porting Linux stuff to BSD now is easier than some of the commercial Unixen of late (yes, I was there, if only for a few years). But it does feel like we’re slowly going backwards.

                      1. 6

                        As a flip side to that, I think that getting away from POSIX and “The UNIX way” (whatever that means), is actually moving forwards. “The UNIX way” was conceived in the days when the standard interface was a jumped up printer, and 640KB of RAM was “enough for anyone”. Computers have exploded in capability since then, and “The UNIX way” was seeming outdated even 30 years ago (The UNIX-HATERS mailing list started in 1987). If you told Dennis Ritchie and Ken Thompson in the 70s that their OS would power a computer orders of magnitude more powerful than the PDP-11, and then told them it would fit in your pocket… Well, I dunno, Ken Thompson is still alive, ask him.

                        Anyways… My point is that the philosophical underpinnings of “The UNIX Way” have been stretched to the breaking point for a long time now, and arguably, for computer users, rather than developers, it broke a long time ago and they went to Windows or Mac. It’s useful as a metaphor for the KISS principle, but it just doesn’t match how people interface with Operating Systems today.

                        1. 2

                          The Bell Labs people did do ‘Unix mark II’ in the late 1980s and early 1990s in the form of Plan 9. It was rather different from Unix while retaining the spirit (in many people’s view) and its C programming environment definitely didn’t attempt to stick to POSIX (although it did carry a number of elements forward). This isn’t the same as what they might do today, of course, but you can view it as some signposts.

                          1. 1

                            My apologies, I thought the Unix way/Unix philosophy/etc. were widely understood. Probably the most famous formulation is Doug McIlroy’s “Make each program do one thing well.” Even if we’re building orders of magnitude more complexity today, I think there are still lessons in that approach.

                            I agree we have to move with the times, but reinventions have thus far looked like what Henry Spencer warned about: reinventing UNIX, poorly.

                            1. 1

                              “Make each program do one thing well.”

                              The precept is violated by a program like ls. Why does it have different options for sorting by size, ctime, etc.? Isn’t it more flexible to simply pipe its output through sort?

                              sort itself has a -u option, which is unneeded since you can just filter the output through uniq. Yet it’s a feature in both the GNU and (Open)BSD versions.
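
                              As a rough sketch of the overlap (assuming a typical GNU or BSD userland, where size is the fifth column of a long listing):

                                # sorting by size without ls -S: sort numerically on field 5 of ls -l
                                # (ignoring the leading "total" line that ls -l prints)
                                ls -l | sort -k5,5n

                                # sort -u duplicates what a uniq filter already does
                                sort file.txt | uniq
                                sort -u file.txt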

                              1. 1

                                Are we at the splitting hairs or yak shaving stage now? I guess yaks can have Split Enz, like a Leaky Boat.

                                My original post was that it was disingenuous to say “Unix won” when Linux did. @mattrose disagreed, saying that the past wasn’t a cross-platform utopia either (true, alongside his quote from famed Unix-fan Bill Gates). I opined that we had the opportunity to do better this time, but Linux is making the same mistakes, to the detriment of OSs like BSD. Heck, even macOS. Also, that those Unix guys had good ideas that I assert have stood the test of time, despite the latest in a long line eager to replace them. The machine I’m writing this on now is proof.

                                Se a vida é. Wait, that was the Pet Shop Boys, not Split Enz.

                                1. 2

                                  Proponents of “the Unix way” espouse a strange dichotomy: they propose that the philosophy is superior to all competitors, and decry that the competitors are trouncing it in the market[1].

                                  Something has to give. Perhaps the penchant for it is an aesthetic preference, nothing more.

                                  [1] both in the economic one, and the marketplace of ideas.

                                  1. 2

                                    I totally understand the penchant for “the UNIX way”, and actually share it. It makes everything really simple. UNIX, at its base, is sending streams of text from one file-type object to another. That makes “do one thing well” really easy, because the output of one program can be combined into the input of another, so you can write small programs designed for that kind of pipelining. Even with non-text streams you can build that kind of pipeline, like gstreamer does, but…

                                    From a user perspective, it’s a nightmare. Instead of having to know one program, you have to know 10 or more to do the same thing, and there is no discoverability. With Excel or its equivalent, I can easily select a column of numbers and get the sum of that column. The easiest “UNIX way” of doing the equivalent is something like cat file.csv | awk -F',' '{ s += $3 } END { print s }', and it took me a while to figure out how to invoke awk to do that, which required me to know that

                                    1. awk exists
                                    2. awk is good at splitting text into columns
                                    3. awk takes an input field-delimiter option

                                    And that is completely outside of all the awk syntax that I needed to actually write the command.
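
                                    Spelled out (the same command as above, just commented), that one-liner breaks down as:

                                      # -F','      treat commas as the field separator (naive CSV, ignores quoting)
                                      # s += $3    keep a running total of the third column in variable s
                                      # END block  runs after the last input line and prints the total
                                      cat file.csv | awk -F',' '{ s += $3 } END { print s }'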

                                    This is why “the UNIX way” is being trounced in the market. When there’s a more complex, but user-friendly option, the conceptually simpler option is crowded out.

                                    The trick is to try and keep in mind the Einstein aphorism: “Everything should be as simple as it can be, but not simpler.”

                          2. 2

                            My personal experience is with the shared GPU drivers; one pile of C code used to work for both Linux and BSD. The main change was the move into the kernel. As GPUs required more kernel-side scaffolding to boot, kernel-side memory management for GPU buffers, etc. the code had to start specializing towards kernel interfaces.

                            In general, a lot of code has moved down into kernels. Audio and video codec awareness, block-device checksumming, encryption services, and more.

                        1. 5

                          That’s brave

                          1. 10

                            It seems strange that elementaryOS developers would try to deny others the same rights that were essential in allowing them to make and distribute elementaryOS in the first place!

                            The right they wanted to deny others was the right to distribute CDs using the name ‘elementaryOS’, which could reflect on them as an entity. They restricted nobody’s right to do anything with the software itself.

                            1. 4

                              The right they wanted to deny others was the right to distribute CDs using the name ‘elementaryOS’, which could reflect on them as an entity.

                              But that was already covered by GPLv2:

                              If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors’ reputations.

                              And on

                              They restricted nobody’s right to do anything with the software itself.

                              that is not what I understand by

                              Both distros have rules on their various subreddits (/r/elementaryos and /r/zorinos) that users cannot post links and sometimes information on how to build your own is removed. If someone builds their own .iso or shares the information to do so, they will have their post deleted and be banned.

                              To me, it seems they are limiting people’s ability to distribute a variation of their GPL’ed product, which goes against the GPLv2 (and v3) text:

                              Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients’ exercise of the rights granted herein.

                              But in the end, I think the problem is more of a moral one (do not delete posts) than a licensing one.

                              <sarcasm> Maybe a GPLv4 will address this kind of problem. </sarcasm>

                              1. 4

                                This is basically the less serious version of the RHEL agreement. Redhat distributes GPL binaries if you sign a support contract with them. Even though Redhat distributes the source to all of their programs, the support contract comes with very strict conditions against redistributing those binaries, and doing so basically voids the contract.

                                Many people say that this goes against the spirit of the GPL, but I’m not one of them.

                                1. 2

                                  Many people say that this goes against the spirit of the GPL […]

                                  As far as I understand GPLv2 §3, Red Hat’s restriction violates the GPL:

                                  You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above […]

                                  1. 4

                                    You are allowed to redistribute the binaries. Redhat does not prevent you from distributing the binaries. All they do is stop providing support if they find out you have done so.

                            1. 12

                              I used to run the CI and build environments for a software product, and every time a dev came up to me and said “It builds in my environment” as a reason for breaking the build, I would just calmly say, “well, you should fix it so it builds on the build servers, otherwise I’ll just take your workstation and add it to the build environment”

                              That would usually give them a little perspective on the problem… I never did have to follow through on that threat.

                              1. 4

                                Alternative: You may have some non-obvious discrepancy between the dev and CI environments. The problem could range from a silly dev issue (relying on hardcoded paths) to a silly ops issue (CI ran out of disk space and fails builds randomly), all the way through the spectrum in between. There may be something to fix together, and the whole exchange sounds bad (fix it for me; no, you fix it for me).

                                There’s some learning to do on both sides…

                                1. 5

                                  Alternative: The build environment is documented, monitored, and well-known. The developer’s workstation is … not. I’m not saying that I wouldn’t work through the issue with them, but “It works on my machine” is the developer saying “you fix it for me”.

                                  1. 1

                                    Yep. In my last job the tests ran perfectly fine on dev machines 90% of the time and only failed on CI. They ran on every developer’s machine - the problem was the CI itself.

                                    1. 3

                                      In my last job the tests ran perfectly fine on dev machines 90% of the time

                                      Were those 10% because you actually broke the tests functionally, or because you broke something else?

                                      To me, CI should be similar to the production environment; if a build fails there, there’s a high chance it wouldn’t work in prod either. We can take the bet, but we often agree with devs that if it fails, nobody is willing to check whether it would actually fail in prod (because then you have to take care of the outage).

                                      1. 1

                                        Those 10% were “developer made a mistake”. But CI is useless if 4 out of 5 errors are not genuine and would not occur on developers’ machines or in prod - just on the CI. I could’ve phrased that better.

                                1. 3

                                  This article only matters for libraries whose maintainers want to be exploited by corporations. Free Software maintainers do not need to change a thing with respect to this article’s requests.

                                  1. 2

                                    I don’t even think it’s relevant to most of those. It seems to be only relevant if you’re somehow packaging, or making available, a bunch of open source packages that you didn’t write or maintain.

                                  1. 1

                                    As an open source maintainer, I was hoping for some relevant insights from this, but as much as I respect Luis, there was nothing in this that was relevant to me, or most maintainers.

                                    Security is important, but it has always been important, so setting this up as a new expectation is a little misleading. One does not need to be a security expert to write secure software; you just need to be aware of the footguns in the languages and systems you’re using, and

                                    1. be careful to avoid firing them
                                    2. be quick to patch when you’ve inevitably failed at #1.

                                    As for Legal metadata, I’ve been on both sides of the fence (both writing commercial software that leveraged open source, and writing open source software), and I’ve never heard of anyone unsatisfied with a simple LICENSE file distributed with the release.

                                    As for the SBOM (software bill of materials), I’m not even sure what this means in practice. It sounds more like a requirement for enterprises to know what software they’re packaging and shipping, or making available to the public. It is not a requirement for open source maintainers who are full participants in the open source ecosystem.

                                    If you’re distributing a piece of software that packages up a bunch of open source software, you’re right that you need to keep track of vulnerabilities in the software you package along with yours. But if you distribute your software as a solitary piece that you maintain yourself, as most open source maintainers do, then you can depend on the open source ecosystem (Linux distributions, or even package managers like Homebrew) to ensure that the dependencies your software relies on are kept up to date and bug-free.

                                    1. 3

                                      Laura Lemay was a tech writer at Sun and was one of the earliest tech writers at Netscape.

                                      1. 8

                                        Somewhat related story: on a non-work linux laptop my wife ended up with a directory of 15 million files. The full story is here http://pzel.name/til/2020/08/30/Large-directory-feature-not-enabled-on-this-filesystem.html I used find . to list all the files, which surprisingly did not hang.

                                        1. 1

                                          I was wondering if find . would hang in the same way. ls is notoriously bad at listing directories once they get over a certain number of entries.
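
                                          If I remember right, the usual culprit is that ls sorts every entry before printing anything, so it has to read the whole directory into memory first. Something along these lines streams entries instead:

                                            ls -f                      # -f disables sorting (and implies -a)
                                            find . -maxdepth 1 -print  # find prints entries as it reads them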

                                        1. 1

                                          How do you discover the machine’s IP? Does it announce itself over avahi? I wonder if there’s some really clever way to announce the IP, like by beeping it out in binary over the PC’s internal speaker or something.

                                          1. 1

                                            It doesn’t look like they’ve solved that problem, but announcing it over avahi would be a good way of doing it. I think Morse Code over the internal speaker would be a much better solution :)

                                            1. 1

                                              For the local network, advertising with a well-known name over mDNS (e.g. freebsd-install.local) would be fairly easy to add. I’m more interested in using this for VMs, where I know the IP address because I just created the VM and assigned it an IP address.
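
                                              Something like avahi-publish sketches the idea (avahi is the Linux implementation; a FreeBSD installer would presumably use its own mDNS daemon, and the name and address here are made up):

                                                # publish an address record so freebsd-install.local resolves on the LAN
                                                avahi-publish -a freebsd-install.local 192.168.1.50

                                                # and/or advertise the installer's web UI as a browsable service
                                                avahi-publish -s "FreeBSD installer" _http._tcp 80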

                                            1. 12

                                              That’s a really interesting idea, but it’s not really rethinking the OS install, as Ubuntu did with their liveCD approach, or Fedora did with the big anaconda update a few years back. It’s just exposing a new interface for going through the exact same steps as you would at the console.

                                               Not that this isn’t actually a neat development, and I can envision it being quite handy. Imagine being able to set up a FreeBSD box with your phone.

                                              It’s just not the article I was hoping it was. Not the article’s fault…

                                              1. 1

                                                Exactly.

                                                 Having had to muck about just this week with installing, reinstalling, and messing with things, I think a good improvement would be a “smart” installer. I don’t actually mean AI, but I do mean things like “we detected you have an AMD GPU, we suggest this driver”, with a change option if you wanna go crazy.

                                                 And even more important: disk partitioning. You usually either get the option to wipe everything and start from scratch, or to pick one partition and ignore the rest. Everything else is manual. What if the installer read the disks and said “Hmm, looks like an encrypted volume, please give me the password so I can see what’s in there and re-mount your /home from there instead. Oh, by the way, you have an NTFS disk but no Windows folder, just some music; let me add that as /mnt/Music automatically.”

                                                But these things are probably relatively niche and I don’t think worth the actual effort.

                                                1. 2

                                                   Correct me if I’m wrong here, but doesn’t autoinstall(8) get one somewhat there? I’ve just stepped into the OpenBSD world, but for disk partitioning one can define a template, it seems.
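
                                                   For reference, an autoinstall(8) response file is just installer questions paired with answers; a minimal sketch from memory (values illustrative, exact question strings per the man page):

                                                     System hostname = vm01
                                                     Password for root account = ************
                                                     Allow root ssh login = yes
                                                     What timezone are you in = UTC
                                                     Which disk is the root disk = sd0
                                                     URL to autopartitioning template for disklabel = http://10.0.0.1/disklabel.tmpl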

                                                2. 1

                                                  This precisely. I was sort of hoping that this would be a “no install” installer; IOW, you boot up and can start working, and the system observes what you’re doing, and builds an install plan in the background.

                                                  Still, it’s not crap.

                                                  1. 1

                                                    I think that maybe the next step is to install the OS using curl and the step right after that is to make a provisioner using Terraform or something. You boot the VM, wait on the installer to be ready, then POST some JSON blobs at it and then it configures/installs itself without any human intervention.
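
                                                     Roughly what I have in mind (the endpoint and JSON shape here are invented for illustration, not an actual installer API):

                                                       # wait until the installer's HTTP endpoint comes up, then fire the config at it
                                                       until curl -sf http://10.0.0.5/api/status >/dev/null; do sleep 2; done
                                                       curl -X POST http://10.0.0.5/api/install \
                                                            -H 'Content-Type: application/json' \
                                                            -d '{"disk": "ada0", "hostname": "vm01", "root_ssh_key": "ssh-ed25519 AAAA..."}'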

                                                    Having an HTTP endpoint makes an API (of sorts) which lends itself to both phone-based usage and automation.

                                                  1. 1

                                                    It’s a bit hard to qualify what’s just “fancy” and what’s useful.

                                                     For example, you can use tar for everything, but that doesn’t provide checksums or verification of contents. ZFS provides good features, but given other good tools, is that too fancy? Etc…

                                                    1. 1

                                                      That’s the problem with this type of discourse. Everyone’s needs are different, and what the author describes as “fancy” may be an absolute requirement for somebody else.

                                                      1. 1

                                                        Yeah, I would actually describe “run your own network server, sshfs-mount it, and manually (!) run backups” as ‘fancy’. My own setup just borg backups my entire home directory to a local machine and to rsync.net at 4 am every day via systemd timers, and I have a status tray thing that turns red if either backup timer hasn’t succeeded in the past week.
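
                                                         In case it’s useful, the daily job is roughly the following (repo paths and names are illustrative; the schedule comes from a systemd timer with OnCalendar=04:00):

                                                           # back up the home directory to the local machine and to rsync.net
                                                           borg create --stats /backup/local::'{hostname}-{now}' ~/
                                                           borg create --stats ssh://user@user.rsync.net/./backups::'{hostname}-{now}' ~/
                                                           # the tray indicator just checks when these last exited successfully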

                                                      1. 4

                                                         Should be merged into jx3cr6.

                                                        My thoughts:

                                                        Now here’s the “unpopular opinion puffer meme”: that we’re in this situation means that Linux developers are implementing a form of open source vendor lock-in. With Docker, k8s, SystemD, etc, Linux is enticing developers to write code that only runs on Linux–that is, locks users into Linux.

                                                         It’s because of this open source vendor lock-in strategy that we in the BSDs need some level of compatibility. Though FreeBSD has had a notion of containerization for over two decades, Linux’s Docker is way more popular. SystemD is wholly incompatible with any of the init systems in the BSDs, yet some projects only support SystemD integration.

                                                        Perhaps this is because Linux offers functionality that is useful for developers, and the lowest common denominator is no longer adequate.

                                                        1. 4

                                                           FreeBSD users and devs seem to think that people who don’t use or develop for FreeBSD only do so because they haven’t heard of and used FreeBSD. The fact that some smart people have looked at FreeBSD and said “Nah” doesn’t seem to occur to them.

                                                          1. 3

                                                             To be kinder, I actually like FreeBSD (it’s more “tasteful” than the average Linux distribution), but the reality is that baseline POSIX is anemic for a modern application outside the typical 90s Unix model, and no one really wants to catch up and standardize modern functionality (or shared common functionality), especially when it comes to distribution/isolation.

                                                             I brought it up before, but it’s a lot of wasted potential; FreeBSD’s biggest fans seem to ignore, if not revile, its most interesting aspects. They were first to the punch with jails - they could have led a container revolution. Instead, the tooling was painful and the rest of the world uses Docker.

                                                            1. 2

                                                              I kinda like FreeBSD. It’s very simple, and 20 years of UNIX administration really pays off, BUT.

                                                              The rough edges on it really rub me the wrong way. I know that there’s an easier way to do most of the things FreeBSD makes really hard, and the documentation can be really inconsistent and years out of date, with no indication that there might be a newer, better way of solving some problem that somebody has come up with since the documentation was written (see LDAP authentication for a good example of that).

                                                              1. 2

                                                                 Yes, pkg is another papercut factory. I’d like pkg base, but that’s also predicated on pkg catching up to, e.g., apk. Hope you don’t mix ports and packages!

                                                                I’ve also found the lauded Handbook slightly lacking.

                                                        1. 16

                                                           Zoe Knox (@thatzoek on Twitter) has not been working on this for that long, and, as someone who struggles to fix a couple of bugs a week on my own open source project in my spare time, I find the ambition of re-implementing Cocoa on FreeBSD and making a Mac-compatible free OS staggering, and the amount she’s gotten done inspirational.

                                                          1. 4

                                                            A lot of the code comes from other projects. Some of it’s mine (yay!). A lot of it comes from Cocotron, which makes me a bit nervous - that project was somewhat infamous for copying code from elsewhere without attribution.

                                                          1. 1

                                                            I ran this in the late 90s. It was as good/bad as it sounds.

                                                            1. 2

                                                              This kinda seems like an advertisement for scalablepath.com