1. 4

    I ignored the warnings and tried to run this in Firefox, and it caused a hard crash of the whole browser. I’m impressed.

    1. 1

      I wonder which of the many subsystems being abused failed.

      1. 1

        Worked on my Firefox/MacOS, but with very low framerate.

        1. 1

          It definitely will work on iOS/MobileSafari with a few tweaks, too. The video takes over the page, and input is definitely not meant for mobile. But if I go back to the page (which pauses the video and thus the game) the checkboxes are updated correctly.

      1. 8

        It took nearly a decade and now we have it: The wireless LAN cable.

        scnr

        1. 1

          More like over two decades!

        1. 2

          Looking for a job. Things have been rough.

          Maybe we should have regular “I’m looking,” and “I’m hiring,” threads?

          1. 2

            There’s usually a monthly hiring thread, and similar threads exist on the orange site.

            1. 2
          1. 3

            If I’m understanding correctly you need to keep Mail.app open with their plugin running, and use their own app to find/track something. Is there a way to actually add a DIY tracker/device to the Find My app, or is that locked down without an MFi license?

            1. 4

              What’s with the trend of UIs designed for touch? It would make sense if the OS were designed for the iPad. How many people use Gnome on a touch device? Adding more whitespace doesn’t simplify a UI.

              1. 1

                Maybe to take advantage of laptop hardware with touchscreens?

                1. 1

                  Maybe, but I really doubt more than a small percentage of Gnome users have a touchscreen.

                  If this “improvement” really is intended to help touchscreen users, maybe we should just take a quick hardware survey to see how common touchscreens on Linux machines are?

              1. 4

                This looks cool, but why would I want to learn this language over Python, or a more mature alternative shell like Oilshell?

                1. 5

                  Good question - abs is a research vehicle for exploring the semantics of actor-based programming languages. As such, the syntax is somewhat restricted in some areas (to make static analysis easier), and the standard library is small. You’ll notice quickly that there are no file operations.

                  The closest recent analogue to abs semantics is the new actor-based concurrency model of Swift. The similarity extends to the problems with mutual deadlock and state mutation during process suspension - I’m really curious about the patterns and tools the larger community there will come up with to cope with these problems!

                  One (I believe) unique feature of abs is await-ing on member variables: multiple processes can, for example, cooperate on a list of work items stored in their actor, without the list growing unbounded. (Each consumer does await length(items) > 0, each producer does await length(items) < upper_bound, and cooperative scheduling guarantees that the field items always has between 0 and upper_bound items.)

                  For me, the nicest thing about abs is modeling systems that are timed – multiple actors, each with their own timed behavior, can run in parallel and the behavior of the whole system just composes naturally. There’s a small example here: https://abs-models.org/documentation/examples/single-watertank/ – the two actors in that example happen to wake up at the same time points, but that’s not necessary.

                  TL;DR: you shouldn’t learn abs if you need a mature implementation language, and this is what happens if you ask an academic why their thing is interesting. ;)

                  1. 2

                    Are you sure you’re talking about the same thing? The linked language seems to be a user-friendly shell scripting replacement.

                    1. 2

                      Oh wow, you’re right! I only saw “Abs language” and didn’t even click the link, how embarrassing. (The one I was talking about is at https://abs-models.org.) Terribly sorry for hijacking the discussion.

                    2. 1

                      Thanks for the great explanation!

                  1. 1
                    • Getting used to the labwc wayland compositor (stacking windows, wlroots) - I’d like to try embedding Lua into it, but first job is getting it to a place I can dogfood it

                    • Finding some people to review my new command line password manager moss (like pass but with age instead of gpg) and tell me what I missed security-wise

                    1. 2

                      Hey, you made a typo in your link to moss. I think you meant https://github.com/telent/moss

                    1. 3

                      I really hate syntactically significant white space. Not because it’s a bad idea, but because to this day nobody can really agree on spaces vs. tabs, or tab width.

                      1. 2

                        Is there any reason for randomizing, or even rotating, the CA? I don’t understand the reasoning for it. It seems entirely unrelated to the “let’s encrypt can go down” scenario.

                        1. 12

                          If you always use LetsEncrypt, that means you won’t ever see if your ssl.com setup is still working. So if and when LetsEncrypt stops working, that’s the first time in years you’ve tested your ssl.com configuration.

                          If you rotate between them, you verify that each setup is working all the time. If one setup has broken, the other one was tested recently, so it’s vastly more likely to still be working.
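
                          A minimal sketch of that rotation idea, assuming acme.sh (which accepts letsencrypt and zerossl as --server shortcuts) and example.com as a placeholder domain:

                          ```shell
                          #!/bin/sh
                          # Alternate the ACME CA on an even/odd day cycle so each CA's
                          # configuration gets exercised regularly instead of silently rotting.
                          # (Day count from epoch seconds avoids octal pitfalls with date +%j.)
                          day=$(( $(date +%s) / 86400 ))
                          if [ $(( day % 2 )) -eq 0 ]; then
                            ca="letsencrypt"
                          else
                            ca="zerossl"
                          fi
                          echo "renewing via $ca"
                          # acme.sh --issue -d example.com --server "$ca" ...
                          ```

                          With a daily renewal cron job, a broken configuration for either CA surfaces within a day or two rather than years later.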

                          1. 2

                            when LetsEncrypt stops working

                            That’s how I switched to ZeroSSL. I was tweaking my staging deployment relying on a lua/openresty ACME lib running in nginx, and Let’s Encrypt decided to rate limit me for something ridiculous like several cert request attempts. I’ve had zero issues with ZeroSSL (pun intended). Unpopular opinion: Let’s Encrypt sucks!

                            1. 5

                              LE does have pretty firm limits; they’re very reasonable (imo) once you’ve got things up and running, but I’ve definitely been burned by “Oops I misconfigured this and it took a few tries to fix it” too. Can’t entirely be mad – being the default for ACME, no doubt they’d manage to get a hilariously high amount of misconfigured re-issue certs if they didn’t add a limit on there, but between hitting limits and ZeroSSL having a REALLY convenient dashboard, I’ve been moving over to ZeroSSL for a lot of my infra.

                            2. 2

                              But he’s shuffling during the request phase. Wouldn’t it make more sense to request from multiple CAs directly and keep more than one cert for each domain, instead of ending up with only half your servers working?

                              I could see detecting specific errors and recovering from them, but this doesn’t seem to make sense to me :)

                            3. 6

                              It’s probably not a good idea. If you have set up a CAA record for your domain for Let’s Encrypt and have DNSSEC configured then any client that bothers to check will reject any TLS certificate from a provider that isn’t Let’s Encrypt. An attacker would need to compromise the Let’s Encrypt infrastructure to be able to mount a valid MITM attack (without a CAA record, they need to compromise any CA, which is quite easy for some attackers, given how dubious some of the ‘trusted’ CAs are). If you add ssl.com, then now an attacker who can compromise either Let’s Encrypt or ssl.com can create a fake cert for your system. Your security is as strong as the weakest CA that is allowed to generate certificates for your domain.

                              If you’re using ssl.com as fall-back for when Let’s Encrypt is unavailable and generate the CAA records only for the cert that you use, then all an attacker who has compromised ssl.com has to do is drop packets from your system to Let’s Encrypt and now you’ll fall back to the one that they’ve compromised (if they compromised Let’s Encrypt then they don’t need to do anything). The fail-over case is actually really hard to get right: you probably need to set the CAA record to allow both, wait for the length of the old record’s TTL, and then update it to allow only the new one.

                              This matters a bit less if you’re setting up TLSA records as well (and your clients use DANE), but then the value of the CA is significantly reduced. Your DNS provider (which may be you, if you run your own authoritative server) and the owner of the SOA record for your domain are your trust anchors.
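
                              For concreteness, a CAA record pinning issuance to a single CA looks like this in a BIND-style zone (example.com stands in for your domain):

                              ```
                              example.com.  IN  CAA 0 issue "letsencrypt.org"
                              ; adding a second permitted CA widens the set of CAs an attacker may target:
                              example.com.  IN  CAA 0 issue "ssl.com"
                              ```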

                              1. 3

                                There isn’t any reason. The author says they did it only because they can.

                                1. 2

                                  I think so. A monoculture is bad in this case. LE never wanted to be the stewards of ACME itself, instead just pushing the idea of automated certificates forward. Easiest way to prove it works is to do it, so they did. Getting more parties involved means the standard outlives the organization, and sysadmins everywhere continue to reap the benefits.

                                  1. 2

                                    To collect expiration notification emails from all the CAs! :D

                                    1. 2

                                      The article says “Just because I can and just because I’m interested”.

                                    1. 3

                                      Personally I like to prevent hostname collisions even on disparate networks, just in case I ever link them with a VPN. Usually I use something like home.$DOMAIN where $DOMAIN is a personal use/development domain (and sometimes I’ll make DNS entries if appropriate), but you could just as easily do something like foo.home.arpa if you don’t need to use a real domain.

                                      If you have a DNS server with a home.arpa or foo.home.arpa zone you can also address these over VPN as if they were real domains, without opening up your firewall.
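
                                      As a sketch, such a zone might look like this (names and addresses are illustrative):

                                      ```
                                      $ORIGIN home.arpa.
                                      router   IN A 192.168.1.1
                                      nas      IN A 192.168.1.10
                                      printer  IN A 192.168.1.11
                                      ```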

                                      1. 1

                                        I thought Gnome 40 was already considered stable. Why are new distro releases still shipping 3.x?

                                        1. 4

                                          Because Gnome 40 was released after Debian 11 features freeze.

                                          1. 3

                                            That’s right: bullseye soft freeze was February, GNOME 40 released in March.

                                            IMHO (as a Debian developer) we should have delayed the soft freeze and got 40 into bullseye, if there was sufficient confidence that 40 really was stable enough (we’d have had to evaluate that before 40 actually shipped)

                                            1. 1

                                              Does gnome not make it into back ports?

                                            2. 1

                                              FWIW (not much, I know!), I think that this is exactly the right approach to take. There’s always one more update, one more feature. If you are aiming for stable releases (and I think Debian should), then you gotta draw a line in the sand at some point.

                                              I love that Debian is run so well. I only wish more projects had a similar, healthy respect for stability.

                                          1. 1

                                            This is awesome! I was considering building something similar for offloading compilation of AUR packages for my Pinebook Pro. Is native (not cross-compiled) aarch64 support a goal?

                                            1. 1

                                              Lambda doesn’t support aarch64 functions so I’m not sure how you could do ARM builds using it without doing some sort of cross-compilation. If there’s a high-performance ARM cloud functions provider out there, I’d potentially be interested in trying it.

                                              1. 1

                                                That makes sense. I’m not familiar with Lambda, but I thought it might support a Graviton host

                                                1. 1

                                                  Could run inside of QEMU; it would just be a constant-factor slowdown.

                                              1. 3

                                                Not to discount this work here, but Vim is still available for Amiga. I know I’d prefer that!

                                                1. 4

                                                  I really wonder why they used Weston. Is it still the “reference” Wayland compositor? I thought wlroots now fits that.

                                                  1. 6

                                                    Yes, Weston is still the reference implementation. wlroots isn’t really in a position to change that; it’s just a really good implementation, but with no special ties to the Wayland project.

                                                    1. 2

                                                      Collabora has sold Weston into quite a few places you’d perhaps not look (the RDP implementation is also a reason in this specific case) - and they had their hands in the WSL cookie jar as well. See the slides at https://aglammjapan2019.sched.com/event/L8Vr for a treat.

                                                    1. 2

                                                      Is the lowercase i just a restyled U+0069 or is it a U+0131 ?

                                                      1. 2

                                                        It’s a restyled lowercase i, as the site notes :)

                                                        since the dotted glyphs for i and j are obviously inferior to the undotted variants, those are used instead.

                                                        1. 6

                                                          Just FYI, several Turkic languages use dotless I and ı as completely separate letters from dotted İ and i.

                                                          1. 2

                                                            Welp. Not sure what I can do about that :V

                                                            1. 7

                                                              Live up to the name, and dot the dotless versions. Obviously.

                                                      1. 27

                                                        I’d recommend a NUC here. I’ve tried using an RPi 1, and then an RPi 3 as desktops, but both were painful compared to a NUC, which was drama-free. I’ve never had any problems with mainstream Linux on mine. IIRC, it comes with either SATA or M.2.

                                                        1. 4

                                                          I’ve also used an Intel compute stick when traveling. It has the added benefit of not needing an hdmi cable.

                                                          1. 2

                                                            It has its benefits, but it was slow even when it came out five years ago… I used one for a conference room and it really was disappointing. A NUC would have been better. Harder to lose if you do take it traveling, too.

                                                          2. 3

                                                            I agree with this: if you don’t want a laptop, a very small form factor PC is a better choice than a more barebones SBC for use as a general-purpose PC. The NUC is great, though there are some similar alternatives on the market too.

                                                            I have a Zotac ZBOX from a little while ago. It has a SATA SSD, Intel CPU and GPU, and works great in Linux. In particular it has two gigabit NICs and wifi, which has made it useful to me for things like inline network traffic diagnosis, but it’s generally useful as a Linux (or, presumably, Windows) PC.

                                                            The one I own has hdmi, displayport, and vga, making it compatible with a wide selection of monitors. That’s important if you’re expecting to use random displays you find wherever you’re going to. It also comes with a VESA bracket so it can be attached to the back of some computer monitors, which is nice for reducing clutter and cabling.

                                                            1. 2

                                                              Never heard of a NUC before now but I can agree that trying to use an RPi as a desktop is unpleasant.

                                                              1. 1

                                                                Yeah the Pi CPUs are very underpowered, it’s not even a fair comparison. They’re different machines for different purposes. I would strongly recommend against using a Pi as your primary Linux development machine.

                                                                I think this is the raspberry Pi 4 CPU, at 739 / 500:

                                                                https://www.cpubenchmark.net/cpu.php?cpu=ARM+Cortex-A72+4+Core+1500+MHz&id=3917

                                                                And here’s the one in the NUC I bought for less than $500, at 7869 / 2350 :

                                                                https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i5-8260U+%40+1.60GHz&id=3724

                                                                So it’s 4-5x faster single-threaded, and 10x faster overall! Huge difference.

                                                                One of them is 1500 MHz and the other is 1600 MHz, but there’s a >10x difference in compute. So never use clock speed to compare CPUs, especially when the architectures are different!

                                                              2. 2

                                                                Yeah I just bought 2 NUCs to replace a tower and a mini PC. They’re very small, powerful, and the latest ones seem low power and quiet.

                                                                The less powerful NUC was $450, and I got a portable 1920x1080 monitor for $200, so it’s much cheaper than a laptop, and honestly pretty close in size! And the CPU is good, about as powerful as the best desktop CPUs you could get circa 2014:

                                                                https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i5-8260U+%40+1.60GHz&id=3724

                                                                old CPU which was best in class in a tower in 2014: https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-4790+%40+3.60GHz&id=2226

                                                                (the more powerful one was $800 total and even faster: https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-10710U+%40+1.10GHz&id=3567 although surprisingly not that much faster)

                                                                This setup, along with a keyboard and trackball, is very productive for coding. I’m like the OP and don’t like using a laptop. IMO the keyboard and monitor shouldn’t be close together for good posture.

                                                                In contrast the tower PC in 2014 was $700 + ~$300 in upgrades, and the monitor from ~2006 was $1000 or more. Everything is USB-C too on the NUC/monitor setup which is nice.

                                                                I guess my tip is to not upgrade your PC for 7-10 years and you’ll be pleasantly surprised :) USB-C seems like a big improvement.

                                                                1. 4

                                                                  Yeah I just bought 2 NUCs to replace a tower and a mini PC. They’re very small, powerful, and the latest ones seem low power and quiet.

                                                                  NUCs are great machines, but they are definitely not quiet. Because of their blower-style fan, they become quite loud as soon as the CPU is just a bit under load. Audio proof: https://www.youtube.com/watch?v=rOkyFLrPc3E&t=341s

                                                                  1. 2

                                                                    So far I haven’t had a problem, but it’s only been about 3 weeks.

                                                                    The noise was the #1 thing I was worried about, since I’m sensitive to it, but it seems fine. For reference, I replaced the GPU fan in my 2014 Dell tower because it was ridiculously noisy, and I have a 2012-era Mac Mini clone that is also ridiculously noisy when idle. The latter is honestly 10x louder than the NUC when idle, and I have them sitting side by side now.

                                                                    The idle noise bothers me the most. I don’t have any usage patterns where I’m running with high CPU for hours on end. Playing HD video doesn’t do much to the CPU; that appears to be mostly GPU.

                                                                    I’m comparing against a low bar of older desktop PCs, but I also think Macbook Airs have a similar issue – the fan spins up really loudly when you put them under load. For me that has been OK. (AdBlock goes a long way on the Macbooks, since ad code in JS is terrible and often pegs the CPU.)


                                                                    I think the newer CPUs in the NUCs are lower power too. Looking at the CPU benchmarks above, the 2014 Dell i7 is rated at an 84 W TDP. The 2020 i5 is MORE powerful, and rated at a configurable TDP of 10 W (TDP-down) to 25 W (TDP-up).

                                                                    I’m not following all the details, but my impression is that while CPUs didn’t get that much faster in the last 7 years, power usage went down dramatically, and with it the need to spin up fans. That matches what I’ve experienced so far.

                                                                    I should start compiling a bunch of C++ and running my open source release process to be sure. But honestly I don’t know of any great alternative to the NUCs, so I went ahead and bought a second one after using the first one for 3 weeks. They’re head and shoulders above my old PCs in all dimensions, including noise, which were pretty decent at the time.

                                                                    I think the earlier NUCs had a lot of problems, but it seems (hopefully) they’ve been smoothed out by now. I did have to Google for a few Ubuntu driver issues on one of them and edit some config files. The audio wasn’t reliable on one of them until I manually changed a config with Vim.

                                                                2. 1

                                                                  I have also been using a NUC for a year now, and it works well. A lot of monitors also allow you to screw the NUC to its back, decluttering your desk.

                                                                  Just watch out, it has no speakers of its own!

                                                                1. 1

                                                                  Wow, this brings back memories. MSJVM 2.0?

                                                                  1. 7

                                                                    I love systemd services and templates and the features it has to restart things, but I think this is a pretty absurd thing to say:

                                                                    and run it as a systemd service, because how else are you going to start this thing anyways, cron?

                                                                    Does anyone seriously think that Linux didn’t properly support services before systemd? It’s not like /sbin/init was symlinked to bash

                                                                    1. 5

                                                                      i’m dealing with this right now on Alpine Linux, which doesn’t use systemd. other than xdg autostart, there are no user services. the autostart thing is very very limited compared to systemd user services..

                                                                      1. 3

                                                                        the point i was making here is that there’s some sort of a controversy around the adoption of systemd across major Linux distributions in general, and in Debian in particular. It was a tongue-in-cheek comment about how i am making a stand and declaring systemd as a standard, even though there is an ongoing controversy, but i’m too tired of that debate to engage with it in that specific post.

                                                                        I find it kind of strange, to be honest, that you not only took the bait but also that your comment here is the most highly rated. You’d think comments regarding the actual setup would be more interesting than a single line commenting on that controversy.

                                                                        Or maybe people assumed I truly didn’t know about alternatives to systemd, in which case I apologize: it is obvious, to me, that there are multiple ways of starting processes outside of systemd (cron being the truly most common occurrence i find in the wild, because it allows regular users to inject themselves into the boot process without root).

                                                                        i hope that clarifies things…

                                                                      1. 5

                                                                        I have a bunch of keyboards. One of my special interests is high-quality rubber dome boards. I have a buyer’s guide on my web site, if anyone is interested.

                                                                        Right now, though:

                                                                        • On my Windows 7 PC, I use an Acer 6312K, with Alps-like Acer switches. Quite nice.
                                                                        • On my old Macs, I use an original Apple USB Keyboard for the iMac G3. Not very nice.

                                                                        Other notable keyboards include IBM Model M, Topre Realforce 104UG, Dell QuietKey RT7D5JTW. None of them see any use at the moment. Just ordered a Dell AT102W. Looking forward to trying it!

                                                                        1. 3

                                                                          Since you like rubber dome boards, I’m curious what you think about a Sun Type-5. I have one whose layout is very nostalgic for me, that I have been thinking of converting to USB with a spare Pro Micro or similar.

                                                                          1. 3

                                                                            Would love to try one, but haven’t had the chance. It’s absolutely beautiful, and I’ve heard good things about it, apart from the difficulty to use it with other computers. Sounds like a worthwhile project!

                                                                        1. 19
                                                                          [ $USER != "root" ] && echo You must be root && exit 1
                                                                          

                                                                          I’ve always felt a bit uneasy about this one. I mean, what if echo fails? :-)

                                                                          So I usually do

                                                                          [ $USER != "root" ] && { echo You must be root; exit 1; }
                                                                          

                                                                          instead… just to be safe.

                                                                          1. 10

                                                                            Indeed, echo can fail. Redirecting stdout to /dev/full is probably the easiest way to make this happen but a named pipe can be used if more control is required. The sentence from the article “The echo command always exists with 0” is untrue (in addition to containing a typo).
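
                                                                            A quick way to see this on Linux (the status check is wrapped in an if so the demo also behaves under set -e):

                                                                            ```shell
                                                                            # /dev/full accepts the open() but every write() fails with
                                                                            # ENOSPC, so even the shell builtin echo reports a failure status.
                                                                            if echo "hello" > /dev/full 2>/dev/null; then
                                                                              status=0
                                                                            else
                                                                              status=$?
                                                                            fi
                                                                            echo "echo exit status: $status"   # non-zero on Linux
                                                                            ```

                                                                            The same failure shows up in the wild when stdout points at a full filesystem or a closed pipe.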

                                                                            1. 3

                                                                              Don’t you need set +e; before echo, just to be extra safe?

                                                                              1. 3

                                                                                I had to look that up. set +e disables the -e option:

                                                                                          -e      Exit immediately if a simple command (see SHELL  GRAMMAR
                                                                                                  above) exits with a non-zero status
                                                                                

                                                                                That’s not enabled by default, though, and I personally don’t use it.

                                                                                1. 1

                                                                                  Or && true at the end, if it’s okay for this command to fail. EDIT: see replies

                                                                                  It’s as much of a kludge as any other, and I’m not sure how to save the return value of a command here, but bash -ec 'false && true; echo $?' will return 0 and not exit from failure. EDIT: it echoes 1 (saving the return value), see replies for why.

                                                                                  1. 2

                                                                                    You probably mean || true. But yeah, that works!

                                                                                    1. 1

                                                                                      I did mean || true, but in the process of questioning what was going on I learned that && true appears to also prevent exit from -e and save the return value!

                                                                                      E.G.,

                                                                                      #!/bin/bash -e
                                                                                      f(){
                                                                                      return 3
                                                                                      }
                                                                                      f && true ; echo $?
                                                                                      

                                                                                      Echoes 3. I used a function and return to prove it isn’t simply a generic 1 from failure (as false would provide). Adding -x will also show you more of what’s going on.

                                                                                2. 2

                                                                                  I personally use the following formatting, which flips the logic, uses a builtin, and prints to stderr.

                                                                                  [ "${USER}" == "root" ] || { printf "%s\n" "User must be 'root'" 1>&2; exit 1; }

                                                                                  When I start doing a larger amount of checks, I wrap the command group within a function, which turns into the following, and can optionally set the exit code.

                                                                                  die() { printf "%s\n" "${1}" 1>&2; exit ${2:-1}; }
                                                                                  ...
                                                                                  [ "${USER}" == "root" ] || die "User must be 'root'"
                                                                                  
                                                                                  1. 2

                                                                                    I also always print to standard error, but using echo, which I’m pretty sure most shells have as a built-in. The form I usually use is

                                                                                    err() { echo "$1" 1>&2; exit 1; }