1. 2

    I really want to use Guix, it checks all my boxes except the one where it works “out of the box” on my Intel NUC. :(

    1. 1

      Strange, why doesn’t it work on your NUC?

      1. 1

        This blog post goes into a lot of detail about how he installed Guix on his NUC, but the short answer is that there are no acceptable free drivers for the wireless card.

        https://willschenk.com/articles/2019/installing_guix_on_nuc/

    1. 3

      I just bought four ThinkPad x220/x230s because I want to get my nephews and nieces off of proprioware and teach them what a computer is… It’s a long shot, but I hope maybe one of them will catch the bug (and the better I do this preparation phase, the more likely it is to work).

      Right now I am setting up Guix on each of them; the next step is to create X sessions that can start them off with some instructions so they aren’t totally lost. Really hoping that the “hacker-y” computer with the terminals and the tiling WM will look cool to their peers.

      I’ll need some kind of remote management solution that can double as a communication mechanism. I was thinking tmux and ssh :)

      Finally, I may write a small DSL (with Icelandic as the natural language) for interacting with the window manager (translating Guix would be a bit much since they know some English, but I was looking into it a bit). I figured it could be a cool way to learn programming, especially if there are some malleable windows (some terminal TUI app) that the DSL can also touch.

      1. 1

        This sounds like an awesome project! Do you plan to document any of this in a blog or something?

        1. 1

          Yeah I’m working on something like that as well but it won’t be anywhere for a little while.

      1. 4

        from 13ms to under 1ms

        13ms means being more than a frame behind at a 90Hz display refresh, since one frame lasts about 11.1ms. For gamers, that is a very measurable competitive handicap. Or, at under 1ms, an advantage.

        I am hopeful for a world where “gaming keyboards” routinely get objectively reviewed for latency.
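        For reference, the arithmetic behind “over a frame behind” can be sketched in a few lines of C (the 13ms figure is the one quoted above; everything else is just 1000 divided by the refresh rate):

        ```c
        #include <stdio.h>

        int main(void) {
            const double refresh_hz = 90.0;               /* display refresh rate */
            const double frame_ms = 1000.0 / refresh_hz;  /* ~11.1 ms per frame */
            const double key_latency_ms = 13.0;           /* keyboard input latency */

            /* 13 ms of input latency exceeds one 90 Hz frame (~11.1 ms),
               i.e. more than a full frame of lag from the keyboard alone. */
            printf("frame time: %.1f ms\n", frame_ms);
            printf("frames of lag: %.2f\n", key_latency_ms / frame_ms);
            return 0;
        }
        ```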

        1. 1

          Is there any evidence that being 1 frame behind sometimes actually leads to losses in competitive gaming?

          1. 2

            Given the number of factors involved it’d be quite difficult to be certain, but I have little trouble believing it for twitchy reaction-centric games (eg classic shooters).

            1. 2

              I know for a fact if someone were speedrunning something it would be a massive issue. There are lots of things within speedrunning that need to be executed on a single specific frame.

              1. 3

                Wouldn’t most frame-perfect inputs have a decent lead-in time or at least be repeatable? In both cases you would know you need to press 13ms earlier and adjust accordingly - probably without even thinking about it.

                Same with competitive games, anticipating your opponent is probably more important than having a sub-13ms reaction time(!).

          1. 4

            I run a mostly-full Wayland setup on Arch Linux, and was recently bitten by this bug that causes Audacity to OOM the system only when it runs under Wayland. The workaround is to run it under Xwayland. Sigh.

            https://bugs.archlinux.org/task/67547?project=5&string=audacity

            Detecting XWayland with xlsclients

            xeyes is my favorite way to detect whether a client is running under Xwayland. It’s not super practical in every situation, but it sure is more fun.

            1. 3

              If you’re printing a pointer, you should probably use the pointer specifier, %p… That’ll work with pointers, regardless of your architecture, without messing with uintptr_t or inttypes.h macros.

              1. 2

                The article mentions %p, and how it’s not available on their target architecture.
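                For anyone following along, a minimal sketch of both approaches: %p where it exists, and the uintptr_t/inttypes.h fallback otherwise (variable names are just illustrative):

                ```c
                #include <stdio.h>
                #include <stdint.h>
                #include <inttypes.h>

                int main(void) {
                    int value = 42;
                    int *ptr = &value;

                    /* The usual way: %p expects a void pointer. */
                    printf("via %%p:      %p\n", (void *)ptr);

                    /* The fallback when %p is unavailable: convert to uintptr_t
                       and print with the PRIxPTR macro from <inttypes.h>. */
                    printf("via PRIxPTR: 0x%" PRIxPTR "\n", (uintptr_t)ptr);
                    return 0;
                }
                ```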

              1. 64

                Except that, as far as I can tell, Firefox isn’t produced by a malicious actor with a history of all sorts of shenanigans, including a blatantly illegal conspiracy with other tech companies to suppress tech wages.

                Sure, if your personal threat model includes nation states and police departments, it may be worthwhile switching to Chromium for that bit of extra hardening.

                But for the vast majority of people, Firefox is a better choice.

                1. 13

                  I don’t think we can meaningfully say that there is a “better” choice; web browsers are in a depressing technical situation where every decision has significant downsides. Google is obviously nefarious, but they have an undeniable steering position. Mozilla is more interested in privacy, but depends on Google; nor can they decide to break the systems that are created to track and control their users, because most non-technical users perceive the lack of DRM to mean something is broken (“Why won’t Netflix load?”). Apple and Microsoft are suspicious for other reasons. Everything else doesn’t have the manpower to keep up with Google and/or the security situation.

                  When I’m cynical, I like to imagine that Google will lead us into a web “middle age” that might clean the web up. When I’m optimistic, I like to imagine that a web “renaissance” would manage to break off Google’s part in this redesign and result in a better web.

                  1. 19

                    Mozilla also has a history of doing shady things and deliberately designed a compromised sync system because it is more convenient for the user.

                    Not to mention, a few years ago I clicked on a Google search result link and immediately had a malicious EXE running on my PC. At first I thought it was a popup, but no, it was a drive-by attack with me doing nothing other than opening a website. My computer was owned, only a clean wipe and reinstallation helped.

                    I’m still a Firefox fan for freedom reasons but unfortunately, the post has a point.

                    1. 11

                      a few years ago I clicked on a […] link and immediately had a malicious EXE

                      I find this comment disingenuous, because every browser on every OS had (or still has) issues with a similar blast radius. Prominent examples include hacking game consoles or closed operating systems via the browser, all of which ship some version of the WebKit engine. Sure, those hacks were used to “open up” the system, but they could have been (and usually are) abused in exactly the same way you described here.

                      Also, I’m personally frustrated by people holding Mozilla to a higher standard than Google, when it really should be the absolute opposite given how much Google knows about each individual compared to Mozilla. Yes, it would be best if some of the linked issues could be resolved so that Mozilla can’t intercept your bookmark sync, but I gotta ask: is that really a service people should be worried about? Meanwhile, Google boasts left, right, and center about how your data is secure with them, and we all know what that means. Priorities, people!

                      The parent comment is absolutely right: Firefox is a better choice for the vast majority of people, because Mozilla as a company is much more concerned about all of our privacy than Google is. Google’s goal always was and always will be to turn you into data points and make a buck off that.

                      1. 1

                        your bookmark sync

                        It’s not just bookmark sync. Firefox sync synchronizes:

                        • Bookmarks
                        • Browsing history
                        • Open tabs
                        • Logins and passwords
                        • Addresses
                        • Add-ons
                        • Firefox options

                        If you are using these features and your account is compromised, that’s a big deal. If we just look at information security, I trust Google more than Mozilla with keeping this data safe. Of course Google has access to the data and harvests it, but the likelihood that my Google data leaks to hackers is probably lower than the likelihood that my Firefox data leaks to hackers. If I have to choose between leaking my data to the government or to hackers, I’d still choose the government.

                        1. 1

                          If I have to choose between leaking my data to the government or to hackers, I’d still choose the government.

                          That narrows down where you live, a lot.

                          Secondly, I’d assume that any data leaked to hackers is also available to Governments. I mean, if I had spooks with black budgets, I’d be encouraging them to buy black market datasets on target populations.

                          1. 1

                            I’d assume that any data leaked to hackers is also available to Governments.

                            Exactly. My point is that governments occasionally make an effort not to be malicious actors, whereas hackers who exploit systems usually don’t.

                      2. 6

                        I clicked on a Google search result link

                        Yeah, FF is to blame for that, but I also lol’d at the fact that Google presented that crap to you as a result.

                        1. 3

                          Which nicely sums up the qualitative difference between Firefox and Google. One has design issues and bugs; the other invades your privacy to sell the channel to serve up .EXEs to your children.

                          Whose browser would you rather use?

                        2. 3

                          Mozilla also has a history of doing shady things and deliberately designed a compromised sync system because it is more convenient for the user.

                          Sure, but I’d argue that’s a very different thing, qualitatively, from what Google has done and is doing.

                          I’d sum it up as “a few shady things” versus “a business model founded upon privacy violation, a track record of illegal industry-wide collusion, and outright hostility towards open standards”.

                          There is no perfect web browser vendor. But the perfect is the enemy of the good; Mozilla is a lot closer to perfect than Google, and deserves our support on that basis.

                        3. 8

                          These mitigations are not aimed at nation-state attackers; they are aimed at people buying ads that contain malicious data that can compromise your system. The lack of site isolation in Firefox means that, for example, someone who buys an ad on a random site that you happen to have open in one tab while another is looking at your Internet banking page can use Spectre attacks from JavaScript in the ad to extract all of the information (account numbers, addresses, last transaction) that is displayed in the other tab. This is typically all that’s needed for telephone banking to do a password reset if you phone the bank and say you’ve lost your credentials. These attacks are not possible in any other mainstream browser (and are prevented by WebKit2 for any obscure ones that use it, because Apple implemented the sandboxing at the WebKit layer, whereas Google hacked it into Chrome).

                          1. 2

                            Hmmmm. Perhaps I’m missing something, but I thought Spectre was well mitigated these days. Or is it that the next Spectre, whatever it is, is the concern here?

                            1. 11

                              There are no good Spectre mitigations. There’s speculative load hardening, but that comes with around a 50% performance drop so no one uses it in production. There are mitigations on array access in JavaScript that are fairly fast (Chakra deployed these first, but I believe everyone else has caught up), but that’s just closing one exploit technique, not fixing the bug and there are a bunch of confused deputy operations you can do via DOM invocations to do the same thing. The Chrome team has basically given up and said that it is not possible to keep anything in a process secret from other parts of a process on current hardware and so have pushed more process-based isolation.

                        1. 5

                          For example, we recently had a bug in Alpine where pipewire-pulse was preferred over pulseaudio due to having a simpler dependency graph.

                          As a pipewire user, this sounds like a feature to me, not a bug. Pipewire works, pulseaudio never did.

                          1. 3

                            I was directly involved in the bug referenced in the blog post. Pipewire does not work for some use cases, e.g. where echo cancellation is needed (phones). So it’s not a real alternative for Pulseaudio, yet, until it can replace all functionality.

                            1. 2

                              In my case, I welcome it. Because it does take care of what I actually consider important. It allows pro audio, literally jack pipelines running on pipewire as-is, while still working for e.g. music players, videogames, videoconference and so on.

                              I used to have a convoluted setup with a fake alsa device feeding into jack. Now I just use pulseaudio.

                              It doesn’t allow me as low a latency as jack did (I have to run it at ~10ms, when I could do jack at 5ms), but this is fortunately tolerable for me at this time.

                              1. 3

                                I’m looking forward to replacing pulseaudio, but there’s no shortage of people clamoring for the immediate end to pulseaudio just because their specific use case can be satisfied by pipewire, with no consideration for all of the things that pulseaudio provides that pipewire currently does not.

                                1. 2

                                  My perspective is that if you can do pro audio (and with low latency) as pipewire is able to, general purpose use is, at least, a possibility.

                                  If you fundamentally cannot (as it’s the case with pulseaudio), then it will never be able to target general purpose. It will never be more than a toy.

                                  A rewrite could change this, but a rewrite is what pipewire already is.

                            2. 2

                              Unless you need multiple users to play audio at once using the same audio devices, in which case pipewire doesn’t seem to have anything to address that at the moment. I tried to switch after a reinstall but I couldn’t get that to work nor find anyone actively working on the problem.

                              1. 1

                                Interesting use case. I hope that’s at least a bug in some bugtracker. Lots of families out there do have a single computer they share among the family members.

                            1. 4

                                APK definitely has been nice to use. It’s lightning fast, and just kinda “does what I mean”. On the other hand, I’ve had a lot of trouble with APT: errors about packages being kept back without knowing why, strange errors, and a lot of having to do apt install --fix-broken. But then again, there are also a lot more packages on APT, so it might not be a fair comparison; I tend to use APK in less risky situations (more on the server side). Always interesting to read about these things tho!

                              1. 7

                                I get irrationally angry every time apt tells me “you have held broken packages”. If the machine was sentient, I would yell back, “no, you have held broken packages! I haven’t touched your repositories! You messed up here, apt, not me!”

                                1. 1

                                    I feel that every kind of software should have a “no, I didn’t screw up, you screwed up” mode.

                                  1. 1

                                    other than admitting guilt, what else would that mode do? would you really trust the thing that screwed you up to unscrew up the situation?

                                    1. 3

                                      would you really trust the thing that screwed you up to unscrew up the situation?

                                      I mean, with humans who can proactively admit their mistake, that’s usually exactly what I do. Typically works fine.

                                      Somewhat less so with machines though.

                                      1. 1

                                        They are talking about software (so, machines), not humans.

                                      2. 1

                                        I’d expect it to provide some kind of detailed trace that lists the assumptions that were made and explains how it ended up in the error case, such that I can mark the faulty assumption and let the application create an error report for me that I can submit to a bug tracker if I want to.

                                1. 16

                                  I think its speed is one of the things that makes apk (and therefore alpine) so well suited to containers.

                                  It used to be that the slowness of apt wasn’t a huge issue. You would potentially have to let apt spin in the background for a few minutes while upgrading your system, and, once in a blue moon when you needed a new package right now, the longer-than-necessary wait wasn’t a huge deal. But these days, people spin up new containers left, right, and center. As a frequent user of Ubuntu-based containers, I feel that apt’s single-threaded, phase-based design frequently imposes a large time cost. It’s also one of the things that makes CI builds excruciatingly slow.

                                  1. 4

                                    distri really can’t happen fast enough… the current state of package management really feels stuck in time.

                                    1. 1

                                      I feel like speed could be a non-issue if the repository state were “reified” somehow. Then you could cache installation as a function, like

                                      f(image state, repo state, installation_query) -> new_image_state
                                      

                                      This seems obvious but doesn’t seem like the state of the art. (I know Nix and guix do better here, but I also need Python/JS/R packages, etc.)

                                      The number of times packages are installed in a container build seems bizarre to me. And it’s not even just builds: right now, every time I push to a CI on Travis or sourcehut, it installs packages. It seems very inefficient and obviously redundant. I guess all the CI services run a package cache for apt and so forth, but I don’t think that is a great solution, and I use some less common package managers like CRAN, etc.

                                      1. 2

                                        Part of it is no doubt that hosted CI platforms don’t do a great job of keeping a consistent container build cache around. You usually have to manually manage saving and restoring the cache to some kind of hosted artifact repository, and copying it around can add up to a nontrivial chunk of your build time.

                                        At my previous job, that was a big part of our motivation for switching to self-hosted build servers: with no extra fussing, the build servers’ local Docker build caches would quickly get populated with all the infrequently-changing layers of our various container builds.

                                      2. 1

                                        This sounds reasonable, until you realise it means that containers are constantly being rebuilt rather than just persisted and loaded when needed.

                                        1. 3

                                          Yeah, but they are. Look at any popular CI - TravisCI, CircleCi, builds.sr.ht, probably many, many others. They all expect you to specify some base image (usually Debian, Ubuntu or Alpine), a set of packages you need installed on top of the base, and some commands to run once the packages are installed. Here’s an example of the kind of thing which happens for every commit to Sway: https://builds.sr.ht/~emersion/job/496138 - spin up an Alpine image, install 164 packages, then finally start doing useful work.

                                          I’m not saying it’s good, but it’s the way people are doing it, and it means that slow package managers slow things down unreasonably.

                                          1. 2

                                            If you’re rebuilding your OS every time you want to test or compile your application, it’s not the package manager making it slow, no matter what said package manager does.

                                          2. 1

                                            Persistence can be your enemy in testing environments.

                                            1. 2

                                              Sure re-deploy your app, but rebuild the OS? I understand everybody does it all the time (I work in the CI/CD space), but that doesn’t mean it’s a good idea.

                                        1. 7

                                          I love Fantasque Sans Mono; it’s so damn cheerful and twee every time I look at a terminal or editor.

                                          1. 5

                                            l and I look identical in that font :(

                                            1. 26

                                              As in the font used on lobste.rs which made your comment a bit hard to parse ;)

                                              1. 1

                                                yeah, fonts shouldn’t introduce ambiguity by displaying different characters the same way.

                                              2. 6

                                                l and I

                                                Perhaps I’m missing something, but if I type them in the code sample input box on Compute Cuter (selecting Fantasque Sans Mono) they look different to me?

                                                1. 3

                                                  I also see clearly identifiable glyphs for each when I try that. The I has top and bottom serifs, the l has a leftward head and rightward tail (don’t know what you call em), and only the | is just a line.

                                                2. 1

                                                  Honestly when is that ever a real issue? You’ve got syntax highlighting, spellcheck, reference check, even a bad typer wouldn’t accidentally press the wrong key, you know to use mostly meaningful variable names and you’ve never used L as an index variable… So maybe if you’re copying base64 data manually but why?

                                                  1. 9

                                                    My friend, whose name is Iurii, started spelling his name with all-lowercase letters because people called him Lurii. Fonts that make those indistinguishable even in lowercase would strip him of his last-resort measure to get people to read his name correctly. (Of course, spelling it Yuriy solves the issue, but Iurii is how his name is written in his ID documents, so it’s not always an option.)

                                                    1. 2

                                                      It could be, and it’s not just limited to I, l, and 1. That’s why in C, when I have a long integer literal, I also postfix it with ‘L’: 1234L. Doing it that way makes it stand out easier than 1234l. And if I have to do an unsigned long literal, I use a lower case ‘u’: 5123123545uL. That way, the ‘u’ does stand out, compared to 5123123545UL or 5123123545ul.
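                                                      A tiny illustration of that suffix convention (the values are the ones from the comment above):

                                                      ```c
                                                      #include <stdio.h>

                                                      int main(void) {
                                                          long a = 1234l;  /* lowercase 'l' reads like the digit 1 */
                                                          long b = 1234L;  /* uppercase 'L' is unambiguous */

                                                          /* Lowercase 'u' keeps the suffix visually distinct: */
                                                          unsigned long long c = 5123123545uL;

                                                          printf("%ld %ld %llu\n", a, b, c);
                                                          return 0;
                                                      }
                                                      ```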

                                                    2. 1

                                                      cf

                                                  1. 22

                                                    Sadly that’s a really difficult bit of software to use. The license states that you cannot use it in any capacity without emailing the author.

                                                    I’d be very hesitant to engage with this at all, unfortunately.

                                                    1. 5

                                                      Yeah that licence is … interesting. I could understand emailing for permission to modify it, but just to use it seems a bit over the top.

                                                      1. 18

                                                        This is an effort to fight individual exploitation in the FOSS community.

                                                        By writing proprietary software. ;)

                                                        The fact that the source code is available doesn’t make it less proprietary.

                                                      2. 1

                                                        Where do you see the license?

                                                        1. 3

                                                          It’s at the bottom of README.md.

                                                        2. 1

                                                          Not at all. Compiling and running the code privately or for educational purposes would fall under fair use.

                                                          Exploitation is a huge problem in the community, and it starts with little acts like this to fight it, even if it isn’t what people are used to. And as time goes on I will refine what I do to help the problem. ☺

                                                          1. 5

                                                            Exploitation is a huge problem in the community, and it starts with little acts like this to fight it, even if it isn’t what people are used to.

                                                            I’m not sure this achieves anything, honestly. Other than of course, being proprietary software in an effort to “fight exploitation in the FOSS community”.

                                                            1. 2

                                                              what “exploitation” are you referring to?

                                                              1. 1

                                                                The most recent event, which really opened my eyes, was the one where Amazon took over ElasticSearch.

                                                                My code can still be used under fair use, and is available for reading.

                                                                1. 16

                                                                  Amazon didn’t take over ElasticSearch… Elastic chose to relicense it under a proprietary license, and then Amazon forked the latest Apache 2.0-licensed version into a competing product.

                                                                  1. 1

                                                                    Any software licensed under terms that prevent Amazon (or any other party) from doing this is not free. Maintainers of software that claims to be free software should not be able to prevent users from modifying that software in ways they disapprove of.

                                                              2. 2

                                                                That’s nice for users of your software in countries where Fair Use exists as a concept in copyright law.

                                                                In the UK for example, the concept of Fair Use is described as Fair Dealing, and a defence exists to copyright infringement if it is for the purpose of ‘academic study’, ‘criticism or review’, or ‘reporting of current events’.

                                                                Running this bot, for example, for my own use in a channel unrelated to its development, I don’t believe would reasonably fall into any of those three buckets.

                                                                Have you considered a strong licence like AGPL-3.0?

                                                            1. 3

                                                                This isn’t a panacea. Hosting your own email is certainly a lot of work, and the anxiety over whether you’ve set up enough DKIM et al. to appease other mail providers is real.

                                                              1. 2

                                                                I think that’s an exaggeration.

                                                                  Yes, it’s more than zero effort. Yes, you also want to set up DKIM, etc.; every guide I’ve seen posted here explains how to do that, though. But once it runs, it runs. Compared to other things, even static web hosting, e-mail feels a lot more stable. The only thing that changed in the last two decades was the anti-spam measures, which involve running an additional service and used to be more complicated; now rspamd does parts of that for you. It still wasn’t hard back then, and it’s not now.

                                                                It’s probably nothing you should do as your first project (even though it was my first vserver back in the days before any professional experience and it still worked fine). But it’s certainly a lot more straight forward and a lot less maintenance than other things I self-hosted or that I’ve done professionally.

                                                                  The whole “you will get blocked by everyone” is simply not true. You won’t get blocked for sending from an unknown IP (otherwise all those “pay extra for a dedicated IP” offers from Mailgun, Mandrill, etc. wouldn’t make sense), and you also won’t get blocked for sending from an unknown domain. Google is the most draconian here, but if you set up SPF and DKIM and maybe enter the correct reverse DNS entry into the according input field at your hosting provider, you are good.

                                                                  On top of that, e-mail handles unreliable setups like barely any other protocol. There are retries of queued emails, there are bounce notifications, etc. E-mails don’t just disappear silently, even if something is down or broken.

                                                                  So, to someone reading this: if you really are anxious, just set up a server, look at how it works, let it run for a while (a 5 USD server somewhere can get you far), use it for stuff that isn’t critical, and see for yourself.

                                                                1. 1

                                                                    I think you’re underestimating how badly residential/small-business ISP and low-end VPS IP pools can be treated in blocklists. If I hosted email on my lonesome, I would probably get a service like Mailgun, or an SMTPd on a server provider with a good reputation, to front for my real mail server.

                                                                  Either way, it’s still operational workload someone may not want to take on - people have enough in their life. There’s a reason why email providers exist (and businesses pay for them), and I’ve said it before - if it’s not a core competency/differentiator for you, why bother with it? It’s going to become a cost and liability centre for you.

                                                                  1. 1

                                                                    I think you’re underestimating how badly residential/small-business ISP and low-end VPS IP pools can be treated by blocklists.

                                                                    The article was speaking about Vultr, not some residential ISP. I also think that if you used a residential connection, you’d see very quickly that pretty much nobody accepts email from those ranges.

                                                                    For low end VPSs: I haven’t come across that yet.

                                                                    If I hosted email on my own, I would probably get a service like Mailgun or an SMTPd on a server provider with a good reputation to front for my real mail server.

                                                                    In my experience the bounce rate on Mandrill and Mailgun is a couple of hundred times higher than from some tiny hosting company somewhere, because there are certainly people spamming through Mailgun. That’s also exactly why Mailgun and others offer dedicated IPs that don’t suffer from their shared reputation.

                                                                    Either way, it’s still operational workload someone may not want to take on

                                                                    Sure. Then don’t do it.

                                                                    I am just saying it’s pretty low compared to a lot of other services (most, if not all, that I ever ran both privately and in business). Self-hosting of course implies that you have to do stuff yourself. Just like cooking for yourself means you need to buy the ingredients yourself and make sure nothing burns, all on your own. So I’d consider that implied.

                                                                    However, I really do think there is a lot of FUD in this area (of course; after all, people want to sell you their mail-related products). That’s the reason I am writing this. People make it sound like the whole world is going to fall down on you, when the reality is that it isn’t. And of course you should have a plan in case of problems, but the same is true for “what if your smartphone dies”, or “what do you do when your Gmail or whatever account becomes inaccessible” (technical problems, some form of attack, your account getting blocked for some reason, etc.; all things that happen).

                                                                    Fifteen years ago, I was a naive teenager and just went for it, for fun. Not having much money, I had to settle for a tiny vserver from some unknown company. It turned out to work surprisingly well. When I mentioned this, people said it only works because the server is so old and my IP has therefore built up a good reputation; if I ever switched servers I’d supposedly run into a lot of trouble, because nobody would trust the new one. I don’t even send that many emails, so I think most of the internet doesn’t even know that server exists.

                                                                    Later I did switch to another provider, because I wanted to host more things and that 512 MB vserver was getting too small for all the services already running on it. In the process I switched to another OS family, to a completely different SMTP server, and from Dovecot 1 to Dovecot 2. Oh, and I switched from SpamAssassin to rspamd. Everything worked. Planning took ten minutes, and the switchover was quick as well; a few hours at most. To the best of my knowledge nobody noticed.

                                                                    I am certainly no expert in the area of email. I just set up these two servers, over a decade apart, where the only thing I changed was adding SPF and DKIM. The only maintenance I did was running the standard update procedure for the OS and packages whenever it told me it wanted one.

                                                                    One thing I have to admit is that I got really lucky regarding disasters: none in those fifteen years. If a disaster did hit, it would mean copying some files from a backup onto a new server and changing some DNS records.

                                                                    If I’d ever run into issues or for whatever reason wanted to change things there’s nothing that prevents me from switching my domain over to use some form of a managed service.

                                                                    Oddly enough, Gmail users in those years had more issues with their email not working properly than I did. But just to be clear, it’s a sample size of one. It most likely also comes down to luck (and bad luck on Google’s side). Nobody should take this as representative.

                                                                    There were other occasions where I set up email servers too: centralizing status mails (think of what Debian and the BSDs send out daily). Those were just the minimum parts required to get mail servers to accept emails. Here, Google is the strictest one I’ve seen in the wild. The fact that these status mails might contain the hostnames of spamming mail servers trying to connect doesn’t make that any better. ;)

                                                                    Only for smaller fun projects. Nothing serious that anyone really relies on there though.

                                                                    1. 1

                                                                      The article was speaking about vultr, not some residential ISP

                                                                      By default, vultr blocks outbound SMTP. You need to raise a support ticket for them to unblock it, explaining why, the volume of email you expect to send, and so on. That said, it took them about three hours to unblock it for me, as a new customer, and I haven’t had any problems since then.

                                                                2. 1

                                                                  Tbh the DKIM/SPF stuff is easy peasy. And it’s something you have to address even if you don’t host mail yourself but use your own domain for mail.

                                                                  The reasons I don’t host mail myself relate to availability and recovery/backups. If things silently break for whatever reason, then I’m not receiving emails until I notice it. Some monitoring can help with obvious issues, but not with every situation where mails silently fail to reach my mail client. If things break really badly (from an upgrade or whatever), then I hopefully have a way to quickly(!) restore backups and get it going again. Making sure that’s possible is not trivial, and it requires maintenance if the expectation is that it stays reliable over time.

                                                                  A lot of important things I do for work, life, etc. still require email, so it needs to be reliable and robust. I’m not ready to accept that risk, or commit to the additional time to get that right (and keep it right over time).

                                                                  1. 1

                                                                    Some monitoring things can help with obvious issues, but not every situation where mails are silently not making it to my mail client

                                                                    Could you give me an example of emails silently not making it?

                                                                    A lot of important things I do for work, life, etc still require email, so it needs to be reliable and robust.

                                                                    Same here. E-Mail is my main form of communication. Both business and private.

                                                                    1. 2

                                                                      Could you give me an example of emails silently not making it?

                                                                      If your mail server rejects some mail because it looks malformed, but it turns out Google or some other provider is laxer about validating mail. This happened to me hundreds of times when I used to host my own email. Now that I don’t, it’s not my problem :)

                                                                      1. 1

                                                                        Sure, and some failure modes have nothing to do with the email specifications, e.g. your email server application/system hangs or experiences some OS/application/hardware failure. Some of these failures could even result in silent data corruption. There’s really nothing exceptional here, just the normal risks of self-hosting, except now it’s a service that might be more critical than the blogs and other things most folks self-host.

                                                                  1. 2

                                                                    Looks very similar to tmux and screen with windows/panes.

                                                                    1. 5

                                                                      The difference being that it doesn’t care about detaching sessions and just focuses on virtual terminals. The same author (who also created the interesting vis editor) wrote abduco, which implements just the detaching.

                                                                      I used to use dvtm together with st, which is a good combination.

                                                                      1. 2

                                                                        I saw that, but if you’re just going to rely on another app to do session stuff, then how is this statement (by the author) true?

                                                                        Together with dvtm it provides a simpler and cleaner alternative to tmux or screen.
                                                                        

                                                                        Relying on multiple applications (with their own config, process, etc) to replace the functionality of one doesn’t seem simpler or cleaner on the surface, but they don’t explain how it is.

                                                                        1. 3

                                                                          I immediately see the benefit, personally. I’m generally not interested in tiling etc., but I want session management. It’s nice to be able to pick one without the other.

                                                                          1. 1

                                                                            It’s unixy. Tiling and session management are two separate features that don’t depend on one another. So why bundle them? I do get why some people are interested in it, but I also agree that I don’t need both all the time.

                                                                            1. 1

                                                                              I use abduco everywhere:

                                                                              • It’s made sucklessly, so it’s easy to hack, very fast to compile, compatible with TCC, only uses a Makefile, etc.
                                                                              • It changes your terminal minimally, only changing your cursor, which I don’t mind. With tmux and dvtm, I have to wrestle with weird $TERM behaviour.
                                                                              • The fewer features, the less documentation there is to read. The manpage is small, and there’s a small number of intuitive options.
                                                                        1. 2

                                                                          All of these assume that the developers are to be trusted. What if that is not the case? What if Daniel goes rogue?

                                                                          1. 12

                                                                            Trust is an interpersonal thing. If you don’t trust Daniel or anyone who reviews his work, then you just have to use something else that is controlled or reviewed by people you do trust.

                                                                            No technical measure will get you around that.

                                                                            1. 1

                                                                              You put curl in a sandbox on your machine.

                                                                              1. 1

                                                                                That’s OK for privilege escalations, but many other backdoors are possible. A sandbox won’t help you if curl is patched to generate TLS keys guessable by a third party.

                                                                                1. 1

                                                                                  Then you can either not run it and run something else, or you can audit the source code and then build it (instead of relying on distro/other package management to build possibly unknown source for you).

                                                                                  1. 1

                                                                                    It’s a nice idea, but not realistic for any normal project. People don’t have the time/budget/skills to do this. Realistically it’s cheaper to write the part of curl you want to use yourself than to audit curl to a degree where you’re confident there’s no hidden backdoor.

                                                                                    1. 1

                                                                                      My point being, those are your options. That’s it. If you trust no one, then write things yourself.

                                                                            1. 5

                                                                              Ah, they could have gone with Alpine Linux, which has first-class ARM support and already operates without systemd (so no need to endlessly keep pulling it out of the distro you’re based on as systemd-isms creep in).

                                                                              1. 7

                                                                                Void is another great option with first class ARM support. It also offers a choice between glibc or musl versions, and even has an unofficial port to ppc.

                                                                              1. 4

                                                                                A handful from my setup:

                                                                                • h: navigate through directory history
                                                                                • b: bookmark directory paths, jump to bookmarks
                                                                                • git-sel-changed and git-edit-changed: use fzf to select from changed files in a git repo. git-edit-changed just wraps git-sel-changed.
                                                                                • VimwikiMakeLink: use fzf to select a page / tag to link to in my vimwiki setup.
                                                                                • ,f: a vim keybinding to quickly open files by way of fzf
                                                                                • ,F bound to FragmentMenu: include output from a fzf menu of scripts starting with fragment-. Date formats, brief markup templates, text decorations, etc.

                                                                                Of these, I use h and the vim stuff by far the most, but it’s a great general pattern and I should extend it to other things.

                                                                                Relatedly, rofi works pretty well for doing similar tasks under X. More or less a nicer replacement for dmenu.

                                                                                1. 1

                                                                                  mind sharing how you set up h and b?

                                                                                  1. 2

                                                                                    First, in .zshrc, keep a log of directory history:

                                                                                    # Record directory history to a simple text file:
                                                                                    function chpwd {
                                                                                      echo "$PWD" >> ~/.directory_history
                                                                                    }
                                                                                    

                                                                                    I haven’t solved keeping the history for Bash, since I rarely use it on my desktop machine, but SO has Is there a hook in Bash to find out when the cwd changes? which suggests that wrapping cd in a function could work well enough:

                                                                                    function cd() {
                                                                                        builtin cd "$@"
                                                                                        chpwd
                                                                                    }
                                                                                    

                                                                                    Next, define h and b themselves, in a file that’s sourced by both Bash and ZSH, so they should work fine in .zshrc or .bashrc:

                                                                                    # Jump around in recent directory history - takes an optional query string:
                                                                                    function h {
                                                                                      if [ -n "$*" ]; then
                                                                                        cd "$(tail -2500 ~/.directory_history | tac | awk '!x[$0]++' | fzf --no-sort --height=50% -q "$*")"
                                                                                      else
                                                                                        cd "$(tail -2500 ~/.directory_history | tac | awk '!x[$0]++' | fzf --no-sort --height=50%)"
                                                                                      fi
                                                                                    }
                                                                                    
                                                                                    # Bookmark list - if given a parameter, treats it as a path to add to the list:
                                                                                    function b {
                                                                                      if [ -n "$1" ]; then
                                                                                        echo "$(realpath "$1")" >> ~/.directory_bookmarks
                                                                                      else
                                                                                        cd "$(sort ~/.directory_bookmarks | uniq | fzf --no-sort --height=50%)"
                                                                                      fi
                                                                                    }
                                                                                    

                                                                                    The awk '!x[$0]++' bit is handy for a bunch of stuff - I actually keep it in ~/bin/unsorted-unique. Just filters out duplicates from the list.

                                                                                    Edits: Noticed I could use some extra quoting around directory names in h() and b().
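                                                                                    The filter can be tried in isolation; unlike `sort -u` it keeps the original order, printing only the first occurrence of each line:

                                                                                    ```shell
                                                                                    # x[$0]++ evaluates to 0 (falsy) the first time a line is seen,
                                                                                    # so !x[$0]++ is true exactly once per distinct line:
                                                                                    printf 'b\na\nb\nc\na\n' | awk '!x[$0]++'
                                                                                    # prints:
                                                                                    # b
                                                                                    # a
                                                                                    # c
                                                                                    ```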

                                                                                1. 2

                                                                                  Line continuation in Elvish uses ^ instead of \

                                                                                  Unlike traditional shells, line continuation is treated as whitespace.

                                                                                  Why? This would mess with me (and maybe others coming from posix/bash/everywhere else that doesn’t do it like this), and there’s no explanation of why it’s treated as whitespace or why it uses ^ instead of \

                                                                                  1. 3

                                                                                    \ is an ordinary bareword to make Elvish more ergonomic on Windows - Windows support is experimental right now but I do expect Windows to become a first-class platform supported by Elvish.

                                                                                    For the whitespace part, there are actually very few languages with the bash behavior, and I’ve been bitten by that in the past. I also find it more intuitive for line continuation to function as whitespace.
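                                                                                    A small sketch of the bash behavior being contrasted here: backslash-newline is removed entirely, so it can split a word mid-token, whereas a continuation that acts as whitespace (as Elvish’s does) would keep the halves as separate arguments.

                                                                                    ```shell
                                                                                    # In bash/POSIX sh the backslash-newline pair is simply deleted,
                                                                                    # so "foo" and "bar" fuse into one word:
                                                                                    echo foo\
                                                                                    bar
                                                                                    # prints: foobar
                                                                                    ```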

                                                                                    I considered adding the rationale for the features to the page, but that would make it too long and no longer a “quick” tour. Maybe some of the more non-obvious design choices should be added to the FAQ.

                                                                                    1. 1

                                                                                      Maybe make the backslash behavior platform specific, so it behaves normally-for-shells on non-Windows systems? Otherwise I can see this tripping up a lot of users — I can’t think of any language I use that doesn’t use backslash for line continuations, so it’s kind of reflexive to use it.

                                                                                      1. 1

                                                                                        That’s an option I considered, but the additional complexity doesn’t seem to be worth it.

                                                                                    2. 1

                                                                                      ^ for continuation is per Windows (cmd, maybe others too), but not the whitespace rule.

                                                                                    1. 2

                                                                                      Didn’t we used to have Glade?

                                                                                      1. 2

                                                                                        It’s still around, but very buggy, and the GNOME folks seem to be abandoning it (this post being a case in point).

                                                                                      1. 2

                                                                                        Can someone explain why is this spam?

                                                                                        Here are some more details about the project, taken from https://www.reddit.com/r/linux/comments/mao4ef/modularity_of_the_hardware_kind_a_lil_project_ive/

                                                                                        The gist: I’m basically applying encapsulation to circuitry, so that gadgets, in this case a Linux-running computer, can be built in a quick, mix-and-match style. Fast hardware prototyping becomes significantly easier, so that effort can be concentrated on the software development. For example, on the page linked below, I put a few demos such as a rapidly implemented automatic plant-watering device.

                                                                                        The (3D-printed) boxes of the blocks are openable, and repairable of course when needed. Also playing an important role in this particular video is the compact Raspberry Pi Compute Module, which contains the minimum brains of the full Raspberry Pi board.

                                                                                        1. 5

                                                                                          the link you submitted is really light on details, and has a big “count me in” button at the top. this is basically an ad.

                                                                                          1. 1

                                                                                            Thanks, that’s a fair point. I was coming from that thread on the Linux subreddit, having watched the video and read the discussion thread. The site does have a timeline with videos and images, but it’s light on details to read. I’ll keep this in mind for future submissions, thanks again.