Threads for Diti

    1. 4

      EditorConfig support is now built-in, nice!

    2. 1

      Thanks for the review. This, plus the fact it does not support hardware AV1 encoding, is a downer. I still need an Apple device (the only way to convert Apple ProRes RAW files from Atomos recorders), but it will have to wait.

    3. 4

      I’m not sure I get it, what makes devenv different/better than a shell.nix?

      1. 6

        Abstracted support for services, language versions, etc. It’s basically an alternative that’s both simpler and provides more features up front. But it may not support the advanced tinkering. If you need to do more than devenv can provide, you can always revert to your shell.nix

      2. 2

        Way easier to understand if just some random guy on your team wants to add a package or shellHook or whatever.

    4. 2

      I have been wondering how to rewrite my home-manager configuration into flakes, but couldn’t make sense of it. I like how straightforward it seems to be. Thanks for the article!

      It makes me want to use NixOS too, but I am not sure how to mix your examples (of using home-manager and having per-machine configurations) to make them work so that ONE of the machines is NixOS and uses your flake-enabled files, while still remaining usable by the non-NixOS ones.

      1. 1

        For the NixOS machine you have the flake output nixosConfigurations.<hostname>, while for the non-NixOS configurations you can just use homeConfigurations.<user>@<hostname> (at least I’m pretty sure).
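        A rough sketch of how those two flake outputs get consumed (flakes enabled; the flake path, hostnames, and username are placeholders, not from the thread):

```shell
# On the NixOS machine: builds nixosConfigurations.<hostname>
sudo nixos-rebuild switch --flake ~/dotfiles#myhost

# On a non-NixOS machine: builds homeConfigurations."<user>@<hostname>"
home-manager switch --flake ~/dotfiles#alice@otherhost
```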

    5. 2

      I let iMazing do daily backups (usually over wifi) to my Synology NAS.

    6. 4

      If you use 1Password, you can do it all within 1password:

      1. 1

        That only works on the fields that 1Password recognizes as email fields. It doesn’t work on a “login” field which must be an email, for example.

        1. 2

          That’s more than a little annoying. If you could do it from the UI and copy and then paste, that would be fine, but if you literally can’t do it, that’s miserable.

    7. 4

      Unsurprisingly, the contents of this book only target Bash. :(

    8. 2

      Interesting article. I almost skipped over it because of its name mentioning a legacy technology (as if it had talked about TLS by mentioning “SSL certificates”).

      Does anybody know if broadcasting Atom feeds has any impact on Search Engine Optimization?

      1. 1

        RSS seems to be the colloquial term for feeds in general. And “RSS Feeds” gets people thinking about the right thing whereas “Feeds” is a little too generic.

        Google at least will subscribe to your feed and treat it similarly to a sitemap. I don’t know if this directly affects your ranking, I would assume not or at least not much.

    9. 1

      For anyone who doesn’t know, it exists in Elixir (Erlang) as well, in the form of IO Data.

    10. 2

      my recently adopted secret weapon for all this is direnv.

      I have .envrc in my global .gitignore file and just drop the configs into the different project directories. This way my shell startup is also faster since things are only loaded when they are required.
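      For anyone who hasn’t used it, a minimal .envrc might look like this (the variable and value are made-up examples; `PATH_add` is a direnv builtin):

```shell
# .envrc — direnv evaluates this on cd into the project, unloads it on cd out
export DATABASE_URL="postgres://localhost/dev"   # hypothetical example value
PATH_add bin   # prepend ./bin to PATH, direnv-style
```

Remember to run `direnv allow` once per project before it takes effect.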

      1. 1

        Yeah, I was going to say, OP basically reinvented direnv!

    11. 1

      Some zshzle(1) and zshmisc(1) goodness.

      # Submit command but keep command & cursor at same position
      bindkey '^\n' accept-and-hold
      stty -ixon # Disable XON/XOFF output control (^S/^Q)
      bindkey '^R' history-incremental-pattern-search-backward
      bindkey '^S' history-incremental-pattern-search-forward
      # No longer need to quote URLs with yt-dlp (youtube-dl)
      alias ytdl='noglob yt-dlp'
    12. 6

      The one that probably gets the most use is

      alias _="cd $(mktemp -d) ; "

      It creates and drops you into a /tmp dir. These get automatically cleaned up on reboot. The directory is fixed when the alias is defined (the double quotes make $(mktemp -d) expand right then), so it will always take you back to the same tmp dir for that shell session. It’s pretty basic but prevents a lot of clutter.

      1. 2

        I learned that /tmp is not always mounted as tmpfs; on WSL, for instance, this directory persists even after reboot.

        To ensure mktemp -d creates an actual temporary directory, one should create and mount a tmpfs themselves (in their home directory, maybe?), and have a $TMPDIR environment variable pointing to this directory, before calling mktemp(1). Seems inefficient, though.
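        A sketch of the $TMPDIR half of that idea (the actual tmpfs mount needs root, so it is only shown as a comment; `~/.tmp` is an arbitrary choice):

```shell
# Point mktemp(1) at a directory we control via $TMPDIR.
# Mounting a real tmpfs there would need root, e.g.:
#   sudo mount -t tmpfs -o size=512M,mode=700 tmpfs "$HOME/.tmp"
mkdir -p "$HOME/.tmp"
export TMPDIR="$HOME/.tmp"
workdir=$(mktemp -d)   # now lands under $TMPDIR
echo "$workdir"
```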

    13. 3
      alias g="git"
      alias la='ls -lA --color=auto'
      alias sc='screen -xRS'
      alias sl='screen -list'
      # With this function you can explore the filesystem,
      # and display contents of both directories and files
      # without going to the beginning of the line to
      # switch between ls and less.
      l() {
          if [ -z "$2" ] && [ -f "$1" ]; then
              less "$1"
          else
              ls -l --color=auto "$@"
          fi
      }
      # Create and enter a directory
      function mkcd { mkdir -p "$1"; cd "$1"; }
      1. 2

        extra niceties with git

        ga -> git add

        gap -> git add -p

        gb -> git branch

        gc -> git checkout

        gp -> git push

        gbb -> git for-each-ref --sort=committerdate refs/heads/ --format='%(committerdate) %(refname:short)'

        (last one prints your branches sorted by last commit date, great for finding the “recent branches”)
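        Spelled out as shell aliases (my transcription of the list above; the format string needs quoting so the shell doesn’t eat the parentheses):

```shell
alias ga='git add'
alias gap='git add -p'
alias gb='git branch'
alias gc='git checkout'
alias gp='git push'
# branches sorted by last commit date
alias gbb="git for-each-ref --sort=committerdate refs/heads/ --format='%(committerdate) %(refname:short)'"
```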

      2. 1

        I have sl aliased to ls to catch typos lol

        1. 1

          If you install the sl package, you get a steam locomotive blocking your terminal for a few seconds.

      3. 1

        I happen to use basically the same l and mkcd functions. Tip: I would be tempted to put && between mkdir and cd.

        Here is an extended mkcd function that also lets you carry files along while changing directory. I use fish, so it’s in fish:

        function mkcd --description 'create, move zero or more files into and enter directory'
            set -l argc (count $argv)
            if test $argc -eq 0
                echo "Usage: mkcd [carry files…] destdir/"
                return 1
            end
            mkdir -p $argv[-1]
            or return
            if test $argc -gt 1
                mv $argv
                or return
            end
            cd $argv[-1]
        end
    14. 12

      Hot take: Could font designers please just agree that the only valid way to write 0 for technical fonts is with a dot in the middle? 0-with-nothing is irritatingly ambiguous with O, 0-with-a-slash is irritatingly ambiguous with Ø, and I’ve never seen the 0-with-broken-edges actually used outside of Brazilian license plates.

      1. 7

        Just pulled some statistics from what people download:

        The dotted-zero is indeed the most popular.

      2. 7

        I love slashed zeroes!

        I’ve never used Ø or had to.

        1. 17

          What a strange coincidence.

        2. 7

          An Ø bit my sister once.

          1. 5

            Ø bites cån be very painful!

            1. 3

              Yes but it’s not common for islands to bite.

      3. 4

        Nah, I like my slashed zeros. You just need properly distinguishable characters. Many font designers get it wrong.

      4. 2

        Or just let you choose. There were a few things about those fonts that bothered me initially, but with customisation they became my favourites.

        1. 7

          I’m at the sad and tired point in my life where I don’t want things where every nuance is customizable, I want things where the defaults are pretty good. :P

      5. 1

        What is your opinion on writing a 0 with a backslash, like in Atkinson Hyperlegible?

        1. 1

          Never seen it before in practice! I suppose I have no objective complaints. I might worry a little about dyslexic legibility, but no practical experience with it.

      6. 1

      Yeah, I agree. My eyes are pretty bad, and I struggle to read code at even 14pt sometimes. I pretty much exclusively use Source Code Pro as my main programming font because it has the most distinctly different letters and the dot-in-the-middle 0 and NO LIGATURES.

    15. 2

      Trying to eat healthier, to become fit again. How do you guys find inspiration to make your meals at home, with few ingredients?

      1. 2

        For me, I got quite a range of spices since they don’t go bad, and then you can get quite far with dishes: a curry is almost always just a few vegetables (and rice) away. Then there is the evergreen pasta; making tomato sauce from pulp is quite easy, or make pesto yourself, which stores reasonably well and can be frozen. Other than that, rice cooked in broth with eggs cracked into the boiling rice is super easy to make and requires five or fewer ingredients.

        In general you don’t have to cook every day and reheating things can save you a lot of time, too. For me it’s a nice process to do, you can try out new stuff and refine recipes once you figure out what you like.

    16. 3

      It’s pretty amazing that JPEG is still around, with no replacement on the horizon (at least for general use on the Web). It was standardized in 1992, and supported in versions of Mosaic around 1993! I remember hearing in the 2000s that JPEG 2000 would take over, but it never did. Apple is pushing for HEIC and Google for WebP, but neither of them has that much adoption.

      (GIF is still around too of course, but as an ironic retro thing.)

      1. 2

        I personally hope AVIF will take the place of JPEG as the popular image format, mostly because it is the only “patent-free” image format which supports wide-gamut HDR signaling. Besides AVIF, the only reliable way to display HDR content is to convert a still image to a HEIF/H.265 (patented) video with the correct metadata…

        The primary sources about HDR in AVIF listed in the page I just linked have a lot of interesting info about it.

      2. 1

        Uh, no. Google mildly pushed for WebP about a decade ago, and only in the last few years has it gained some real traction, with Apple finally adding support.

        Today, they’re pushing for JPEG XL, which actually has the capability of replacing them all (but the standards are paywalled [just like with parts of AVIF], and generally very complex… though nothing really supports all of JPEG either).

    17. 3

      Maybe just curl from ipfs.

      As long as the key is correct, you’ll always get the same data. About as good as a download link next to a hash.

      1. 3

        Even better: ipfs get from ipfs ;)

        But yes, curl from a public gateway is a close second

      2. 2

        Ooh, interesting. Hadn’t thought about IPFS as a solution.

        Use case is a bit different/nuanced though. I wanted something where I could insert some sort of verification string prior to running that would be trivial for the author to also include as a part of a release.

        Since IPFS doesn’t quite fit that description, it doesn’t feel like the right solution, but you did remind me that I should give it another look.

      3. 2

        An improvement but afaics that still means ultimately trusting an external entity (ipfs infrastructure) versus a locally calculated checksum.

        I haven’t looked very close at ipfs yet, it’s on my list as part of my archiving endeavours.

        1. 1

          There’s no need to trust the “ipfs infrastructure”, just the client implementation. Content keys are generated from secure hashing the content itself.

          1. 2

            If you use a client, sure; I presumed you meant curl…

      4. 2

      But then the script needs an IPFS client, which is also vulnerable to this. Unless you mean hitting a specific server, which can be manipulated as well (one of my friends actually did that, for a prank).

      5. 2

      Unless the download somehow fails in the middle? Take the following script:

      curl -O https://random.stuff/archive.tbz
      tar -C $HOME/.cache -xjf archive.tbz
      cp $HOME/.cache/archive/blah /usr/bin
      rm -rf $HOME/.cache/archive

      Pretty simple, and downloading it from ipfs would work, but if the server chokes and stops transmitting data at rm -rf $HOME, then the script will just clean up your home directory without warning. You got the script from the correct URL, though. So checking the hash (or better, a signature!) after the download is complete remains a better option.
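      A sketch of that hash check, done after the download completes and before anything is extracted (the filename and the published hash are placeholders):

```shell
# Compare a file's SHA-256 against an expected (published) value.
verify_sha256() {
    want=$1 file=$2
    got=$(sha256sum "$file" | awk '{print $1}')
    [ "$got" = "$want" ]
}

# usage sketch:
#   verify_sha256 "$published_hash" archive.tbz && tar -xjf archive.tbz
```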

    18. 62

      I support this. There’s a Nix post on the front page almost every day at this point and it seems to make sense.

      1. 2

        Hijacking the top comment to say THANK YOU SO MUCH for including the tag!

    19. 3

      In 2021, what is the best method to pass secrets to CLI apps and shell scripts?

      1. 3

        Everyone will fight for their preferred method, but I’ll mention some of the more common ones:

        1. Have your app go get its own secret via some command you run (say, via a config option) that returns the password via stdout. You just suck it in, and make generating the secret some other person’s problem.

        2. Have your app read it from a file or file descriptor (i.e. a pipe, an actual file, or stdin). The actual file could be on an in-memory filesystem so it doesn’t live past reboots.

        3. Pass it via an ENV variable; this almost ensures a leak vector, since /proc and loads of debugging tools make the ENV variables of processes quite easy to see.

        4. Hide it in some blessed “tool”, something like HashiCorp Vault, or gpg, or whatever suits your fancy, and only support said tool.

        I put them basically in my preferred order, but usually what I do is a mix of 1 and 2, i.e. by default I accept the password via stdin, but have an option to run some command to get it; the command is defined by a config option.
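        Option 1 can be sketched in a few lines (`printf s3cret` stands in for whatever secret-producing command the config names):

```shell
# The app takes a user-configured command and reads the secret from its stdout.
password_command='printf s3cret'   # would come from a config file/option
secret=$(sh -c "$password_command")
echo "got a secret ${#secret} characters long"
```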

      2. 1

        Not sure if files have always been the best way, but a very nice feature about them is you can control permissions.

        1. 3

          The issue with files is that you store unencrypted information on the filesystem, which is forbidden in a number of companies or under some regulations. We fetch secrets over the network, which provides auditing and easy secret updates, etc., but that doesn’t work well for CLI apps that are used for tooling.

          1. 1

            It’s not a perfect solution and obviously, depending on companies and regulations, it may not be a solution, either, but it’s usually considered a slightly better option to use pipes for transferring data locally. They are not backed by non-volatile storage and the range of trivial snooping options is more restricted.

            Edit: this works quite well for CLI apps that are used for tooling, with the caveat that it’s pretty easy to misuse (e.g. sooner or later someone will write plaintext passwords to a file and pipe that). That’s kind of inevitable, though, and still slightly better than handing over secrets via command-line arguments, as you can at least chown files on volatile storage, whereas CLI arguments are basically public to other users on most systems, and stay in the shell’s history file for a while…

          2. 1

            Nobody says they have to be unencrypted, you can encrypt files at rest.

            They don’t even have to be files at all, named pipes are one way to make a file-like access model work without ever having data on disk, you can connect your network fetch to that.

            You can even simplify from there and use shell process substitution to wire up a network command to a file interface.
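            The named-pipe variant mentioned above, as a POSIX sketch (`printf` stands in for the network fetch):

```shell
# A FIFO gives file-path access without the data ever hitting disk.
fifo=$(mktemp -u)            # generate a path only; nothing created yet
mkfifo -m 600 "$fifo"        # owner-only named pipe
printf 's3cret' > "$fifo" &  # producer: stand-in for a network fetch
secret=$(cat "$fifo")        # consumer: reads the "file"
rm -f "$fifo"
```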

      3. 1

        I think a pipe could work; I was going to sketch it out, but there’s already an example in the thread, with an alternative:

        op encode < login.json | op create item "Login" -

        Or skip this encode step:

        op create item "Login" ./login.json

        In twelve-factor apps, environment variables are used for credentials and configuration.

        1. 4

          This article details some reasons why environment variables may not be a good option for secrets.

          TL;DR: it’s easy to leak env var values to child processes, debugging/error/crash logs, and access is hard to track

        2. 1

          The login.json is stored in plain text, right? And if we tighten its permissions, the calling script would also need permission to read it?