1.  

    This is a game changer for me. According to the gron(1) manpage:

    The output of gron is valid JavaScript:
    
        $ gron testdata/two.json > tmp.js
        $ echo "console.log(json);" >> tmp.js
        $ nodejs tmp.js
        { contact: { email: 'mail@tomnomnom.com', twitter: '@TomNomNom' },
        github: 'https://github.com/tomnomnom/',
        likes: [ 'code', 'cheese', 'meat' ],
        name: 'Tom' }
    
    1.  

      I missed that. It almost certainly explains why things can round-trip so reliably.

      1.  

        This is very neat too, thanks for sharing!

      1.  

        I have nothing ill to say of this project, but if you’re looking for a condemnation of the OpenPGP ecosystem, look no further than the encrypt() function definition here.

        /**
         * Encrypts message text/data with public keys, passwords or both at once. At least either encryption keys or passwords
         *   must be specified. If signing keys are specified, those will be used to sign the message.
         * @param {Object} options
         * @param {Message} options.message - Message to be encrypted as created by {@link createMessage}
         * @param {PublicKey|PublicKey[]} [options.encryptionKeys] - Array of keys or single key, used to encrypt the message
         * @param {PrivateKey|PrivateKey[]} [options.signingKeys] - Private keys for signing. If omitted message will not be signed
         * @param {String|String[]} [options.passwords] - Array of passwords or a single password to encrypt the message
         * @param {Object} [options.sessionKey] - Session key in the form: `{ data:Uint8Array, algorithm:String }`
         * @param {Boolean} [options.armor=true] - Whether the return values should be ascii armored (true, the default) or binary (false)
         * @param {Signature} [options.signature] - A detached signature to add to the encrypted message
         * @param {Boolean} [options.wildcard=false] - Use a key ID of 0 instead of the public key IDs
         * @param {Array<module:type/keyid~KeyID>} [options.signingKeyIDs=latest-created valid signing (sub)keys] - Array of key IDs to use for signing. Each `signingKeyIDs[i]` corresponds to `signingKeys[i]`
         * @param {Array<module:type/keyid~KeyID>} [options.encryptionKeyIDs=latest-created valid encryption (sub)keys] - Array of key IDs to use for encryption. Each `encryptionKeyIDs[i]` corresponds to `encryptionKeys[i]`
         * @param {Date} [options.date=current date] - Override the creation date of the message signature
         * @param {Array<Object>} [options.signingUserIDs=primary user IDs] - Array of user IDs to sign with, one per key in `signingKeys`, e.g. `[{ name: 'Steve Sender', email: 'steve@openpgp.org' }]`
         * @param {Array<Object>} [options.encryptionUserIDs=primary user IDs] - Array of user IDs to encrypt for, one per key in `encryptionKeys`, e.g. `[{ name: 'Robert Receiver', email: 'robert@openpgp.org' }]`
         * @param {Object} [options.config] - Custom configuration settings to overwrite those in [config]{@link module:config}
         * @returns {Promise<MaybeStream<String>|MaybeStream<Uint8Array>>} Encrypted message (string if `armor` was true, the default; Uint8Array if `armor` was false).
         * @async
         * @static
         */
        export async function encrypt({ message, encryptionKeys, signingKeys, passwords, sessionKey, armor = true, signature = null, wildcard = false, signingKeyIDs = [], encryptionKeyIDs = [], date = new Date(), signingUserIDs = [], encryptionUserIDs = [], config, ...rest }) {
        

        So simple, even Johnny can encrypt. (Surely nothing could go wrong!)

        Contrast this with age.
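
        For instance, encrypting and decrypting a file with age's CLI looks roughly like this (a sketch from memory, not lifted from the project's docs, so double-check the flags against the README; filenames are just for illustration):

            $ age -p -o notes.txt.age notes.txt             # symmetric, passphrase-based
            $ age -r age1... -o notes.txt.age notes.txt     # to a recipient's public key
            $ age -d -o notes.txt notes.txt.age             # decrypt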

        1.  

          age is the obvious replacement for OpenPGP, but it doesn’t have a JavaScript implementation (that I’ve found at least). Possibly a better replacement than OpenPGP.js would be TweetNaCl.js by Dmitry Chestnykh.

          // assuming the tweetnacl and tweetnacl-util packages
          import { secretbox, randomBytes } from "tweetnacl";
          import { decodeUTF8, encodeBase64, decodeBase64 } from "tweetnacl-util";

          // a fresh random nonce for every message
          const newNonce = () => randomBytes(secretbox.nonceLength);

          export const encrypt = (json, key) => {
            const keyUint8Array = decodeBase64(key);

            const nonce = newNonce();
            const messageUint8 = decodeUTF8(JSON.stringify(json));
            const box = secretbox(messageUint8, nonce, keyUint8Array);

            // prepend the nonce so the receiver can recover it before opening the box
            const fullMessage = new Uint8Array(nonce.length + box.length);
            fullMessage.set(nonce);
            fullMessage.set(box, nonce.length);

            const base64FullMessage = encodeBase64(fullMessage);
            return base64FullMessage;
          };
          
          1.  

            At the risk of self-promotion, I did provide a JS alternative that’s better than TweetNaCl on two fronts:

            1. Type-safety (keys are distinct types rather than just Uint8Arrays).
            2. Supports additional authenticated data (e.g., for context-binding of ciphertexts).

            It also handles encoding for you.

            1.  

              Fair enough. However, to be a touch critical if I may, does Dhole Cryptography have an audit report? TweetNaCl.js was audited by Cure53 in 2017, and from the audit (also on the GitHub README):

              The overall outcome of this audit signals a particularly positive assessment for TweetNaCl-js, as the testing team was unable to find any security problems in the library. It has to be noted that this is an exceptionally rare result of a source code audit for any project and must be seen as a true testament to a development proceeding with security at its core.

              To reiterate, the TweetNaCl-js project, the source code was found to be bug-free at this point.

              […]

              In sum, the testing team is happy to recommend the TweetNaCl-js project as likely one of the safer and more secure cryptographic tools among its competition.

              Compare that to the audit of OpenPGP.js, which appears to still have outstanding issues (or else the wiki is out of date).

              1.  

                Dhole Cryptography wraps sodium-plus, which defers to libsodium.js (or sodium-native if you install it alongside). Libsodium.js is compiled from libsodium, while sodium-native is an FFI wrapper for libsodium. Libsodium has been audited.

            2.  

              age is the obvious replacement for OpenPGP

              Actually, it would be more technically correct to say that age is a replacement for a subset of OpenPGP.js. For example, as far as I can see from the README, age doesn't sign encrypted files, so you can't be sure who created an encrypted file. Age cannot be used to sign software binaries (for example). Age's specification is a Google Doc, whereas OpenPGP.js is based on an IETF standard.

              1.  

                True. age only supports encryption and decryption. It does not do any signing. For that, you should use minisign or signify.
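
                For example, signing and verifying a file with minisign is roughly this (a sketch from memory; check the minisign docs for the exact flags, and the filename is just an example):

                    $ minisign -G                                     # generate a key pair (once)
                    $ minisign -Sm release.tar.gz                     # sign; writes release.tar.gz.minisig
                    $ minisign -Vm release.tar.gz -p minisign.pub     # verify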

          1. 17

            As a security researcher and cryptography hobbyist, I appreciate that Neovim took out Vim's unauthenticated and broken Blowfish encryption support. Blowfish is a 64-bit block cipher, which makes it vulnerable to Sweet32. I brought this up with the Vim project, and unfortunately heated emotions took over, and the issue was closed. Further, Vim's password hashing function for key derivation is weak. That issue remains open.

            Regardless, Neovim does not support :X to encrypt data. As such, if you want encrypted text in Neovim, you'll either need to use the gnupg.vim plugin or write something that hooks into an external tool, like OpenSSL.
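
            For example, the external-tool route can be as simple as round-tripping the file through openssl enc (a rough, unvetted sketch; cipher choice and filenames are just for illustration, and note this is still unauthenticated encryption):

                $ openssl enc -aes-256-cbc -pbkdf2 -salt -in notes.txt -out notes.txt.enc
                $ openssl enc -d -aes-256-cbc -pbkdf2 -salt -in notes.txt.enc -out notes.txt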

            Edit: spelling

            1. 3

              Huh, that’s interesting. I didn’t know that was such a contentious ‘issue’ with the vim folks..

              1. 4

                That was a wild read. Reminds me of the gendered pronouns issue

                1. 7

                  It’s not a contentious issue with Vim users per se. It’s a contentious issue with cryptographers and Bram Moolenaar. But yeah, it’s a bit silly TBH.

                  1. 11

                    I use Vim’s blowfish feature, just for some files that aren’t super-sensitive but that I don’t want showing up in a grep or easily readable by anyone (they would have to get access to my computer first, which is already a significant barrier).

                    I dislike using gpg with a passion, and integrating external tools in Vim and doing it well is actually harder than it might seem (though not impossible). Clearly it can all be improved, e.g. by using libsodium or whatnot; actually, it seems there is even a TODO entry for that.

                    I’ve seen several discussions about this, and the problem is always the same: people (often actually the same people) will start being caustic, start ranting about blowfish, “brokenness”, “laziness”, etc. and at this point the entire conversation/issue gets sidetracked and you’re no longer talking about how to improve Vim’s crypt feature, the maintainers get exasperated (rightfully so, IMO), and nothing gets done because, basically, “fuck this guy”. This is what happened to your issue, which was fine, but got hijacked by this other guy who used it to rant.

                    From a UX point of view, it’s by far the most convenient way to encrypt a text file, and this has a lot of security value too. A number of crypto people tend to hyper-focus on algorithms, and those are not unimportant of course, but they’re just one part of the entire cryptosystem.

                    It’s not just Bram who opposes outright removing this feature; other Vim maintainers such as Christian do too.

                    tl;dr: someone just needs to make a patch to add libsodium integration instead of ranting about blowfish and it would all be better.

                    1. 5

                      I just use Mozilla’s sops. It supports a wide variety of key stores and uses $EDITOR.
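
                      The day-to-day flow is roughly this (a sketch; it assumes your keys are already set up in .sops.yaml, and the filenames are just examples):

                          $ sops -e secrets.yaml > secrets.enc.yaml    # encrypt to stdout
                          $ sops secrets.enc.yaml                      # decrypt into $EDITOR, re-encrypt on save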

                      1. 3

                        I dislike using gpg with a passion

                        And for good reason. PGP is littered with problems

                        by using libsodium or whatnot; actually, it seems there is even a TODO entry for that.

                        This is good news. I wasn’t aware this was on the TODO. Hopefully this comes sooner than later.

                        1.  

                          FYI: https://github.com/vim/vim/pull/8394

                          Looks like help/advice/feedback is requested, if you’ve got the time and inclination :-)

                          1.  

                            This is great! Thanks for doing this!

                            I commented on your PR about not replacing the sha256_key() KDF with crypto_pwhash_*. I don’t know if that’s something you plan on addressing or not, but because you reference issue #639, I thought I would mention it.

                            1.  

                              It’s not my PR; I just happened to notice it and thought you might be interested 😅

                              I did actually work on this a little bit a few weeks ago, but I’m a crappy C programmer and it was more complex than I thought, so I quickly gave up.

                              1.  

                                Ah, fair. Thanks for bringing it to my attention regardless. I’ll be following it.

              1. 2

                #facepalm

                A deliberate netsplit between the new ircd and the old ircd. Then an apparent netsplit within the old ircd cluster. Watching this whole thing implode from the sidelines is really quite remarkable.

                1. 3

                  I’d believe it if somebody told me they convinced them to change the server software without migrating anything on purpose, because this is the biggest self-sabotage they could commit after all that happened. Why would you stay on Freenode when all your channels and accounts (and the accounts of friends) are gone?

                  1. 4

                    Why does every vulnerability come with its own catchy acronym and slick webpage now?

                    1. 8

                      Generally easier to remember, identify, and communicate than CVE IDs.

                      Are we vulnerable to ALPACA?

                      versus

                      Are we vulnerable to CVE-2021-31971?

                      1. 6

                        At least they’re getting more adorable over time!

                        1. 6

                          Why does every vuln with a name get at least one person asking this exact question?

                          1. 3

                            Why does every vuln with a name get at least one person asking the question of why every vuln with a name gets at least one person asking why every vuln needs a name now?

                          2. 3

                            A Bug With A Logo? It worked the first time…

                            In fact, thinking back, this trick still worked for POODLE, in the sense of management-level people caring enough to ask about it and organising statements that “we’re now protected against POODLE” or “we’re unaffected by POODLE” etc. But no branded vuln since has had any real boost from its marketing, as far as I recall – nobody cares, just the way everyone discussing this aspect of Heartbleed could foresee right from the start. The marketing has just become this motion you go through because it’s what everyone does.

                            1. 2

                              Crypto bros are a very weird crowd worried a lot about publicity and street cred.

                            1. 1

                              I’ve been running Firefox since 0.8, but with that said, it has some usability issues (on Linux at least, such as poor stability with screen sharing under Wayland, curious UI element changes such as breaking the tabbed interface paradigm, etc.), sandbox security concerns, and noticeable performance problems (here’s an old HTML5 speed test for IE 11).

                              1. 5

                                Bullet journalling has worked really well for me. Coupled with a personal wiki, it’s very powerful. Has anyone else had serious success?

                                1. 6

                                  Yes, it’s great. I’ve been using it for 2.5 years in a manner similar to the article: not artistic, just minimal and functional.

                                  1. 2

                                    Same here, been using Bullet journals for home and work for about three years, and really like it.

                                    I previously had separate written books for work and home, but since Covid I’m now using a paper journal (a small Leuchtturm1917 pocket book) for personal activities, and Emacs journal mode and org mode for work. Mostly because I don’t have the desk space at home to lay out my old work journal, but I’m enjoying being able to quickly take notes in meetings, work with tables, embed links, and keep track of citations from co-workers.

                                    1. 2

                                      Googling “journal mode” yielded a lot of org-journal links; is that what was meant? I’m curious.

                                      1. 2

                                        Ah, correct, org-journal is the proper name. I had forgotten that it was part of org.

                                        1. 1

                                          I’m using org-journal for this purpose and it works really well for me. I use it with a weekly file rather than separate daily files (which it seems to be designed for) and have each day as a top-level org heading.

                                      2. 1

                                        What do you use for a personal wiki, and how do you take advantage of it with bullet journaling?

                                        1. 2

                                          I use Roam, mostly. Also, I keep todos and logs in the BuJo but information and such in the Wiki. Does that make any sense?

                                          1. 1

                                            Yes. Thanks.

                                      1. 3

                                        I think the clever thing here is that it starts at the abbreviation and then builds a phrase. This makes it very easy to reason about the entropy while still using arbitrary logic to come up with the phrase to be memorized.

                                        Of course when you generate 20 and let the user pick one you lose some entropy.

                                        1. 2

                                          Of course when you generate 20 and let the user pick one you lose some entropy.

                                          Good point! This could be fixed by presenting different words for the same set of prefixes, rather than presenting different sets of prefixes. For example, if one of the randomly chosen prefixes was hin, one presentation could display Hinted and another Hindu.

                                          Of course, that won’t stop the user from just hitting refresh :)

                                          1. 1

                                            Of course when you generate 20 and let the user pick one you lose some entropy.

                                            Not of any significant value though. If the default security margin is 50 bits (as per the web implementation linked), then that’s 2^50 possible outputs. If 32 passwords are being generated per browser refresh, then that’s ~0.000000000003% of the full key space.

                                            Go ahead and refresh until you find something you like. You would have to refresh your browser ~17 trillion times before you reduced the keyspace by 1 bit, or 50%.
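
                                             The arithmetic is easy to sanity-check with bc(1):

                                                 $ echo '2^50' | bc
                                                 1125899906842624
                                                 $ echo '2^49 / 32' | bc
                                                 17592186044416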

                                            1. 1

                                              Are you sure that math is right? I think you have to take into account that the user isn’t rejecting just those 31 phrases. They are using a rule to reject a huge portion of the search space. For example, if you use an over-simplified rejection rule such as the user always picking the lexicographically first password out of the 32 choices, the distribution is heavily skewed, so guessing the password probably takes a fraction of the time on average.

                                              1. 1

                                                We probably need to clarify what we’re discussing, so there isn’t miscommunication or confusion. First, let me back up my math, then I’ll see if I understand your concern.

                                                The password generator generates 3-letter prefixes based on a random number from 0 through 1023, which gives each prefix 10 bits of security. It then uses those prefixes to pick words from a word list, and builds a mnemonic based on a massively large bigram DB.

                                                If five prefixes are generated uniformly at random, then the resulting password indeed has 50 bits of security. The web interface allows you to change how many prefixes you want, resulting in the same number of words for your mnemonic. At every refresh, 32 passwords are generated.

                                                So, that’s 32 passwords out of 2^50 possible = 32/1,125,899,906,842,624 = 0.000000000000028421709430404007434844970703125. 2^49 is half the size of 2^50. 2^49 possible passwords = 562,949,953,421,312. If your browser is generating 32 passwords per refresh, then you need to refresh your browser 562,949,953,421,312/32 = 17,592,186,044,416 times, or about 17.5 trillion times.

                                                At that point, odds switch to your favor that you will generate a duplicate password that has already been seen.

                                                They are using a rule to reject a huge portion of the search space. For example, if you use an over-simplified rejection rule such as the user always picking the lexicographically first password out of the 32 choices, the distribution is heavily skewed, so guessing the password probably takes a fraction of the time on average.

                                                If I understand this correctly, you’re assuming the user would mentally pick a fixed-point prefix such as “The first prefix must be see”. If so, then yes, they lose a full 10 bits of security. However, they’ll have a 1-in-1024 chance that see is the first prefix, which means, on average, 1,024 browser refreshes before they find it. If they pick two fixed-point prefixes, such as the first being see and the second being abs, then they have a 1-in-1024^2, or 1-in-1,048,576, chance of finding it. So, on average, about 1 million browser refreshes.

                                                So while they have reduced their security margin greatly, they have also increased their workload greatly, and I’m not seeing how that would be worth it. Of course, they could automate it with a script outside of the browser. Finding one prefix in 1,024 isn’t hard, nor is one in a million. Going beyond that might force them to wait it out, though.

                                                Is this what you’re talking about?

                                                Edit: spelling/grammar

                                          1. 2

                                            I honestly don’t like the floating tab redesign. It looks and behaves more like a “button” to me than a “tab”. Now I feel like Mozilla broke the tabbed paradigm. Maybe that’s okay. Why do browsers have to have “tabs” and why not “buttons”? I’m not jiving with it though.

                                            1. 5

                                              I’ll be keeping an eye on the users, channels, and reported server stats to see how things change over the upcoming months and years:

                                              There’s also the “top 10” at https://netsplit.de/networks/top10.php.

                                              1. 1

                                                Another site tracking Freenode vs. Libera.Chat servers, channels, and users:

                                                https://isfreenodedeadyet.com/

                                              1. 22

                                                HardenedBSD’s channel was taken over by Freenode last night: https://twitter.com/HardenedBSD/status/1397516575561879557

                                                What a wonderful thing to wake up to.

                                                1. 3

                                                  Sorry Shawn. That sucks.

                                                1. 1

                                                  Just curious, but why?

                                                  1. 1

                                                    Vigenère can thwart frequency analysis with longer keys. The site demonstrates frequency analysis with the key “LEMON” and provides a widget in the post to see how frequency analysis is affected as you enter different keys. The more unique letters in the key (approaching a pangram), the less effective frequency analysis becomes (try “THE FIVE BOXING WIZARDS JUMP QUICKLY”).

                                                    If the key is the length of the message, then Kasiski examination is ineffective. If the key is also information-theoretically secure and ephemeral, then the Vigenère cipher becomes the one-time pad.

                                                    Edit: additional context, formatting

                                                    1. 2

                                                      This is going to take some time to digest.

                                                      1. 4

                                                        All you need to know is :q!

                                                        1. 7

                                                          If a joke is funny once it’s funny the next 900 times

                                                          1. 1

                                                            Indeed, I heard this line many times before!

                                                      1. 4

                                                        This is a really weird article, almost as if the author is coming from a place of insecurity. I’m curious what prompted the article to be written. What was happening at the time that had enough people deciding whether or not they should run Linux to prompt this?

                                                        1. 4

                                                          To me this looks similar to what happens with other movements that have a very loud minority with a big sense of moral grandstanding (like veganism). It prompts a lot of defensiveness on the other side out of the fear of being judged, and then you get terrible articles that are either “i eat meat but i’m still a good person” or “those freaky vegans are also evil, actually”.

                                                          1. 4

                                                            I’ve been told more than once that I don’t care about quality, customer satisfaction, or doing things correctly because I used a language other than Rust…every group has its zealots.

                                                        1. 4

                                                          You can get rid of head -n1 at the end of the pipe by using seq 1 6 | shuf -n1.

                                                          1. 5

                                                            I know that’s not the point of the article, but my “Unix” doesn’t have seq or shuf. So I propose jot -r 1 1 6.

                                                            1. 6

                                                              I’ve found a lot of “Unix philosophy” arguments online rely heavily on GNU tools, which is sort of ironic, given what the acronym “GNU” actually stands for.

                                                              1. 6

                                                                The “Unix” in GNU isn’t the ideal of an operating system like Unix (everything’s a file, text-based message passing, built on C, etc.); it’s the “Unix” of proprietary, locked-in commercial Unix versions. You know, the situation that forced the creation of the lowest-common-denominator POSIX standard. The ones without a working free compiler. The ones which only shipped with ed.

                                                                1. 5

                                                                  BSD shipped with vi and full source code before the GNU project existed, and by the 1980s there were already several flavors of Unix. But AT&T retained ownership over the name Unix, which is never something that should have happened - it was always used as a genericized trademark, and led to travesties like “*nix”.

                                                                  RMS is a Lisp (and recursive acronyms) guy who never seemed to care much about Unix beyond viewing it as a useful vehicle and a portable-enough OS to be viable into the future (whereas the Lisp Machine ecosystem died). Going with Unix also allowed GNU to slowly replace tools of existing Unix systems one by one, to prove that their system worked. GCC was in many cases technically superior to other compilers available at the time, so it replaced the original C compiler in BSD.

                                                              2. 2

                                                                I found jot to be more intuitive than seq and I miss it. Not enough to move everything over to *BSD though.

                                                                1. 1

                                                                  I’m pretty sure it’s available (installed by default) on Linux systems (depending on distribution).

                                                                  1. 1

                                                                    On my VPS (Ubuntu 20.04 LTS)

                                                                    $ jot
                                                                    Command 'jot' not found, but can be installed with:
                                                                    sudo apt install athena-jot
                                                                    

                                                                    On my RPi 4 (Raspbian GNU/Linux 10 (buster))

                                                                    $ jot
                                                                    -bash: jot: command not found
                                                                    

                                                                    I first learned about it from the book Unix Power Tools, at which time I was running a couple of BSDs, so I kind of got used to it then…

                                                              3. 3

                                                                I don’t think that helps with the demonstration of programs that each do “one thing well”.

                                                                1. 3

                                                                  However, it still helps with faster execution, as that is one less program in the pipeline. I don’t see a problem with shuf having the ability to output a certain number of lines, as that still feels like it pertains to the subject matter of the program, and it is quite useful. From what I’ve seen, it’s probably the most used option for shuf, too.

                                                                  1. 2

                                                                    Sure, in practice I wouldn’t pipe cat into grep or whatever. Whatever the purists say, flags are useful. But in a demonstration of how the pipeline works, I think it makes more sense to use one tool to shuffle and another tool to snip the output than to have the shuffling tool snip; that’s all I meant.

                                                                    In practice, I probably wouldn’t be simulating a dice roll in the shell, but if I was, my aim would be to get what I want as fast as possible. To that end, I’d probably use tail instead of head, as that’s what I use most often if I want to see part of a file. I’d probably use sort -R instead of shuf, because I use sort more often. That hasn’t dropped any of the parts of the pipeline, but it also doesn’t represent the “one thing well” spiel because randomizing is kind of the opposite of sorting.
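
                                                                    Concretely, that variant would be something like this (an untested sketch; sort -R is a GNU extension):

                                                                        $ seq 1 6 | sort -R | tail -n1    # prints one of 1..6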

                                                                    I guess that’s what I was getting at :)

                                                              1. 11

                                                                From A Scheme Shell, by Olin Shivers:

                                                                Shell programming terrifies me. There is something about writing a simple shell script that is just much, much more unpleasant than writing a simple C program, or a simple CommonLisp program, or a simple Mips assembler program. Is it trying to remember what the rules are for all the different quotes? Is it having to look up the multi-phased interaction between filename expansion, shell variables, quotation, backslashes and alias expansion? Maybe it’s having to subsequently look up which of the twenty or thirty flags I need for my grep, sed, and awk invocations. Maybe it just gets on my nerves that I have to run two complete programs simply to count the number of files in a directory (ls | wc -l), which seems like several orders of magnitude more cycles than was really needed.

                                                                I liked the example in the article, but we can also use it to show the shortcomings of Unix philosophy. Suppose we wanted to roll a 10-million-sided die (i.e., pick an integer between 1 and 10,000,000). Here’s what that looks like in terms of computing efficiency.

                                                                $ time ( seq 1 10000000 | shuf | head -n1 )
                                                                3574362
                                                                real	0m6.966s
                                                                user	0m6.423s
                                                                sys	0m1.277s
                                                                

                                                                I ran that on my Raspberry Pi because it was a good demonstration. It’s faster on x86 systems: 2.5 seconds on a 2012-era AMD CPU, and 1.4 seconds on an Epyc Rome system from 2020.

                                                                If we wanted to roll a 2**64-sided die, forget about it.

                                                                In practice, what you do in Unix philosophy is write a C program called randrange or similar, adding a new verb to your language.

                                                                Another point: one thing that makes Unix philosophy attractive is the compositionality. Compositionality is what some of us love about Forth (I’m an ex-Forther). The difficulty with both is “noise” that obscures solutions to problems. In Forth, the noise is stack juggling. In Unix, it is all the text tool invocations that massage streams of data.

                                                                1. 21

                                                                  On the contrary, that one is ultra fast:

                                                                  % time env LC_ALL=C tr -c -d '0-9' < /dev/random | head -c 7
                                                                  5884526
                                                                  env LC_ALL=C tr -c -d '0-9' < /dev/random  0.01s user 0.01s system 95% cpu 0.016 total
                                                                  head -c 7  0.00s user 0.00s system 32% cpu 0.009 total
                                                                  
                                                                  1. 8

                                                                    In GNU land:

                                                                     time shuf -i 0-10000000 -n1
                                                                    3039431
                                                                    
                                                                    real	0m0.005s
                                                                    user	0m0.002s
                                                                    sys	0m0.002s
                                                                    

                                                                    1 command, and it’s super fast.

                                                                    -i is the range to select from and -n is the number of items to return.

                                                                    1. 3

                                                                      shuf(1) can also be installed on FreeBSD from packages.

                                                                      On my 10 years old system:

                                                                      % time shuf -i 0-10000000 -n1
                                                                      1996758
                                                                      shuf -i 0-10000000 -n1  1.02s user 0.02s system 99% cpu 1.041 total
                                                                      
                                                                      % pkg which $( which shuf )
                                                                      /usr/local/bin/shuf was installed by package shuf-3.0
                                                                      
                                                                      1. 2

                                                                        Awesome! I didn’t have a BSD machine readily available, and I don’t remember the shuf details, so I didn’t want to claim it would work there. The shuf on the system I used is from GNU coreutils 8.32.

                                                                        It seems like the BSD shuf, at least in version 3.0, actually generates the full 0-10000000 range, since it’s taking 1s and 99% CPU.

                                                                        The GNU version seems to skip that step, since it takes basically no time. I wonder if newer versions of BSD’s shuf also take that shortcut.

                                                                        1. 3

                                                                          Seems that it is a little more complicated :)

                                                                          The shuf(1) I used in the above example is from the sysutils/shuf package, which is:

                                                                          “It is an ISC licensed reimplementation of the shuf(1) utility from GNU coreutils.”

                                                                          I also have sysutils/coreutils installed, and gshuf(1) from GNU coreutils [1] is a lot faster (like your example):

                                                                          % time gshuf -i 0-10000000 -n1
                                                                          8474958
                                                                          gshuf -i 0-10000000 -n1  0.00s user 0.00s system 63% cpu 0.005 total
                                                                          

                                                                          [1] The GNU coreutils on FreeBSD have an additional ‘g’ letter in front of them to avoid conflicts - like gshuf(1)/gls(1)/gtr(1)/gsleep(1)/… etc.

                                                                          Hope that helps :)

                                                                          1. 1

                                                                            My question was if newer versions of sysutils/shuf also had the ability to skip creating the full range if only 1 output was requested. At least that’s my assumption as to why GNU shuf is 1s faster than the copy from sysutils/shuf.

                                                                            Otherwise I agree with everything you said, obviously.

                                                                            1. 1

                                                                              As I see here - https://github.com/ibara/shuf - the sysutils/shuf port is at the current 3.0 version.

                                                                              There is no newer 3.1 or CURRENT version of this ISC licensed shuf(1).

                                                                              1. 1

                                                                                Sorry, I apologize. I assumed there was likely a new version since you mentioned:

                                                                                On my 10 years old system:

                                                                                way back up there somewhere.

                                                                                Have an awesome day!

                                                                                1. 1

                                                                                  The 10 years old system referred to my oldschool ThinkPad W520 hardware :)

                                                                                  The system is ‘brand new’ FreeBSD 13.0-RELEASE :)

                                                                                  You too, thanks.

                                                                    2. 2

                                                                      I do find this interesting but at the same time I think it’s missing the point. I’m sure this comment was not intended to be and actually is not one of those clever “yes but what’s performance like” throwaway comments at meetings, but I wanted to pick up on it anyway.

                                                                      One thing that the spectrum of languages has taught me is that there are different jobs and different tools for those jobs. The point that I saw from the example was composability and STDIO pipelining, with an example simple enough not to get in the way of that for newcomers.

                                                                      You say “in practice”, directly after having just wondered about a 10-million sided die. Such an object, at least in my experience, is not something you come across in practice. As an ex D&D gamer, anything more than 20 sided is extreme for me and I suspect for most people.

                                                                      1. 2

                                                                        One thing that the spectrum of languages has taught me is that there are different jobs and different tools for those jobs.

                                                                        It’s true.

                                                                        The point that I saw from the example was composability and STDIO pipelining, with an example simple enough not to get in the way of that for newcomers.

                                                                        Oh no, I didn’t miss the point at all. I wasn’t criticizing the example; I think it is a good one that demonstrates Unix philosophy quite well. I was making a counter-point, that with Unix philosophy, sometimes the specific solution does not generalize.

                                                                        Another point worth making is that a solution involving pipes and text isn’t necessarily the correct one. For instance, consider the classic pipeline to count files in a directory: ls |wc -l. I use that all the time. The only reason it almost always gives correct answers is that by custom, nobody puts newlines in filenames, even though it is totally legal.

                                                                        mkdir /tmp/emptydir
                                                                        cd /tmp/emptydir
                                                                        fname="$(printf "foo\nbar")"
                                                                        touch "${fname}"
                                                                        ls |wc -l
                                                                        

                                                                        That gives the answer 2. So much for counting files with wc.
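
                                                                        For what it’s worth, a newline-proof count is possible, but you have to step outside line-oriented text. A sketch with GNU find, continuing in the same directory:

                                                                            $ find . -mindepth 1 -maxdepth 1 -printf x | wc -c
                                                                            1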

                                                                        You say “in practice”, directly after having just wondered about a 10-million sided die. Such an object, at least in my experience, is not something you come across in practice.

                                                                        It was a whimsical use of metaphor, though maybe God plays D&D with 2**64-sided dice? The problem of picking a random integer in the range 1 to X comes up frequently enough that Python has a function in its standard library for it: random.randrange.

                                                                      2. 2

                                                                        If we wanted to roll a 2**64-sided die, forget about it.

                                                                        $ time env LC_ALL=C tr -cd a-f0-9 < /dev/urandom | head -c 16
                                                                        a7bf57051bd94786
                                                                        env LC_ALL=C tr -cd a-f0-9 < /dev/urandom  0.00s user 0.00s system 68% cpu 0.012 total
                                                                        head -c 16  0.00s user 0.00s system 34% cpu 0.009 total
                                                                        
                                                                      1. 1

                                                                        Flagged, consumer product advertising.

                                                                        1. 3

                                                                            I’m not getting paid or rewarded by anyone to share it. It came across my Mastodon feed, I read up on it, thought it was neat, and checked to see if it was shared here. Seeing that it wasn’t, and knowing that it’s open hardware and software, highly customizable, and a mechanical keyboard, I thought it would be of interest here.

                                                                          But I’m not getting any reward from sharing it, aside from magic Lobste.rs points, and that’s the entirety of it.

                                                                          1. 4

                                                                              If you were writing a review of one (e.g., you purchased it and used it and had a writeup of your feels about it), I think folks would find it a bit more palatable and interesting than just a direct link to a product.

                                                                            1. 1

                                                                              Fair enough.

                                                                              1. 2

                                                                                It’s certainly not a big deal though, by any means.
                                                                                Have a good weekend!