1.  

    I was a little surprised that a local system setting of “email address” for dolt the tool was overwritten (er … without history or diffs!) by the email address used by DoltHub when that account was created (and since I auth’d with GitHub, that became a private address not used for anything else). Easy enough to fix via shell history, but the irony amuses me.

    They’re pretty up-front about the billing model for orgs, which is at least a nice change and avoids the bait-and-switch that has had people upset with, e.g., Docker Hub. It’s not the cheapest around, but I guess if you’re actually using this, you’ll want it:

    Sign up for a DoltHub Pro subscription for the ability to create private repositories for free, up to 1 GB.

    After the first GB, you are billed $50/month, which includes the first 100 GB of storage. Each additional 10 GB/month is $1.

    But since SSH is not mentioned as a remote when they list the available ones in the README, there aren’t many options for sensible tinkerer-grade sharing. Anyone played with it enough to know what the sanest local-network hosting is? I’d try Minio, but the AWS support in dolt looks like it requires Dynamo too (presumably as a lock service).

    1. 2

      Makes me wonder why no one ever talks about rotating SSH host keys; most people see them as something that should never change - hence I can’t even remember GitHub or GitLab ever updating their keys…

      1. 1

        Because unless you have a way to manage deployment of system-wide known-hosts files, or are using an SSH certificate authority for host keys, or use DNSSEC-signed DNS with validating resolvers on every client and clients configured to trust DNS over the local TOFU cache - unless you have one of those, the TOFU store (~/.ssh/known_hosts by default) means that rotating host keys causes pain.
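
        As a concrete illustration (standard OpenSSH commands; the host name is just an example): after a host rotates its key, every client with the old entry in their TOFU store has to clean up by hand, roughly:

        ssh-keygen -R git.example.com   # drop the stale entry from ~/.ssh/known_hosts
        ssh git.example.com             # back to TOFU: prompts to accept the new key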

        This is probably why GitHub is still on RSA+DSA, with no Ed25519 host-keys: the act of adding a new keytype will cause real-world breakage for many clients, many of whom won’t be aware that they’re using SSH under the hood. They’re accumulating technical debt by being unable to change and not working with their client community to manage a process to update with minimal disruption.

        1. 1

          Adding an Ed25519 key wouldn’t be breaking, though; only dropping an old key would be.

          And actual rotation is a bit easier since UpdateHostKeys was introduced, but you still need to keep the old key around long enough for a sufficient number of clients to have connected and updated. What counts as “sufficient” for any given use case is harder to determine, though.
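
          For the curious, a minimal sketch of opting in on the client side (UpdateHostKeys is an ssh_config option; whether you want “yes” or “ask” depends on how much you trust the hosts involved):

          # ~/.ssh/config
          Host *
              UpdateHostKeys ask   # learn additional/rotated host keys advertised by hosts you already trust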

          1. 1

            Different versions of different SSH clients handle this with varying levels of grace.

            In some versions of OpenSSH, when a server added a new key of a more-preferred type, the client would immediately try to verify that host key, ignoring the ones it already knew, and hard-fail because the new key wasn’t known even though host keys of other types were.

            Now look at how long some OS distributions linger for, with real-world users using ancient OpenSSH with backported features.

            That’s the problem which services like GitHub have to consider: the end-users who are best able to understand the issue and adapt with help are the least likely to be stuck using ancient OS releases.

      1. 6

        A good start for people using static keys (not SSH certs) is to use keys per source host, and to keep a logbook of all the places that need to be updated when the key changes. This will tell you … a lot.

        Also, understand your threat model: are you using SSH agent forwarding? By default? To boxes out on the Internet? I see a lot of people who forward agents to bastion hosts and then ssh from the bastion onwards, meaning that a compromise of the bastion (or malicious admin there) can connect onwards as you while you’re connected, typically without prompting you (confirm-on-use is available for SSH but anyone who does bulk connections (ansible etc) won’t enable that). Just using ProxyJump in newer OpenSSH and NEVER forwarding the agent unless you absolutely have to will win here.
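
        A minimal sketch of the ProxyJump approach (host names here are placeholders): the connection to the target is tunnelled through the bastion, but you authenticate end-to-end, so nothing has to be forwarded to the bastion:

        ssh -J bastion.example.com target.internal.example.com

        # or, persistently, in ~/.ssh/config:
        Host *.internal.example.com
            ProxyJump bastion.example.com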

        Modern OpenSSH has Ed25519 on yubikeys via the sk-ssh-ed25519@openssh.com type; figuring out high-value targets vs regular usage and only allowing touch-key-to-continue usage can work great (up until aforementioned parallel usage via ansible).
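
        A sketch of generating such a key (OpenSSH 8.2+ with a FIDO2-capable token plugged in; the filename and comment are just examples):

        ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk -C "high-value hosts"
        # the file on disk is only a handle: each signature needs the token present (and touched)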

        If you work from a personal device, make sure you have different SSH keys for work and personal use, because any key you use for work might belong to your employer and the private key might be disclosable upon demand. Rather than worry about if this is valid or not, or what the likelihood is, just use a different key for work vs personal. You can typically have up to 6 keys tried in a connection before it fails. This way, if your employer ever does demand this, you won’t give up access to personal systems.
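
        One way to keep that split clean (paths and host patterns are illustrative) is to pin which key gets offered where, so the work key is never even presented to personal hosts and vice versa:

        # ~/.ssh/config
        Host *.corp.example.com
            IdentityFile ~/.ssh/id_ed25519_work
            IdentitiesOnly yes    # offer only this key, not everything the agent holds
        Host personal.example.net
            IdentityFile ~/.ssh/id_ed25519_personal
            IdentitiesOnly yes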

        Consider what will happen if there ever is a crypto break and strongly consider using two different algorithms for your current set of keys, and treat a bundle of keys from the same host as equivalent. That way, if a crypto break means folks have to mass-deploy a PubkeyAcceptedAlgorithms / PubkeyAcceptedKeyTypes sshd config change, you won’t get locked out. Don’t demand this of everyone, it will just confuse many, but as long as enough key (!) employees do this, you have business continuity.
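
        In practice that just means generating and deploying a small bundle per source host, something like (filenames and comments are illustrative):

        ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -C "alice@laptop"
        ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -C "alice@laptop"
        # install both public halves everywhere that matters; if one algorithm ever has to be
        # disabled in a hurry, the other keeps you logged in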

        Consider separate keys for Prod vs Code. Sooner or later, you’ll find that some kind of prod access does require forwarding an agent somewhere less than ideal. A compromise should not grant access to all the crown jewels of your company. A broken VCS server with people using agent-forwarding should not grant access to prod. Note that this also then means separate keyrings to actually have protection, since any key in the agent is available for signing operations for remote hosts. I’m using an overly complicated setup via an ssh-role shim script (and ssh shim wrapper) to support parallel agents started via a systemd template. So I have xyz-code and xyz-prod agents for an employer xyz. It’s not perfect but if you have admin access to “a lot” then it’s worth taking care to limit blast radius of an event.
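
        The simplest version of the parallel-agent idea, without the systemd template or shim scripts (the socket paths and key names below are made up), is just one agent per role, selected via SSH_AUTH_SOCK:

        ssh-agent -a "$HOME/.ssh/agent-xyz-code.sock" > /dev/null
        ssh-agent -a "$HOME/.ssh/agent-xyz-prod.sock" > /dev/null
        SSH_AUTH_SOCK="$HOME/.ssh/agent-xyz-code.sock" ssh-add ~/.ssh/xyz-code
        SSH_AUTH_SOCK="$HOME/.ssh/agent-xyz-prod.sock" ssh-add ~/.ssh/xyz-prod
        # then pick the keyring per command:
        SSH_AUTH_SOCK="$HOME/.ssh/agent-xyz-prod.sock" ssh some-prod-host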

        And then you get into SSH Certificate Authorities, which are orthogonal to every issue above except the work vs personal split.

        1. 1

          Also, understand your threat model: are you using SSH agent forwarding? By default? To boxes out on the Internet? I see a lot of people who forward agents to bastion hosts and then ssh from the bastion onwards, meaning that a compromise of the bastion (or malicious admin there) can connect onwards as you while you’re connected,

          I’ve always assumed that a malicious admin on the bastion could extract and reconstruct keys from the forwarded agent - not merely make use of them to connect onwards while the forwarding is active.

          I’m not sure how wise it is to trust keys, even with passphrases, on machines with an untrusted admin. For example, I use passwords rather than keys to log on to my home system from work, on the assumption that it’d be harder to exploit. The admins at work, being people who actually know me, are probably especially untrustworthy because they might have reason to want to get onto my systems. If I used an SSH key they would at least have easy access to the encrypted key file.

          1. 4

            I’ve always assumed that a malicious admin on the bastion could extract and reconstruct keys from the forwarded agent - not merely make use of them to connect onwards while the forwarding is active.

            No. This is precisely what ssh-agent is designed to avoid. The only way that they could do this is if they were able to break RSA (or whatever public key algorithm SSH is using in a particular instance). The ssh-agent protocol is very simple. The intermediary just forwards the signing request to the host that owns the key and then passes the response back. To be able to extract the key, it would have to be able to reconstruct a private key from some known (or chosen) plaintext, the public key, and the ciphertext. If an attacker can do that, then they can reconstruct SSH keys from observing the ciphertext of any SSH session.
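
            You can see this for yourself: with agent forwarding, the remote side only ever sees public keys and can ask your agent to sign challenges; the private key material never leaves the origin host (the host name below is a placeholder):

            ssh-add ~/.ssh/id_ed25519                        # key loaded into the local agent
            ssh -A bastion.example.com 'ssh-add -l; ssh-add -L'
            # on the bastion, -l/-L list fingerprints and public keys only;
            # each signing request is relayed back to the agent on your own machine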

            1. 2

              This feels… all wrong, and yet I can’t prove that it is. At least, with the searching I’ve done up to this point.

              That’s bad in and of itself. There surely is a good document detailing the SSH threat model somewhere.

          1. 7

            For many centuries, the existence of zero was not even acknowledged. Even after it was, it was highly controversial and took a long time to come into vogue.

            People are culturally prejudiced against 0. It makes sense that people find the use of 0 as an index less ‘natural’ than 1, but the reasons for that aren’t inherent to our psyche or to the world; they’re purely cultural.

            Zero was an excellent innovation and I whole-heartedly welcome it: into the domain of mathematics, into the domain of programming languages, everywhere else. It’s a generalisation of magnitude, and a more natural way to count.

            1. 2

              We totally should use Roman numerals for indexing.

              1. 2

                INTERCAL didn’t go far enough, huh?

            1. 2

              There’s a variant of this, git recent, from https://csswizardry.com/2017/05/little-things-i-like-to-do-with-git/ which is worth a look if you like wip but also want to see the first line of the latest commit message.
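
              If you don’t want to click through, a rough equivalent (not necessarily the article’s exact alias) can be built on for-each-ref:

              git config --global alias.recent "for-each-ref --count=10 --sort=-committerdate refs/heads/ --format='%(refname:short) %(committerdate:relative) %(subject)'"
              git recent   # the ten most recently committed-to branches, newest first, with the latest subject line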

              1. 16

                All languages may be differently broken, but this is one class of problem which option types and completeness checking handle, so several languages are managing to roughly stomp this problem down, just as most modern languages manage to make buffer overflows and pointer arithmetic … “much harder to achieve”.

                It doesn’t need to be impossible to pull off vulnerable code, if we can just make sure that the default patterns and syntax generally reward non-vulnerable code much better.

                And yes, syntax; too many folks look at stuff like design patterns and think they’re a guideline for writing template code, rather than a guideline for concepts where, if something is useful enough, it should migrate into the language with syntactical support, just as design patterns such as “function calls” and “while loops” once did. We went through too long a period of complete stagnation in mainstream programming languages here.

                Fortunately, we seem to be coming out of that malaise.

                1. 6

                  People are resistant to non-brokenness in languages.

                  Sum types are starting to come into vogue a little bit; but dependent typing is still a bridge too far, and complete formal verification is right out (except for research and a few small niches).

                1. 2

                  A clean standard framework for roles:

                  • device owner
                  • scoped device administrators
                  • desktop user
                  • services users

                  The owner gets to delegate admin rights to commercial services providing support; the owner might be you, as the desktop user, or it might be a corporation, making sure that support desks get to do things.

                  If hardware belongs to me, I get total control over what gets to happen with it. If hardware doesn’t belong to me, but I’m using it on behalf of someone else, then I need to play by their rules. There are a number of sociological power structure implications here; rather than bury our heads in the sand and pretend that there are no consequences, it would be good to build frameworks which are easy to comprehend, where people can know and acknowledge or refuse the trade-offs they accept.

                  Once you have these separations of concern, it becomes easier to say, for example:

                  • Provider A gets to issue updates to web-browsers and other stuff, but doesn’t get to pull arbitrary data from outside of the apps’ constrained storage spaces.
                  • Provider B gets to manage system configuration, so that I always get up-to-date copies of Unicode, timezone databases, all the other foundational data which most people never think of.
                  • Provider C runs encrypted backups, diffing across the wire. The file-system allows for efficient delta tracking of multiple point-in-time providers: like snapshots, but tracking on a provider basis where they last saw things, so that plumbing in full backups is easy. Provider C should only be able to request “give me the encrypted chunks of data since the last checkpoint”.
                  • Provider D, which might be your estate lawyer, gets to provide offsite credential storage, with Shamir-Secret-Sharing access to decryption keys so that in the event of your passing, or your house burning down, or whatever, the relevant people can get access to your access keys for decrypting your backups, authenticating to various services, etc.
                  • Providers A, B, and E all publish feeds of current PKIX trust anchors.

                  All package management for software is built around a framework for provable history and authenticated updates, where the framework mostly provides the record of file checksums, and descriptions of vulnerable old versions. Both this tree, and subscribable services providing data feeds of known vulnerabilities, can use a common language for describing that versions of product X before 2.13 are vulnerable to remote compromise over the network, unauthenticated, and so are too dangerous to run.

                  The device owner then gets to say which subscription services have authority to do what, perhaps in combination when certain of them agree. Some services might be free, some might be provided at the national level with governments making various threats against people not at least taking the data, some might be from the company’s security team, some might be from your local computer repair shop with whom you have a maintenance contract.

                  So when the image viewer has vulnerabilities, your data feeds let the system impose a cut-off date; once a system service (“render JPEG”) has a cut-off timestamp, then as long as you track the system ingress timestamp of data, it’s a cheap comparison to refuse to pass that data: you can still see your desktop, as much stuff as possible keeps working, but you lose the ability to view new JPEGs until you install the newer version.

                  The providers intersect with the package trees because Mistakes Happen, and those providers help with recovery from them. When two different providers, in their data feeds, are telling your system “vendor A lost their release signing key, this is the new signing key, it’s inline”, that’s data for local policy to evaluate. Some providers will regurgitate crap, with malicious rumors spreading easily. Others will be high-value curation. You probably only trust the equivalent of today’s OS vendors with statements about changes in release signing keys, for instance.

                  As well as package trees being frameworks, “local data distribution” should be an orthogonal framework. Content-addressable storage (git, etc) and strong checksums, combined with bittorrent, should make it much easier to have data cached and shared on the local network. I should not be retrieving phone/tablet updates 5 times over the wire, when the packages can be pulled once and shared locally, between devices directly. A TV device which is always on can then take part in that.

                  By making this extensible enough, we ensure that stuff like OpenStreetMap data can then just be pulled once, at a particular revision, and still shared. With mobile devices, you then open the opportunity for person A, who has good Internet at home, to pull all the data for various services to their tablet; then, as they wander around other places they’ve designated to be allowed to pull data, the data can be shared to be freely available to others.

                  The social center, church, youth group, whatever, all have a box on the local wifi which never pulls data over a paid thin Internet pipe, but provides a local decentralized cache of lots of data, provided that there’s a signed tree over the data proving that this is worth keeping and giving some assurance that they’re not unwittingly hosting toxic data.

                  Those providers from earlier? One data feed they might provide is “list of data feeds worth tracking, and their signing keys”. So the director of the youth group doesn’t need to know or care about Open Street Map, they just know that all the poor kids are safe exchanging data and they’ve taken all reasonable steps to choose to limit it to be not porn, not toxic, not pirated commercial software.

                  These providers? Some will be community-based and free; some will be charging $2/year/household for just doing basic services. Some will be corporate-internal (security team, IT team). There are ways here to build sustainable local businesses.

                  There are more details around identity and access control and how they play off around protection against your every movement being tracked, vs sharing public data freely; these are important details but this comment should be a sufficient overview. I haven’t thought through all of those details but certainly have thoughts on some of them.

                  Network effects from doing all the above well will help a new OS spread. Details of the OS, such as filesystems supporting plugable point-in-time references for backups, for virus-scanners, for content indexing services (local device search), for automated home NAS duplication (not backups, just current view replication for resiliency), etc all support the higher level goals.

                  And please: all configuration of the system should go through a config snapshot system, where every change is implicitly a DVCS commit. Change a checkbox for a system feature? Okay, it might only be a single commit on a branch which only gets rolled up and pushed remotely at the end of the day, but we should always be able to make changes freely and roll forward and backward. Packages installed? Those changes are a commit too. Person clicks “remember this config state”? That’s a tag (the contents of the tag come from the GUI’s “notes about this state” field), and it gets pushed to whichever remotes matter. etckeeper is good, but it’s an after-the-fact bandaid when all configuration should be going through this to start with.
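
                  For reference, the after-the-fact version available today looks roughly like this (etckeeper with git underneath; the commit message is whatever you pick):

                  sudo etckeeper init                       # put /etc under version control
                  sudo etckeeper commit "enable feature X"  # snapshot a config change as a commit
                  sudo etckeeper vcs log --oneline          # browse, and roll forward/backward, with normal VCS commands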

                  Turkey is served, I should stop here.

                  1. 7

                    If you’re implementing support for this in a mail system, please fetch/cache the image upon receipt, instead of letting this be a user-tracking feature where the image is pulled when the message is opened! A wildcard EV cert and a sub-domain sender per recipient would let you adjust the URL to be retrieved per recipient and bypass tracking-pixel protections.

                    1. 24

                      Brave of Google, to take action where the side-effect is to exclude competitor browsers at a time when they’re under scrutiny for anti-competitive practices.

                      1. 15

                        Silicon Valley has a very friendly relationship with the incoming administration.

                      1. 1

                        The article seems to have the perspective that if the DKIM signature cannot be verified, or can easily be forged because the private keys are available, then it would not be believable that “leaked” e-mails originate from the person named. Given how many people apparently believe in Nigerian princes contacting them, I think that most people probably simply do not care about this thing called “DKIM”, if they even know about it. They will simply say that until otherwise proven, the person listed as sender is the sender. Thus, I think the article argues against a non-issue.

                        1. 4

                          I argue from personal experience that it is an issue: any time there’s an explosive leak, I have friends or relatives asking me either whether the mails can be shown to be genuine/fake, or “what’s this DKIM signature thing which the newspaper says proves it’s genuine, are they right or full of fecal matter?”

                          There may be a lot of people who’ll believe whatever they’re told, but there are also many thinking adults who, when presented with something almost too-good/bad-to-be-true, turn to people they know and trust and ask for confirmation.

                          1. 1

                            They will simply say that until otherwise proven, the person listed as sender is the sender.

                            So, consider a hypothetical that I think a lot of people here would be sympathetic to: a whistleblower wants to leak important, potentially scandalous information to a journalist, and knows that powerful entities want that information to stay secret and will retaliate heavily against the person who reveals it.

                            And consider a world where DKIM-or-equivalent has deniability/repudiation features. Now the journalist could have a tool which takes a list of plausible alternate leaker identities and generates spoofed but cryptographically “valid” (for the timestamp at which they were sent) copies of the leaker’s email from all of those alternative identities, and builds their release of the information around that.

                            Is this a perfect solution to protect the leaker? No. Is it a significant and worthwhile additional layer of protection above what would be available with current best practices? Yes.

                            So why not do that?

                          1. 9

                            In practice, some people rotate periodically (I do so every three months, some people once a year); most seem to never rotate. Publishing the private key some time post-rotation makes sense to me and I will look into doing so, probably with TXT records in DNS pointing to the URL of the historical private keys.
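
                            A minimal sketch of what one rotation looks like mechanically (the selector name and domain are made up, and plenty of tooling will do this for you):

                            openssl genrsa -out s202101.private 2048
                            openssl rsa -in s202101.private -pubout -outform PEM | sed '1d;$d' | tr -d '\n' > s202101.txt
                            # publish:  s202101._domainkey.example.com  TXT  "v=DKIM1; k=rsa; p=<contents of s202101.txt>"
                            # switch signing to the new selector, wait out mail still in flight under the old one,
                            # and only then (per the article's proposal) publish the retired private key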

                            The biggest issue is going to be that Google didn’t plan for this and so can’t rotate this easily.

                            For a mail provider for a domain, where they do not control the DNS of the domain, you need to plan out a multi-selector strategy. I went into this recently:

                            All the users of (Google Apps for your Domain)/(GSuite)/(Google Workspace)/whatever? All of those who’ve set up CNAMEs in DNS for Google’s one selector? They’re screwed if Google changes it today. Last time they rotated, they had very few such customers, and the key was compromised, so they went ahead and did it anyway. Today?

                            Google needs to set up a multi-selector strategy, then publish advice to their customers, then repeatedly audit the DNS for all the customers configured to use their service until they can see handling in place. This is going to be a headache.
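
                            The audit step is at least scriptable; something along these lines (the selector names are hypothetical stand-ins for whatever multi-selector scheme they pick):

                            for sel in google1 google2; do
                              printf '%s: ' "$sel"
                              dig +short TXT "${sel}._domainkey.customer-domain.example" | head -n 1
                            done
                            # any customer whose DNS only covers one selector shows up with an empty answer for the other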

                            1. 5

                              Interesting proposal.

                              It seems in the subtext the author is politically motivated to suggest this change, but I will ignore that aspect.

                              If keys are regularly rotated, this adds certain guarantees to email ordering. You can prove an email was sent in a given two week window. This does help with verifying correspondence after the fact.

                              However, publishing private keys leads to very poor secondary effects. Suddenly you can’t be sure an old email is legitimate. Users testing DKIM signatures will not know this. Even though the key publication is public knowledge, nobody has perfect information, and this will lead to further negative social effects as people alternately verify and debunk leaked emails. I don’t think this proposal will actually fix anything for the negative social outcomes of proving a political figure actually sent a given email.

                              1. 6

                                It seems in the subtext the author is politically motivated

                                The author mostly seems to be interested in closing a side effect of DKIM which wasn’t anticipated at the time of its design/adoption. As a few people have pointed out in the various heated comment threads on other sites, the only thing you need for secure messaging is for intended recipients to be able to authenticate the message at the time it’s received. The system’s security doesn’t depend in any way on also supporting authentication at later times or by other entities, and in fact many secure messaging systems explicitly prevent that with deniability features. But nobody seems to have thought about it for DKIM, and now we have some high-profile cases where deniability would have been really useful for the victims.

                                Too many people are also reacting to the specific cases and basically going “GRAAR I WANT THAT POLITICIAN TO SUFFER” and thus talking themselves into not wanting deniability here, rather than considering the other consequences, like whether you want deniability for dissidents who use secure messaging systems – I suspect many of us do, and deniability for the dissident is not possible without also providing it to the politician.

                                1. 6

                                  Suddenly you can’t be sure an old email is legitimate

                                  Who is the ‘you’ in this context? There’s nothing stopping a receiving mail server from doing the DKIM verification and adding a header that contains the time stamp at which it verified. This guarantees to anyone that trusts their mail server that the DKIM signature was valid at the time that the mail was received but it doesn’t extend that guarantee to third parties. You could sign that header with a private key that is frequently rotated and never released, keeping a log of all of the public keys elsewhere, so that someone compromising your mail server would not be able to forge a DKIM verification in the past.
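
                                  A very rough sketch of that idea, assuming a verifier CLI that exits non-zero on failure (dkimverify from the dkimpy tools is one candidate) and a locally held, regularly rotated attestation key (all paths and header names here are made up):

                                  if dkimverify < message.eml; then
                                    stamp="X-DKIM-Verified-At: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
                                    sig="$(printf '%s\n' "$stamp" | openssl dgst -sha256 -sign /etc/mail/attest-current.key | base64 | tr -d '\n')"
                                    printf '%s\nX-DKIM-Verified-Sig: %s\n' "$stamp" "$sig"   # prepend these headers to the stored message
                                  fi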

                                  If you then publish these keys in some externally verifiable ledger, then third parties might be able to verify the signature, but they’d still have to trust that you rolled over the private keys (if your mail server kept a copy of the old private keys, you’d be able to forge signature verification in the past, so it’s easy for someone else to claim that your validations are invalid).

                                  Now, if you really wanted to be able to publicly attest to all emails that you’ve ever received, you could run the verification in a confidential computing environment, for example a CCF service that verified the signatures and published the hashes of every verified signature in a publicly attestable append-only log, but unless you’re expecting to receive emails from someone who you can then blackmail over the contents of the emails, there’s no real incentive for anyone to do that.

                                  1. 1

                                    Merkle tree of objects consisting of Message-Id, Date, DKIM signature, verification results. Periodically use a public timestamping service to sign the top of the tree.

                                1. 2

                                  …artisanal greping…

                                  What does this mean? It sounds like flowery language for something we do every day: searching.

                                  1. 8

                                    My favorite line I’ve read today so far was in this article, and it shows that the author is not taking himself too seriously when he writes such things:

                                    Now for the fun part: a crapton of grepanalysis ✨.

                                    1. 3

                                      It is meant to be flowery language for exactly that.

                                      Not every technical post needs to be written drily. I enjoy Jordan’s blog.

                                      1. 3

                                        Sometimes the job is too boring to explain in a straightforward fashion.

                                      1. 1

                                        I’ve read many git workflows and one thing that confuses me about rebase-oriented workflows is the force-push aspect. Are you generally expected to be the only person/machine working on, or at least involved with, that particular branch? Do people not fetch proposed branches locally? Am I making too big of a deal out of inconveniencing other people when you force-push? :-)

                                        1. 4

                                          If the branches are namespaced to start with author/, then the author gets to push --force-with-lease to their branches as they see fit. Main branches (release, dev, whatever) should strenuously avoid force pushes.
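
                                          e.g. (the remote and branch names are just placeholders):

                                          git push --force-with-lease origin alice/widget-refactor
                                          # refuses to overwrite the remote branch if someone else has pushed to it since your last fetch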

                                          1. 2

                                            Ooh, I didn’t know about --force-with-lease, I’ll have to read up on that, sounds very useful, thanks!

                                          2. 4

                                            If you’re following git flow or a similar strategy, you should be doing most of your work on a ‘feature branch’ which should generally only be worked on by one person at a time. Even if you aren’t using rebase, merges can result in annoying conflicts if multiple people are working on the same branch, so I personally recommend against it for most scenarios unless your entire team has a culture of ‘trunk-based’ development.

                                            1. 2

                                              I’m just curious if, in practice, people adhering to rebase-oriented flows tend to run into trouble with force-pushes, because as a squash/merge practitioner, it sounds like it would give me a headache.

                                              1. 2

                                                Generally the assumption is that every branch can be force pushed to at any time by its owner, except for master on the main project. However, no one rebases commits on that master branch (or on another designated branch).

                                                1. 2

                                                  If you’re doing it correctly, not really. You should only force-push onto feature branches that you “own” or with explicit permission of who owns the branch. Assuming you rebase on master/develop/etc. before a ‘merge’ of a feature branch into a primary branch, it should be a pure fast-forward every time. As with most strategies, you should almost never force push to a master branch.
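
                                                  The flow, sketched out with placeholder branch names:

                                                  git checkout feature/foo
                                                  git fetch origin
                                                  git rebase origin/master
                                                  git push --force-with-lease origin feature/foo   # only onto your own feature branch
                                                  git checkout master
                                                  git merge --ff-only feature/foo                  # pure fast-forward, fails loudly if it wouldn't be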

                                              2. 2

                                                I have almost never worked on a branch that lived long enough that multiple people needed to push to it. A branch exists to work on a single self-contained thing and gets merged when it is done, in most flows I am used to.

                                              1. 1

                                                This inspired me to throw together something to publish notifications on a pubsub system; this one is bash, publishes to NATS using the nats CLI tool, needs jq and is running under Gitolite; for other systems, you’ll need a different way to get $GL_REPO (effectively, the repo slug) (and also $GL_USER):

                                                https://gist.github.com/philpennock/275099baeb529f6f1f2d0b6f1f5669f1

                                                Should be trivial to adapt to mqtt or whatever; if you need authentication, add it to the NATS_ARGS array. This was a quick hack which I’m now running at home, having put it into my gitolite admin repo’s local/hooks/ area (which is a disabled-by-default feature with security implications if you don’t trust the writers to that repo with shell access).
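
                                                The core of it is just a publish; roughly this, in bash (the subject naming and payload shape here are my own, not necessarily what the gist does):

                                                payload="$(jq -n --arg repo "$GL_REPO" --arg user "$GL_USER" '{repo: $repo, pusher: $user}')"
                                                nats "${NATS_ARGS[@]}" pub "git.push.${GL_REPO}" "${payload}"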

                                                1. 3

                                                  Looks like the default mode of NATS being at-most-once has bitten again, with the options for at-least-once being missed. When smart developers miss this, it’s a project problem in communication.

                                                  [disclaimer: I work for the main company behind NATS]

                                                  1. 1

                                                      I wasn’t aware there were other modes. Is there a way to configure NATS so that it will act like Kafka?

                                                    1. 5

                                                      For what you are doing you don’t need that, and you don’t need at-least-once semantics really either. Treat NATS like HTTP and make every message a request, meaning you will wait for the other side to confirm. I also used NATS to create the control plane for CloudFoundry and never used anything but at-most-once semantics for scheduling, command and control, addressing and discovery and telemetry.
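
                                                        With the nats CLI, that request/reply pattern looks roughly like this (the subject name is arbitrary):

                                                        nats reply demo.task 'done' &              # stand-in responder
                                                        nats request demo.task 'please do the thing'
                                                        # the request blocks until a reply arrives (or it times out), so the sender knows it was handled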

                                                      1. 2

                                                        Does this potentially run into issues with missed responses? If a response message doesn’t show up you’re not sure if it failed to arrive or your message was never received? Feels like this might be easier to avoid with HTTP.

                                                        1. 3

                                                            What happens when you don’t get a response from HTTP? That is also possible if the network stack hangs or breaks. It’s the same thing, TBH.

                                                          1. 2

                                                            Right. I guess I don’t know enough about TCP/HTTP failure or message queue send failure. Seems like NATS might drop messages under load which would have different failure characteristics than TCP (maybe not true?). Would retrying on timeout also have very similar characteristics to at-least-once?

                                                            1. 2

                                                              NATS will protect itself at all costs and will cut off subscribers who can not keep up. However, that really does not apply to request/reply semantics. So for request/reply I would say NATS and HTTP would behave similarly.

                                                              1. 1

                                                                cool, thanks for the knowledge

                                                                1. 2

                                                                  np

                                                                  1. 1

                                                                    The other half of it was that the Rust client for nats wasn’t as mature as I was hoping.

                                                                    1. 1

                                                                        You should look again; it’s a first-class citizen now, but at the time you were probably right.

                                                  1. 4

                                                    How old is their git? On my install, --depth is documented with this:

                                                      […] Implies --single-branch unless --no-single-branch is given to fetch the histories near the tips of all branches. […]

                                                    Found when I went to double-check the docs for --single-branch to see what this manual refspec setting might be doing differently from that command-line option.
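
                                                      i.e., with a reasonably recent git the whole thing collapses to (the URL and branch name are placeholders):

                                                      git clone --depth 1 --branch main --single-branch https://example.com/repo.git
                                                      # --single-branch is implied by --depth anyway, per the docs quoted above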

                                                    Is this a Groovy wrapping of Git problem, or just old Git?

                                                    1. 4

                                                      This would be the former. Jenkins+groovy+java git client = behavior you don’t expect at times.

                                                    1. 3

                                                        Good advice; last time this advice came up on Lobste.rs, my approach got voted down, but it’s realistic and honest: if I’m going to paste anything from an untrusted source anywhere near a command interpreter, I hit alt-v (or esc-v) to invoke a text editor (the edit-command-line widget in zsh, bound in my shell), and then "+p to paste from the clipboard/selection into the text editor, where I can look at it, make sure I haven’t mis-selected text and that nothing else is going on, adjust as appropriate, and then save, which drops the edited line back into the shell line-editor, waiting for me to hit return.

                                                      It’s a tiny bit of obscure setup but having it a single keystroke away and the convenience of the other things I can do mean that it pays dividends, even if I’ve only … twice? seen something truly hinky in the results.

                                                        autoload -Uz edit-command-line   # the function ships with zsh
                                                        zle -N edit-command-line         # register it as a ZLE widget
                                                        bindkey '^[v' edit-command-line  # alt-v / esc-v
                                                        bindkey '^[OP' edit-command-line
                                                      

                                                      Bracketed paste mode in Zsh is not perfect but is a nice guard, and enabled by default these days, so if I do slip up and forget to hit esc-v first, there’s still some limited protection: belt and braces together.

                                                      1. 2

                                                        If anyone runs macOS I do something similar with macOS’ Spotlight.

                                                        Spotlight is one Cmd+Space away, paste command there, edit, select all, copy, paste on terminal.

                                                        Simple and comes installed out of the box.

                                                        1. 2

                                                          I suspect an even safer is to look at the bytes in the clipboard with something like: xsel -ob | od -c. That way, nothing pasted can accidentally become an escape character for something else, and you can easily see any non-printing or control characters.

                                                        1. 1

                                                          There’s “so common that they go on the base images for Production”:

                                                          • jq
                                                          • curl

                                                          and then there’s the stuff which doesn’t quite meet that threshold:

                                                          • git
                                                          • git-crypt
                                                          • ag (silversearcher)
                                                          • tree
                                                          • direnv
                                                          • pcregrep
                                                          • socat
                                                          • oathtool, qrencode
                                                          • psql
                                                          • xmlstarlet
                                                          • rlwrap

                                                          The other honorable mention is for one which is login shell startup and only occasionally invoked directly by me, but when I do invoke it, it’s helping to save me from social failure:

                                                          • birthday

                                                          This doesn’t count “stuff which I put in for the system, rather than for me”, such as etckeeper. Nor programming languages or tools to support them (Python, pyenv, go, cargo, etc). Even though sudo etckeeper unclean has been invoked more often than you might think.

                                                            1. 7

                                                              Some more on macOS: open to replace having to use Finder, and say to make audible alerts when make is done >.<. I also have this script to make notifications easy on the command line; I call it notify and use it like notify “message” “title” (why title last? so I can just do notify message)

                                                              #!/bin/sh
                                                              # notify "message" ["title"] -- pop a macOS notification via AppleScript
                                                              message=${1:-""}
                                                              title=${2:-""}     # take the title directly from $2; `shift` would error out with no arguments
                                                              notification="display notification \"${message}\""
                                                              [ "${title}" != "" ] && notification="${notification} with title \"${title}\""
                                                              
                                                              osascript -e "${notification}"
                                                              

                                                              I also got sick of using the gui to close macos apps and made a “close” command too:

                                                              #!/bin/sh
                                                              
                                                              if [ -z "${1}" ]; then
                                                                printf "usage: close app_name\n  no application to close provided\n"
                                                                exit 1
                                                              fi
                                                              
                                                              osascript <<END
                                                              tell application "${1}"
                                                                  quit
                                                              end tell
                                                              END
                                                              

                                                              None of these is particularly interesting, just useful to have around to know when something finished or to close say firefox from the command line. But it lets you then script the gui a bit easier. I suppose I could create a repo with these random macos scripts.

                                                              I also have an old af perl script named ts that simply timestamps output you pipe to it. I think something similar is in moreutils, but I’ve had this thing for years before moreutils existed and it’s just a part of my dotfile setup, so it’s simpler to shunt around to any unix system.

                                                              1. 1

                                                                After missing the audible bell from printf '\a' one too many times, I wrote a simple shell script to use Pushover’s API to send these types of notifications.

                                                                I get a notification on my personal laptop (linux), work laptop (osx) and phone. I can run it locally or on remote machines, the OS doesn’t really matter, and the notifications are pretty much instantaneous.

                                                                script in question

                                                                Oh also pushover can be used with lobste.rs for replies and notifications

                                                                1. 1

                                                                  I just want an alert on my laptop when make finishes, not alerts on my phone, heh. I just use it like make && notify “make finished”; as long as I see the notification I’m happy. No need to involve a web API in things, IMO.

                                                              2. 3

                                                                For wayland users there is wl-clipboard which provides wl-copy and wl-paste.

                                                                1. 1

                                                                  I like the pbcopy default behavior well enough that I port it for use on X11, and handle Wayland too, so I just stick with the pbcopy command; this then works better for communicating with macOS-using colleagues.

                                                                  #!/bin/sh -eu
                                                                  # pbcopy shim: copy stdin to the clipboard, whichever display server is in use
                                                                  if [ -n "${WAYLAND_DISPLAY:-}" ]; then
                                                                  	exec wl-copy "$@"
                                                                  elif [ -n "${DISPLAY:-}" ]; then
                                                                  	xclip -selection primary </dev/null    # clear the primary selection first
                                                                  	exec xclip -selection clipboard "$@"   # then copy stdin to the clipboard
                                                                  else
                                                                  	printf >&2 '%s: %s\n' "$(basename "$0" .sh)" 'no clipboard tool found'
                                                                  	exit 1
                                                                  fi