The age-old debugging technique of “type it again.” In my experience, it does indeed work unreasonably well (in nearly every context, not just shell commands).
I did all of Linux From Scratch by hand once before using the automated process. Entirely eye-opening. Took me three months at about 2 hours per day. One automated run? 75 minutes.
I started doing that way back when I didn’t know how to cut and paste in a terminal. I’m glad I kept it up.
Recently I wrote a really simple server for testing purposes. I could have easily copied an echo server from somewhere, but I went through the exercise of reading the particulars of socket/bind/accept and friends. I learned a fair bit and it was kinda fun.
And no, I didn’t even let myself copy code from the man pages.
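For anyone curious what that exercise looks like, here is a minimal sketch of the socket/bind/listen/accept dance the comment describes, written against Python's thin wrapper over the same POSIX calls (the function name and structure are mine, not the commenter's code):

```python
import socket
import threading

def run_echo_server(host="127.0.0.1", port=0):
    """Serve one client connection, echoing bytes back, then exit.

    Returns the bound port (port=0 asks the OS to pick a free one),
    so a caller can connect to it.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # socket(2)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))   # bind(2): attach the socket to an address
    srv.listen(1)            # listen(2): mark it as accepting connections
    bound_port = srv.getsockname()[1]

    def serve():
        conn, _addr = srv.accept()  # accept(2): block until a client connects
        with conn:
            while True:
                data = conn.recv(1024)  # empty bytes means the peer closed
                if not data:
                    break
                conn.sendall(data)      # echo it straight back
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return bound_port
```

The C versions behave the same way; the man pages for socket(2), bind(2), listen(2), and accept(2) cover the details the comment alludes to.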
There’s no need for such subversive tactics when people will gladly cut and paste shell commands that install something over untrusted HTTP and pipe it to a shell, possibly using sudo.
Even running make on some repo is trusting whoever wrote it not to rm -rf / you. Sometimes you just have to trust what you’re getting.

The trust model is essentially “whatever I received has most likely also been received by many others, who have presumably run it, without reporting back that it blew up their computer”. From a practical standpoint those assumptions are not actually unreasonable, which is why this stuff isn’t as dizzyingly dangerous as people make it out to be when they write about it. OTOH, the alarmists are correct insofar as that trust model offers “herd immunity” only, with no guarantees whatsoever to you personally. In particular, you might be the first one to receive whatever you received, whether because upstream just changed it, because of an injection in transit, or for some other reason.
I never understood this argument. If the repo contains software, you’re going to run (and thus trust) the compiled product anyway. So, to me, “don’t pipe untrusted HTTP to a shell” sounds almost like “don’t run software you’ve downloaded over untrusted HTTP”.
But then, what’s the deal with HTTP anyway? Do you trust the authors of the software and the web site operators so much that, as long as their software is delivered to you intact, encrypted and signed, you consider it safe?
Piping scripts into the shell discourages people from using other options, like the ports/packages provided by their OS, which usually incorporate at least enough checksums to detect unexpected changes.
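The checksum point boils down to: verify before you execute. A sketch of that check, assuming the publisher advertises a SHA-256 digest alongside the download (the file and digest here are local stand-ins, not a real package):

```python
import hashlib

def sha256_matches(path, expected_hex):
    """Return True iff the file at path hashes to the advertised digest.

    This is the core of what a package manager's checksum step does:
    refuse to proceed when the bytes on disk differ from what was published.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex

# Demo with a local stand-in for a downloaded installer script.
with open("install.sh", "wb") as f:
    f.write(b"echo installed\n")

# In real use this digest comes from the publisher, over a separate
# (ideally authenticated) channel; here we compute it only for the demo.
advertised = hashlib.sha256(b"echo installed\n").hexdigest()

ok = sha256_matches("install.sh", advertised)
tampered = sha256_matches("install.sh", "0" * 64)
```

A checksum fetched from the same unauthenticated channel as the script adds little; the value comes from getting the digest (or a signature over it) through a path the attacker doesn’t control, which is exactly what OS package repositories arrange.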
On a completely unrelated note, I find that by not copy/pasting code snippets and shell commands, I actually understand them better.