I used Linux every day for years on multiple machines for work, in the 90s with GUI etc. Didn’t have any of these problems.
¯\_(ツ)_/¯
Having multiple machines might have been part of why your experience was good.
I remember triple-checking every config and then rebooting with fingers crossed. Because if my computer didn’t come back up, I didn’t have access to another one, so I couldn’t look up how to fix the problem…
There was a place we went in Chinatown that sold generic beige towers. I have no memory of what was in them except they were inexpensive.
I used a couple of distros. First Caldera, then Red Hat. Used Slackware for a bit, but RH was less fiddly to set up, so I usually used it. Added bonus with Red Hat was that it was our “production distro” in the later years.
Caldera was the one that cost me two gigs. Worked nicely on a friend’s machine, though. And it was $20 vs $100+ for Windows. I could at least see the potential.
https://lobste.rs/s/oqitcz/my_experience_with_linux_90s_why_i_have#c_ut793d
EDIT: It was a good way to learn that I’d better really understand shit myself, especially recovering it, before putting it on my machine. :)
Ah, thank you for posting this!
Learning Magit has been the best workflow improvement I’ve made in the past year. I’m happy to back Jonas’s campaign.
I wonder if there’s a way to “shoehorn” lazy evaluation into Ruby while both simplifying the interface and honoring its existing idioms. I like Ruby for a lot of reasons, but modularity and functional programming aren’t two of them.
I have half of another post drafted about all the problems I ran into when trying to implement this in Ruby. The tl;dr version is that internal and external iterator methods don’t play well together!
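For comparison, Ruby does ship one built-in take on this: `Enumerator::Lazy`, which defers each stage of a chain until values are actually demanded. A minimal sketch (not the approach from the post, just the stock idiom):

```ruby
# Enumerator::Lazy lets us chain select/map over an infinite range
# without evaluating everything up front; first(3) pulls only as
# many elements as it needs.
evens_squared = (1..Float::INFINITY).lazy
                                    .select(&:even?)
                                    .map { |n| n * n }
                                    .first(3)
# => [4, 16, 36]
```

Calling `first` (or `force`) is what finally drives the pipeline; until then, nothing is computed.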
I subscribe to Pinboard’s archiving feature in order to get this. I don’t use it often, but when I need it, it’s worth it. Sometimes the content has simply moved, but other times the site is offline (guess I need an archive of Pinboard’s archive though…).
This statistic quoted in the article is worrying:
A 2014 Harvard Law School study by Jonathan Zittrain, Kendra Albert and Lawrence Lessig, determined that approximately 50% of the URLs in U.S. Supreme Court opinions no longer link to the original information.
It will be very difficult for future courts to track down these references (unlike those in books).
No, it only saves the page you bookmark. It does download assets, though, so you can look at it later more-or-less how it looked at the time.
FWIW with the new headless mode Chrome can save a PDF from the command line quite easily. To me, a PDF has always seemed superior to trying to save all the assets and such, particularly in this age of heavy client-side “apps”.
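A sketch of that invocation (the binary name varies by platform: `google-chrome`, `chromium`, or `chrome`, and the output path here is just an example):

```shell
# Render a page to PDF with headless Chrome; --disable-gpu is a
# commonly recommended flag for headless runs on some platforms.
google-chrome --headless --disable-gpu \
  --print-to-pdf=archive.pdf \
  "https://example.com/"
```

The resulting PDF captures the rendered page, including content produced by client-side JavaScript, without having to save assets separately.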
I don’t think anything can teach you the value of operability (stuff like good logs, metrics, dashboards, exception tracking…) better than being on call. When something is going wrong, you need insight into what is happening in the system.
Author here. Let me know if you have comments or feedback on the article. It was fun to write.
(P.S.: Glad to finally join Lobsters. I’ve been reading it for a long time.)
Ah, this brings back memories. When I was trying to get my sound driver working back then, I recompiled my kernel so many times that I dreamed of the text scrolling on the screen.
Same. I got good at recognizing the patterns and had a pretty good idea when it would fail. I called it “zen compiling”.