I would love to visit the Computer History Museum someday. It’s interesting to be able to see (in the photos he provided) the actual components that store each bit of memory. I also love how the unit is designed for easy maintenance.
Reduce inequality due to bringing better paying jobs to lower cost regions
Yep, that’s exactly how it’s working out in Silicon Valley.
Tutorials would be a good resource, but one reason that you don’t find many is that they potentially need to be updated every 6 months - and no one ever seems to keep their tutorials up to date…
There have been various threads on the OpenBSD misc@ list over the years, but as the developers put so much effort into producing great man pages, that has been the default answer to this issue.
When I was starting with OpenBSD I already had The Complete FreeBSD by Greg Lehey as a handbook, as my journey with *BSDs started with FreeBSD; when I discovered OpenBSD in 2000, it remained a useful resource. FreeBSD still has its handbook, parts of which remain relevant to OpenBSD since both have their roots in 4.4BSD.
That’s actually one of the reasons I appreciated Burnett’s guide, because he keeps it updated with each new version. I can recommend it to someone with confidence that it’ll apply to the latest OpenBSD release.
I’m hoping to introduce some of my more enterprising students to OpenBSD and *nix in general next year, and a clear tutorial can be a solid resource to get them over the initial learning curve of interacting with a non-Windows or non-macOS system.
I agree that the manpages are an excellent resource, and a solid tutorial should lead users toward the manpages instead of StackOverflow.
There are professional tutorials from events.
Very cool - I really like lower-power equipment like this. However, I think it’s a terrible idea for security, since it looks like it’s an unencrypted video stream, which would make eavesdropping trivial.
KeePassXC on my personal laptops and desktop, KeePassDX on Android, and vanilla KeePass on my Windows workstation. I keep the database in sync with Syncthing. I have a separate LastPass account for work accounts.
I’m using Syncthing at home. Just mirror and sync a folder across multiple machines.
One downside I see is the lack of storage somewhere else while all laptops are at home. Geographic risk.
It also requires all machines to store the full state. ~100GB in my case.
One popular differentiation between file synchronization and backups is that you can travel back in time with your backups. What happens if you - or, more realistically, software you use - deletes or corrupts a file in your Syncthing repository? It would still be gone/corrupted, and the problem would automatically be synced to all your machines, right?
Personally I use borgbackup, a fork of attic, with a RAID 1 in my local NAS and an online repository. Honestly, I don’t sync to it too often, because even deltas take ages with the very low bandwidth I have at home, so I did the initial upload by taking disks/machines to work… and I hope the online copies are recent ‘enough’. I can’t really shake the thought that in scenarios where both the disks in my NAS and the original machines are gone/broken (fire at home, burglaries, etc.), I would probably lose access to my online storage too. I should test my backups more often!
I use Borg too! At home and at work. I also highly recommend rsync.net, who are not the cheapest, but have an excellent system based on firing commands over ssh. They also have a special discount for borg and attic users http://www.rsync.net/products/attic.html
Hmm - that’s really not the cheapest!
3c/gb (on the attic discount) is 30% dearer than s3 (which replicates your data to multiple DCs vs rsync.net which only has RAID).
True, though S3 has a relatively high outgoing bandwidth fee of 9c/gb (vs. free for rsync.net), so you lose about a year of the accumulated 0.7c/gb/mo savings if you ever do a restore. Possibly also some before then depending on what kind of incremental backup setup you have (is it doing two-way traffic to the remote storage to compute the diffs?).
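The “about a year” claim is easy to sanity-check with quick arithmetic. A minimal sketch, using the thread’s own figures (assumed prices from the time of the discussion, not current quotes from either provider):

```python
# All figures come from the discussion above, in cents per GB; they are the
# thread's assumed prices, not current quotes.
s3_storage_savings = 0.7   # S3 is ~0.7c/GB/mo cheaper than the attic discount
s3_egress_fee = 9.0        # S3 outgoing bandwidth; rsync.net egress is free

# Months of accumulated storage savings wiped out by a single full restore:
months_to_break_even = s3_egress_fee / s3_storage_savings
print(f"{months_to_break_even:.1f} months")  # 12.9 months, i.e. about a year
```

Any two-way diff traffic before the restore only shortens that break-even further, as noted above.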
Ahh, I hadn’t accounted for the outgoing bandwidth.
That said, if I ever need to do a full restore, it means both my local drives have failed at once (or, more likely, my house has burned down / flooded); in any case, an expensive proposition.
AFAIK glacier (at 13% the price of rsync) is the real cheap option (assuming you’re OK with recovery being slow or expensive).
RE traffic for diffs: I’m using Perkeep (née Camlistore), which is content-addressable, so it can just compare the list of filenames to figure out what to sync.
Eh - I don’t mind paying for a service with an actual UNIX filesystem, and borg installed. Plus they don’t charge for usage so it’s not that far off. Not to shit on S3, it’s a great service, I was just posting an alternative.
Yeah that’s fair, being able to use familiar tools is easily worth the difference (assuming a reasonable dataset size).
Syncthing is awesome for slow backup stuff. But I wish I could configure it to check for file changes more often. Currently it takes around 5 minutes before a change is detected, which results in me using Dropbox for working-directory use cases.
You can configure the scan interval for Syncthing, and you can also run the syncthing-inotify helper to get real-time updates.
That’s one huge advantage of Resilio Sync. You don’t have to store the full state in every linked node. But until RS works on OpenBSD, it’s a no-go for me.
I said it before on the fediverse, but this isn’t N800/N900-like at all. The Pocket is in the tradition of UMPCs (very small desktop-experience x86 computers, often with weird form factors), while the NITs like the N800/N900 are closer to a smaller version of modern tablets or smartphones (like the iPod touch), just running a more ‘normal’ GNU/Linux. Apples and oranges.
Personally I would have loved for the Pocket to come in the form factor of the N800 series or N900 series devices. A sliding thumb keyboard seems to make more sense than a very tiny touch-type keyboard.
I think the most clever bit about the OSS guidance is that confronting someone who does those things actually makes you look more like the saboteur.
This project has been very exciting to watch as it develops. It’s incredible to me the amount of progress they have made in such a short time.
I’ve been very pleased with DokuWiki, despite my (hypocritical, see below) tendency to complain about the install instructions for different distributions and operating systems. So far my favorite setup has been DokuWiki running on OpenBSD. However, I completely failed to document the process and so I’m going to have to sit down and run through it again to contribute to the OpenBSD guide on the DW site.
I love a good deep dive into a niche topic like this. I just completed a scavenger hunt activity with my social studies class using Google Maps and Street View, and I believe that it has become optimised for that sort of use, compared to navigation. Basically you use it to find locations, not get to them. The intent being that you use turn-by-turn for that. However, when you’re in a place where turn-by-turn isn’t available (or you haven’t been given a name or exact address to navigate to), you’re going to wish you had a better alternative for traditional navigation.
They’re trying to bring macOS in from the cold by making it easier for iOS devs to work on both platforms. Microsoft tried to do this with Windows Phone. The problem in both cases is that the platform itself is less interesting for developers. So sure, the Twitter for Mac app might get updated more frequently but it will still be an afterthought compared to the iOS version, which is where Twitter knows most of the eyeballs are. macOS will still be the “also ran” in comparison to the iPhone since more people will live with a webapp on the desktop than on mobile.
This bit really stood out to me:
Compassion presents an optimization problem — it’s about understanding and minimizing suffering. It’s not the same as politeness or niceness, and it often involves speaking honestly and assertively.
I need to do what’s best for the other person, not for me.
Parts of this seem very similar to this article from 2013.
I’m reading Ender’s Shadow at the moment.
Yes. I reread Ender’s Game a couple weeks back after reading The Swarm. Then I went back and read the First Formic War trilogy, Earth Unaware, Earth Afire, and Earth Awakens. Now I’m reading the parallel novels and then I’ll move on chronologically. I was originally planning to read them in the order published, but since Card says he wrote them to be read either way, I’m going chronological.
https://medium.com/@lemiorhan/the-story-behind-anyone-can-login-as-root-tweet-33731b5ded71
The author’s follow-up to the tweet. I’m not sure it really clears up too much, except that they knew the issue had been mentioned in other places already.
To illustrate what I have in mind… most people who have studied mathematics seriously, even teenagers, can quickly sum up all numbers in a sequence. For example, what is the sum of the numbers from 1 to 99? Sounds hard? Then maybe you can look up a formula online. Maybe. But once you know the “trick”, you can do it in your head, quickly, without effort. There is no miracle involved. To sum the numbers from 1 to 99, just pair them up: 1 with 99, 2 with 98… and so forth, up to 49 with 51. That gives you 49 pairs, and each pair sums to 100 (99+1, 98+2, …). So you have 49 times 100, which is 4,900. Then add the remaining number (50), so the sum is 4,950.
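The pairing trick can be written out in a few lines. A minimal sketch (the function name is mine), assuming n is odd so exactly one middle number is left unpaired:

```python
def pair_sum(n):
    """Sum 1..n via the pairing trick described above (n assumed odd)."""
    pairs = n // 2            # for n = 99: 49 pairs (1+99, 2+98, ..., 49+51)
    middle = (n + 1) // 2     # the unpaired middle number, 50
    return pairs * (n + 1) + middle  # each pair sums to n + 1

print(pair_sum(99))  # 49 * 100 + 50 = 4950
```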
I did not know that, but now I can check off my “learn something new today” box.
Regarding the article as a whole though, I guess a big part of his premise is in the definition of “tools”. I guess I would agree that mental models are tools. But when you equate mental performance with tools, then it seems like the final bit goes from:
My answer is that acquiring new tools is the surest way to get smarter.
to:
acquiring new tools is the surest way to get more tools.
But when you equate mental performance with tools, then it seems like the final bit goes from:
My answer is that acquiring new tools is the surest way to get smarter.
to:
acquiring new tools is the surest way to get more tools.
I don’t think it’s that tautological. I read it as ‘acquiring new tools is the surest way to be able to solve more problems.’
The triangle numbers (the sum of 1 to n) are quite an interesting thing. When I was in 8th class I kind of discovered this little fact on my own and simplified it into (n^2 + n) / 2, or n * (n+1) / 2. The interesting thing is that this is also the number of ways to choose a pair from n elements when an element may be paired with itself; n * (n-1) / 2 gives the count without self-pairing. They are also useful in table manipulation.
It’s quite amazing how such a simple little operation can turn into something mindblowing and useful.
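For anyone who wants to poke at those identities, here is a small sketch (names are mine) checking the closed form against Python’s `math.comb`:

```python
from math import comb

def triangle(n):
    """n-th triangle number: 1 + 2 + ... + n, via n * (n + 1) / 2."""
    return n * (n + 1) // 2

for n in range(1, 20):
    assert triangle(n) == sum(range(1, n + 1))   # matches the plain sum
    assert triangle(n) == comb(n, 2) + n         # pairs from n elements, self-pairs allowed
    assert triangle(n) == comb(n + 1, 2)         # equivalently: distinct pairs from n+1 elements
    assert n * (n - 1) // 2 == comb(n, 2)        # pairs without self-pairing

print(triangle(99))  # 4950, the running example from this thread
```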
When discussing failures, people need to feel safe to share all relevant information, with the understanding that they will be judged not on how they fail, but how their handling of failures improved the team, their product and the organization as a whole. Teams with operational responsibilities need to come together and discuss outages and process failures. It’s essential to approach these as fun learning opportunities, not root-cause obsessed witch-hunts.
Having worked in a “witch-hunt” environment in the past, I still struggle to articulate information about failures. If they’re my own, I wonder how they will impact my future. If they’re someone else’s, I feel like I’m throwing them under the bus. It’s very difficult to overcome that, even though I know I’m no longer dealing with the same people.
I enjoyed this little hyperlink rabbit trail: https://jpmens.net/2018/06/19/on-a-pos-pole-display-and-an-open-source-os/
Thanks @romanzolotarev for assembling these interviews! Been reading them all day.
I’m glad you found it. I like it too :)