What do people prefer between KeePassXC and passwordstore.org? Personally I use the latter, but mostly because I found it first and have invested effort into setting it up. But I was thinking of switching, since keepass(xc) stores passwords in a single file, which seems easier to manage across devices. (As opposed to pass, where the files for each website are generally separate.)
I don’t care for passwordstore.org, because as you mentioned, it leaks the accounts you have to the filesystem. If your threat model includes a multi-user system or cloud storage, then this might be a problem. With KeePassXC, this threat is mitigated as every entry is stored in a single encrypted database.
EDIT: typo
But I was thinking of switching, since keepass(xc) stores passwords in a single file, which seems easier to manage across devices
It’s multiple files with pass, but it can be a single git repo, which I’ve found is a lot more useful since it can detect conflicts and stuff. Running pass git pull --rebase otherlaptop fits a lot better with my mental model and existing tooling than “just put it in and the program performs some unspecified merge algorithm somehow”.
I’m using Strongbox on iOS these days. When I started using it, I was hesitant to pay for the Pro Lifetime version ($60), so I decided to wait and see how well it would work for at least a year. I’m happy to say that it’s been exceeding my expectations for well over two years now, and I did end up paying for the lifetime version.
I used pass for a few years, but recently switched to Bitwarden. I did try KeePassXC, but didn’t like it because:
My main issues with pass were the usual ones:
pass ... and read passwords.

Bitwarden is OK, though I really hate their CLI. There’s an unofficial one (https://github.com/doy/rbw) that’s nicer to use, but it doesn’t support YubiKey logins (https://github.com/doy/rbw/issues/7), so I can’t use it.
Syncing would be a bit clunky. Technically you can stuff the DB in Git, but it’s not great.
I do both. I have issues with neither method. My only problem with having the full history available is that there is no way to rekey the database; you have to change every password for it to make sense. Or maybe it’s only making me aware of the actual implications of leaking the db.
Qt applications under GNOME/Gtk WMs always look/feel a bit clunky
Working in a terminal 99% of the time, I have no issue with this. In my barebones i3 setup every GUI is ugly anyway. That irked me at first, but I learned not to care a long time ago.
It depends on your threat model but I sync my KeePass file using cloud sync (Dropbox, Jottacloud, Syncthing).
Been doing this for several years with no issues.
What I like about KeePass is that it is available on so many platforms. So even using OpenBSD and SailfishOS, I had no issue finding clients.
I’ve found passwordstore to be a great “clearing house” for importing from elsewhere even if it isn’t my final destination. I used it to export from 1Password and the Keepass family (which I tried but didn’t really like). I’m currently polishing off a script to import my password store to Bitwarden.
While this may seem nostalgic for some folks, I was entirely unaware of this feature[1] and am excited to see how it unfolds. So far it seems to just be a landrush for short names.
[1] https://linux.die.net/man/1/finger (Search for “plan”)
Already linked below (above?), but thought I’d link it here, too - if nothing else, this one is more up-to-date :^)
Stream starts. An OG iMac in a garage. Musicians surrounding it and sampling the bong. Maccore? Sampling more Apple sounds. Applecore, I guess. Apple Park. Tim’s standing in a field. Another event so soon! Two areas: Music and Mac.
Music. Apple: they like music (even for a lawsuit early on). Sync music state between devices, all the devices with speakers they sell, etc. And that streaming service you don’t subscribe to.
Zane to talk more about that service, I assume. Siri can already be told to play specific things. More curated playlists for Siri to select for various moods and places. Do they have lo-fi hip-hop beats? Another sub plan for Apple Music. Voice plan, where all the music in the service is accessible only via Siri (I think?), at 5$/mo. 17 countries/regions. Half the price of the normal plan.
Tim. Dave about HomePod Mini. Promo video. They come in more colours now (yellow, orange, blue?). Colour is in the mesh, touch surface, and cabling. Typical use case for a whole-home setup shown. New colours in addition to white/grey. 99$, avail in Nov.
Back to Tim. AirPods. Susmita on AirPods. Spatial audio. Music is starting to come in it (remember DVD Audio w/ 5.1 mixes?) nowadays. And movies too! Spatial audio support in Pro and Max on all Apple devices. Third gen normal AirPods. These support spatial audio now. New design. Force touch sensor on the stem for controls. New driver for low distortion w/ bass and high frequencies. Sweat/water resistant. Design for different ear shape. Adaptive EQ from Pro in normal AirPods that automatically adjust. 6 hour battery life, 5 minute charge time -> 1 hour usage. 30 hours/4 charges worth in the case. Case now supports wireless charging. Dynamic head tracking. $179 for 3rd gen. Order today, available next week. 2nd is 129$ now.
Tim. He’s still in the weeds (literal). Mac time. Apple Silicon’s curbstomping of the competition led to a lot of Mac sales.
John on Mac. MacBook Pro. A new one. Pro chip. M1 Pro.
Johny on M1 Pro. Flaunting, but Pros need a bigger chip. Two chips (CPU/GPU) require separate memory pools and more thermals. SoC pro chip. New fabric. Double memory interface, faster DRAM. 200GB/s memory bandwidth, 3x M1. UMA 32 GB w/ custom package. 5nm, 33.7B transistors, twice M1. 10 cores, 8P/2E, 70% faster than M1. 16 GPU cores, 2x the perf of M1. Media engine now supports ProRes. Multiple 4K/8K ProRes streams at a fraction of the power requirements. Newer DCP and Thunderbolt for multiple displays.
DJ Khaled goes “another one”. M1 Max. Doubles memory, newer fabric. 400GB/s bandwidth. Six times M1, twice M1 Pro. 64 GB UMA. 57B transistors. 10 cores as well, but 32-core GPU. 4x GPU perf from M1. Twice the ProRes, twice the perf in general decode/encode. Leading perf-per-watt, versus everyone else. 1.7x versus leading “PC chip” in the same power envelope, generally at 70% less power. Now GPU, comparing against IGPs - Pro is 7x faster than presumably the Intel IGP. M1 Pro is slightly above most dGPUs at a fraction of the power, 40% for Max vs. top end. (I’d like to see what specific models, but the ballpark seems accurate.) Same performance on battery, unlike the competition.
Craig on macOS. It’s obviously fast on this. Optimizations for pro applications. Better assignment of cores, especially between perf/efficiency cores. More ML optimizations, 3-20x perf vs. the i9 MBP. Security stuff like secure boot. Reiterating application support, but new updates for Apple’s pro apps. Faster spatial audio mixing in Logic. 5x perf improvement for object tracking in FCP. 10x perf w/ ProRes in Compressor. 10K ARM applications. Developers when told about M1 Pro/Max. 4x perf in Resolve. Real-time multi-level colour correction. Cinema 4D gets 3x faster. Scene edit detection is 5x faster in Premiere.
Back to John. A new MBP that has it. Promo video. MagSafe. Type C. HDMI. 3.5mm. SD slot. No touch bar. Looks kinda thick. 16” and 14”. 50% more air at the same fan speed. For most tasks the fans can remain off. 16.8mm thicc/4.7lb on the 16”. 3.5lb/15.5mm on the 14”.
Shruti on it. Keyboard. Full-height function row, going back on the touch bar in a way only Apple could. Black well. Big trackpad. Connectivity. As said, TB4, HDMI and SD on the right. On the left, 3.5mm with high-impedance support, 2 TB4, and MagSafe 3. Can charge via TB4. M1 Pro can drive 2 Pro Display XDRs, Max can do 3 and a 4K TV. (That’s a lot of pixels and ports.) No dongles, baby!
Kate on the display. Less bezel; 3.5mm, 24% thinner. Up top, 60% thinner too, with a notch. Raised menu bar that spans across, and includes the notch. The 16” is 16.2”. 1.8M more pixels. 7.7M pixels, 3456x2234, in the 16”. 14.2” is 3024x1964. ProMotion 24-120Hz display. Adaptive refresh rate. Stops when needed, goes up for scrolling, can be locked in. “Liquid Retina” HDR with more colours. Thin display with mini LEDs with local dimming zones. 1000 nits sustained, 1600 peak, 1M:1 contrast ratio.
Camera and audio with Trevor. 1080p front camera with a wider aperture. Better ISP. 2x low-light perf. Computational video. Better mic array, 60% lower noise floor. 16” has six speakers: 2 tweeters, 4 force-cancelling woofers. Bigger diaphragms. Twice as much air displacement, 80% more bass, more octaves. Spatial audio, also on the 14”.
Shruti again. Performance. 2x perf vs. the i9 MBP. Graphics are up to 2.5x for Pro vs. the fastest x86 MBP GPU and 4x for Max. 5x ML perf. 3.7x CPU perf vs. the 13” i7 and even more brutal for GPU/ML. dGPUs top out at 16 GB of VRAM, but UMA means the M1 Pro/Max can use even more. 30 streams of 4K ProRes 422 or 7 streams of 8K. Even more than the Mac Pro. 7.4 GB/s read SSDs. Efficient too. Battery life. 2x more battery in Lightroom, 4x more compiles in Xcode. 14” has 17 hours of video playback. 16” has 21 hours of video playback. Fast charging, 50% in 30 minutes. Environment. 100% recycled Al, fewer harmful substances, more renewable energy during manufacture. Promo video.
John. 14” vs. previous high end 13” compares favourably. 2000$. 16” is 2499$. Order today, available next week. Further into the transition.
Tim. The party line. Stream ends.
I don’t really archive anymore. It’s all just text and so it’s not so much, even after 15 years. As I self-host, it’s just a Maildir with old and new mails. Backing that up with deduplication is painless.
The worry that I have is not so much the cost of the storage, it’s the fact that a compromise in any mail client gives access to all of that mail. I’ve not seen any support in mail servers for spotting unusual traffic patterns and requiring 2FA to reactivate a per-client key, for example.
I kind of blend the two: 1) I aggressively delete emails, and 2) archive what I feel would be an incredible loss. For example, I don’t save any emails that are just manifestations of some service’s history, such as receipts for Apple Card payments or other statement emails. With this approach I end up with fewer than 100 emails a year, but I do have to rely on the expectation that these external services will keep the data easily available.
It’s remarkable how long this battle between botters and the bot detection system has been going on. Jagex claims to be banning thousands of bots per month. More details on the backstory here, written by Paul Gower, one of the creators of RuneScape, just over 10 years ago: https://imgur.com/a/eAuN6uT
As an additional aside, not only has this ongoing battle led to more sophisticated botting techniques to “win” at the game, but reverse engineering the game client has in and of itself led to the creation of “private server” communities. Jagex hasn’t open sourced the RuneScape server software, so players have gone off and written their own server software instead, based on reverse engineering the game client code to understand the game’s protocol.
The issue with bots is bad game design around non-fun gameplay, i.e. grinding. You only want to bot the grind. If the game is fun there’s no point in botting, because you are playing to have fun.
When you have to kill 1000000000 boars to progress, obviously it’s only gonna be bots doing that.
That’s not the only reason people use bots. If your game has a market, someone is going to game the market. If your game has PVP, someone will want top tier PVP gear without earning it. And if your game has other people in it at all, someone is going to figure out a way to grief or harass them.
i agree - whether the game is fun or not is not the important factor here, it’s the existence of the open market that leads to these issues. (If there’s no market, then bots don’t negatively affect the experience of other players as much either)
Automation like this will emerge in any online space with competitive elements. For online games, this means competition for economic or social standing. If a leaderboard exists in some capacity, someone will begin finding a way to cheat it, and botting is one way that manifests.
runescape bot detection is somewhat notorious for its false positives – there’s a significant community of innocent players who have been falsely banned for botting and unable to appeal! if playing the game legitimately puts players in danger of irrevocable bans, perhaps aggressive bot detection effectively makes botting the only real way to play the game…
Our sysadmin @alynpost is resigning as moderator and sysadmin to focus on other projects. Prgmr will no longer be donating hosting. For security’s sake, I’ve reset all tokens and you’ll have to log in again - sorry for the hassle.
Is there any risk that Lobste.rs could go offline in the future due to running costs?
Isn’t that very overpriced? 40€/month at Hetzner gets you a dedicated machine with a Ryzen 5 3600, 64GB of RAM and 512GB of SSD on RAID1 (no affiliation or anything, it’s just the provider I know).
Hetzner also uses only electricity from sustainable sources, while with DigitalOcean it depends on the location
Hetzner is the goat! I use them for my VPS and it’s the best deal I’ve seen yet for cloud services. The fact that they’re environmentally friendly as well makes it that much better!
You can rent a managed server with Hetzner and they have a panel to install and manage MySQL on it, but I don’t think it’s comparable to DigitalOcean’s managed offerings.
Would be really interesting to hear what they’re doing with “managed”. Because based on the prices, I’d say prgmr.com is also not cheap compared to the hardware you get.
I appreciate the offers but prefer not to, no. Still looking for someone to print-on-demand stickers, though.
Minor dissenting opinion:
I support a lot of people on Patreon and expect nothing in return. Chipping in $5/month to Lobste.rs because I like the community and the stuff that gets shared here isn’t a tall order, and won’t come with any entitlement. (A lot of the people I support are artists and content creators that are usually in high demand from the rest of the community.)
I can’t speak for the rest of the community, but I don’t think I’m particularly saintly in this regard. :P
If the expenses grow, please don’t rule this option out entirely.
It seems to me that the expectation comes from the design of sites which ask for monthly donations. Thinking out loud here, but a donations system which really was just a donations system - something more similar to ko-fi, without names attached - might help highlight the fact that by donating one is helping out, rather than buying into a new account tier?
I personally also donate on Patreon and expect nothing.
Thank you! That is a great attitude.
I have one concern though. What happens when lobste.rs keeps growing and the bill increases? What is the maximum you would spend on the site? Wouldn’t it be better to think about that sooner rather than later?
By design, Lobsters grows pretty slowly. I’m thinking of design decisions like invites vs open signups, and a narrow focus rather than a subreddit for everything. Growth is not a goal like it would be in a startup, and I’d pause invites if we saw some kind of huge spike.
Right off we should have plenty of spare capacity. I aimed to overprovision this new server and we’ll see if I eyeballed that correctly as we reach peak traffic during the US work week. If the hosting bill goes to about 10x current I’ll start reconsidering donations. But that may never happen! Hosting costs slowly decline as power gets cheaper, data centers get built, and fiber gets laid. Lobsters is cheap to run because it’s a CRUD SQL app pushing around text a few kilobytes at a time and our size increases slowly. I hope not to jinx it, but it seems likely that our hosting bill is flat or declines over the next decade.
I’m definitely in the market for some stickers if you find a service or have any left over from the first batch!
I don’t understand why folks are still using Python 2 even after the announcement to sunset it on January 1, 2020.
Does anyone have a recommendation for an objective overview of what seemingly went so wrong with the two-to-three transition?
As an outsider who writes at most 1KLOC of Python a year, I’m not sure, but I want to learn more.
I have a pet hypothesis: libraries matter more than programs, and the lack of a mutually compatible subset of py2.6 and py3 made things worse than they needed to be.
It’s relatively easy to port a program from py2 to py3 provided the library support is there. You change all of your code to py3, and drop py2 support. You only had to touch your own code.
For library code and particularly frameworks, it’s not so easy. They want to maintain compatibility with both during a transition period. If most devs haven’t yet moved to py3 and Django drops py2, Django will die. So they need to get all of their users to move over. Convincing other people is way harder than changing your own code.
For at least about 4 years after py3 was released, it wasn’t really feasible to ship one source code base that worked on both. It took until py2.7 and py3.3 for both languages to be fixed so that there was a workable common subset. e.g. One of the trivial to solve but very big problems is that if you want a Unicode string literal in py2 you have to write it like u"hello" to avoid having a byte string, whereas in py3.0 this syntax was not legal. In 3.3 the interpreter had pep 414 merged which made u"hello" and "hello" both be accepted and handled identically. I can’t find the patch right now but I suspect this may have been a 1 or 2 line change to the lexer.
There’s always going to be a multi year porting effort when you’re looking at frameworks with hundreds of kLoCs, and the infeasibility of maintaining a single code base supporting both delayed that process from even beginning for years. Then even once that’s finished, it takes years more for downstream users to port their own stuff over.
This all exacerbated a bootstrapping problem: most devs weren’t on py3 so the value proposition for frameworks to support it was initially poor. Most frameworks weren’t on py3 so the value proposition for devs to switch to it was initially poor. Py3 is nicer than py2 but py2 wasn’t all that bad in the first place so it took a while for the gap in functionality to be really noticeable.
(Note that tools for automatically rewriting py2 to py3, such as 2to3, did exist. My secondhand understanding is that they were not good enough for library code in practice.)
My belief now is that making sure a mutually compatible subset exists is a really good idea when breaking backwards compatibility. It would have IMO been much less harmful that many py2 programs didn’t run unmodified on py3 if there had been, out of the gate, a subset of the language which worked on both.
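To make the “mutually compatible subset” point concrete, here’s a minimal sketch (my own illustration, not anything from the py2/py3 docs) of a file that runs unchanged on Python 2.7 and Python 3.3+, leaning on __future__ imports and the PEP 414 u"" prefix mentioned above:

```python
# Runs unchanged on Python 2.7 and Python 3.3+ (illustrative example).
from __future__ import print_function, unicode_literals

# PEP 414 (Python 3.3) re-allowed the u"" prefix, so py2-era literals
# don't have to be rewritten just to parse on py3.
greeting = u"hello"

# The text type has different names, so compatible code aliases it once.
try:
    text_type = unicode  # Python 2
except NameError:
    text_type = str      # Python 3

print(isinstance(greeting, text_type))  # True on both interpreters
```

Libraries like six later packaged up exactly these kinds of shims, which is roughly what made single-codebase support practical once 2.7/3.3 became the floor.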
I can’t give you anything “objective” because anyone looking at what I write would be able to pin me as more of an insider (which is partly true, partly not). I did write something earlier this year that got a bit of traction on this site, if you want to read it.
But I feel like there are a few main things.
One is that it was never possible to go “right” in the sense of a nice smooth transition of everybody, or even nearly everybody, all in a short time. Some languages have it easier here – Ruby, for example, has diversified but is still so dominated by one domain (web development) and even one specific framework within that domain (Rails) that it can be a relatively simpler matter to drag everyone across a breaking change (and they did make it across a breaking-ish string change). Python, once upon a time, was mostly a Unix-y scripting language, but it’s now used so widely and for so many different things that getting all the constituencies on board makes herding cats look downright easy. And the old-guard folks who have been around the longest and using it for Unix-y scripting things the longest had some of the loudest objections to Python 3 (more in a minute).
Another is that organizational resistance to “maintenance programming” cannot be underestimated. When that story was going around recently about the horrors Uber went through rewriting their iOS app in Swift, a lot of people seemed surprised that the company even tried such a thing, but it really makes perfect sense once you have experience of a certain type of regrettably-common environment. In many organizations, programming work that doesn’t directly ship new features to customers, or that doesn’t otherwise have immediately quantifiable payoff, is effectively forbidden, to such a degree that often a rewrite to a new language – sold to management via the expected quantifiable payoffs of important new features or better performance or whatever – is the only way to get even basic maintenance done on existing code. This is also why I largely don’t begrudge people whose response to Python 3 was “well, time to rewrite in Go/Rust/whatever”. They probably have huge deferred maintenance burdens in their codebases, and can’t sell “we’ll rewrite in the new version of our existing language” but can sell “we’ll rewrite in $NEW_LANG and look at all the nice stuff we get”.
From a purely technical perspective, the early Python 3 releases reminded me a bit of what happened with the KDE 3 -> 4 transition, where the initial releases still were effectively technology previews rather than production-ready. Python did a better job of messaging about this, but a lot of people didn’t get or didn’t pay attention to that messaging and concluded from the earliest releases that Python 3 was a dud. The early Python 3 releases were still stabilizing APIs and still had some critical things – like a lot of the new I/O system – written in pure Python rather than C, which tanked performance, and were lacking some of the later porting conveniences (like being able to u-prefix string literals, which didn’t become legal again until Python 3.3).
And then there was the filesystem thing. Plenty of code – and a lot of critical code in things like Linux distros – that needed to move over to Python 3 was still effectively “Unix-y scripting”. And Unix-y filesystems are a horrid mess. You may have seen some of the things that go around once in a while about how there’s no portable reliable way to, say, ask to have a file actually written out to disk and get back an indication of whether the write succeeded or failed; that’s an example, but not the one that Python snagged on. Python ran into the problem of encoding: Python 3 strings are Unicode, and to get strings from bytes you have to decode from whatever encoding they’re in. Many Unix-y systems allow you to do this, but not all systems and not all configurations, and no matter how careful you are, a surprising number of people will come out of the woodwork waving copies of old specs and saying that what they are doing is technically allowed, so it’s your problem to stay compatible with what they’re doing. The only portable, reliable description of Unix filesystem paths is “bags of bytes that make no guarantees whatsoever about being decodable”. And so for a long time, Unix filesystem path encoding on Python 3 was not a great story. That has gotten better (at the cost of some hackiness in Python itself), but it was a sore point for a fair number of people for a while.
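For anyone curious what that workaround looks like in practice, here’s a small sketch (mine, not from the comment above) of how Python 3 now smuggles undecodable filename bytes through str, on a typical Linux/UTF-8 setup:

```python
# Undecodable bytes in POSIX paths survive the round trip to str via the
# surrogateescape error handler used by os.fsdecode / os.fsencode.
import os

raw = b"report-\xff.txt"       # legal on a Unix filesystem, not valid UTF-8
as_text = os.fsdecode(raw)     # b"\xff" becomes the lone surrogate "\udcff"
round_tripped = os.fsencode(as_text)

assert round_tripped == raw    # nothing lost, even though it never decoded cleanly
```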
And plenty of people either jumped ship to other languages, or were dragged to Python 3 by the libraries and frameworks they use. Or are just continuing to use Python 2, often on Linux distros which have committed to long support cycles anyway. But regardless, they have made peace with the transition in their own way, and so the furor has died down a bit.
I don’t have one handy, and can’t even think of a good one that I’ve read. As someone who has been using python at different levels since 2001, but who has never been closely involved in design/maintenance of the language or standard library personally, here are some quick but arms-length observations:
I’m sure someone can do a better job than I just did. But that’s my understanding having been around for a while without a ton of personal skin in the production of the language/libraries. Which may be the closest thing I can imagine to being objective while still having hands on knowledge in the context of the moments in question.
Also, Guido van Rossum gave a (clearly) not objective but still pretty fair overview in this talk in my opinion.
Coincidentally I just got an email from Ken yesterday afternoon.
if you know me by previous@email, please use my new email address:
*new@email*
thank you, ken thompson
I had emailed him a few years back to ask about some story I read that he had snuck an alligator into Bell Labs. Surprisingly, he responded:
there is a youtube video of a speech i gave describing the alligator. you hae to skip 25-30 minutes to get to me.
ken
I suspected the email I got yesterday was fraudulent since I hadn’t contacted him in so many years, so I replied and asked if it was legitimate. He responded back saying it was, from the previous email address.
It’s not tremendously difficult to acquire vanity email aliases at big tech companies, so long as nobody else is using the one you want already, and the Mail system supports customizing it.
I originally read Clean Code as part of a reading group which had been organised at work. We read about a chapter a week for thirteen weeks.
I am so eager to know (a) who organised this reading group, and (b) what the goal was.
@dstaley is right, it’s just an extra small metal trace somewhere inside the die. Like any other fuse, you put a high enough voltage across it and it pops. Then the CPU can just check the continuity with a lower voltage to see whether it has been blown or not.
This has some die photos of one example: https://archive.eetasia.com/www.eetasia.com/ART_8800717286_499485_TA_9b84ce1d_2.HTM
Like others have said, these fuses are on the CPU die itself. Fuses like this are actually quite common on microcontrollers for changing various settings, or for locking the controller to disallow it from being programmed after it’s received the final production programming.
The Xbox360 also did something similar with its own “e-fuses.” I assume it’s standard practice now.
Yup, it’s entirely standard for any hardware root of trust. There are a couple of things that they’re commonly used for:
First, per-device secrets or unique IDs. Anything supporting remote attestation needs some unique per-device identifier. This can be fed (usually combined with some other things) into a key-derivation function to generate a public-private key pair, giving a remote party a way of establishing an end-to-end secure path with the trusted environment. This is a massive oversimplification of how you can spin up a cloud VM with SGX support and communicate with the SGX enclave without the cloud provider being able to see your data (the most recent vulnerability allowed the key that is used to sign the public key along with the attestation to be compromised). There are basically two ways of implementing this kind of secret:
The MAC address (as @Thra11 pointed out) is a simple case of needing a unique identifier.
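As a rough sketch of the key-derivation idea (my own toy example, not any vendor’s actual scheme - the fused_secret placeholder, derive_seed, the “device-key-v1” label and the Ed25519 choice are all illustrative), the fused secret never leaves the device; only keys derived from it do:

```python
# Toy illustration: derive a per-purpose key pair from a device-unique
# fused secret, so the raw secret itself is never exposed.
import hashlib

def derive_seed(fused_secret, purpose):
    # Domain separation: different purposes yield unrelated seeds.
    return hashlib.sha256(b"device-key-v1|" + purpose + b"|" + fused_secret).digest()

fused_secret = bytes(32)  # placeholder; the real value is read from fuses
seed = derive_seed(fused_secret, b"remote-attestation")

# With a deterministic 32-byte seed you can build a signing key pair,
# e.g. Ed25519 (needs the third-party `cryptography` package):
from cryptography.hazmat.primitives.asymmetric import ed25519
private_key = ed25519.Ed25519PrivateKey.from_private_bytes(seed)
public_key = private_key.public_key()  # this half goes to the remote verifier
```

Real schemes combine more inputs and run inside the trusted environment, but the shape is the same: attest with the derived public half, never expose the fused secret.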
The second use is monotonic counters for roll-back protection. A secure boot chain works (again, huge oversimplifications follow) by having something tiny that’s trusted, which checks the signature of the second-stage boot loader and then loads that. The second-stage checks the signature of the third stage, and so on. Each one appends the values that they’re producing to a hash accumulator. Again, with a massive oversimplification, you may end up with hash(hash(first stage) + hash(second stage) + hash(third stage) …), where hash(first stage) is computed in hardware and everything else is in software (and where each hash function may be different). You can read the partial value (or, sometimes, use a key derived from it but not actually read the value) at any point, so at the end of second-stage boot you can read hash(hash(first stage) + hash(second stage)) and can then use that in any other crypto function, for example by starting the third-stage image with the decryption key or signature for the boot image encrypted with a key derived from the hashes of all of the allowed first and second-stage boot chains. You can also then use it in remote attestation, to prove that you are running a particular version of the software.
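A toy version of that accumulator, just to make the shape concrete (illustrative only; in real hardware the running value lives in a register that software can only extend, never rewind):

```python
# Sketch of a measured-boot hash accumulator.
import hashlib

def sha(data):
    return hashlib.sha256(data).digest()

stages = [b"first-stage-image", b"second-stage-image", b"third-stage-image"]

acc = b""
partials = []
for image in stages:
    # Fold the next stage's hash into the running measurement, roughly
    # hash(hash(stage1) + hash(stage2) + ...) as described above.
    acc = sha(acc + sha(image))
    partials.append(acc)

# partials[1] is what's readable at the end of second-stage boot; it (or a
# key derived from it) can gate decryption of the next image, or be reported
# in a remote-attestation quote.
```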
All of this relies on inductive security proofs. The second stage is trusted because the first stage is attested (you know exactly what it was) and you trust the attested version. If someone finds a vulnerability in version N, you want to ensure that someone who has updated to version N+1 can never be tricked into installing version N.
Typically, the first stage is a tiny hardware state machine that checks the signature and version of a small second-stage that is software. The second-stage software can have access to a little bit of flash (or other EEPROM) to store the minimum trusted version of the third-stage thing, so if you find a vulnerability in the third-stage thing but someone has already updated with an image that bumped the minimum-trusted-third-stage-thing-version then the second-stage loader will refuse to load an earlier version. But what happens if there’s a vulnerability in the second-stage loader? This is typically very small and carefully audited, so it shouldn’t be invalidated very often (you don’t need to prevent people rolling back to less feature-full versions, only insecure ones, so you typically have a security version number that is distinct from the real version number and invalidate it infrequently). Typically, the first-stage (hardware) loader keeps a unary counter in fuses so that it can’t possibly be rolled back.
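The unary fuse counter is simple enough to sketch in a few lines (a hypothetical model, not real fuse-controller code): each fuse flips from 0 to 1 exactly once, so the count can only ever go up:

```python
# Hypothetical model of an anti-rollback counter stored in one-time fuses.
class FuseBank:
    def __init__(self, size=64):
        self.bits = [0] * size          # pretend one-time-programmable fuses

    def read_version(self):
        return sum(self.bits)           # blown fuses = minimum allowed version

    def bump_version(self, new_version):
        # Blowing fuses is irreversible, so a downgrade is physically impossible.
        if new_version < self.read_version():
            raise ValueError("fuse counters only move forward")
        for i in range(new_version):
            self.bits[i] = 1

fuses = FuseBank()
fuses.bump_version(3)        # e.g. after an update that invalidates old loaders
assert fuses.read_version() == 3
```

Burning one fuse per security-version bump is wasteful but dead simple, which is presumably why it shows up so often for this job.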
(You likely know this, but just in case:)
What you describe above is a strong PUF; weak PUFs (that do not take inputs) also exist, and - in particular - SRAM PUFs (which you can get from e.g. IntrinsicID) are pretty reliable.
(But indeed, lots of PUFs are research vehicles only.)
Examples of fuses I’ve seen used in i.MX6 SoCs include setting the boot device (which, assuming it’s fused to boot from SPI or onboard MMC, effectively locks out anyone trying to boot from USB or SD card), and setting the MAC address.
I can’t help but think that at the scale Kubernetes works best at, you end up with the lesser of two [tools] principle. From a non-technical angle, if you have a complex system spanning multiple machines, supported by lots of code, containing architectural complexity, and so on and so forth, at least having an open core that a new hire can understand will lower the barrier to entry for new hires to come in and hit the ground running with contributions.
I once read a great article in defense of Kubernetes that made a similar claim and I wish I could find it.
Essentially, their point was that doing infrastructure at the scale where you need this tooling means you already have a complex environment with complex interactions. If you think you can just say “oh, we’ll use some homemade shell scripts to wrangle it” you are ignoring the fact that you needed to evolve those scripts over time and that your understanding of them is deeply linked with that evolution.
I don’t know the first thing about Kubernetes, and still barely do - but when I started my current job I was able to have an understanding about how their software is deployed, what pieces fit together, etc. When I got confused I could find solid documentation and guides… I didn’t have to work through a thousand line Perl monstrosity or try to buttonhole someone in ops to get my questions answered.
It contains a full copy of Mark Twain’s novel “The Adventures of Tom Sawyer”. It gets really tiring having to blacklist this file in my desktop search engine, as it otherwise constantly comes up in unrelated searches for words that happen to appear in this novel.
Why.
It was compressed down to 112 KB. https://github.com/sthagen/llvm-project/pull/297/files#diff-bc34bddcbcace10a9d067aa84bec27ab
I tried a few different approaches to productivity, including a notebook and pen, an iPad, etc.
I settled on simply having a set of somewhat structured documents tracking the work of entire projects. If a new project comes up, I create a new document.
Each document has:
I go through the documents and simply append to the Timeline when something new comes up.
In theory I could coalesce all these documents into a single .txt file, which would achieve something similar to what you’ve done.
Off-topic: I used to own this domain.
I should have never gotten rid of it. Oh well.
Who knew .io would take off like it did… Don’t kick yourself. It’s at least a cool, alternative universe where they might have written you a check for it. :)
It’s at least a cool, alternative universe where they might have written you a check for it.
I have the unusual honor to say as a 16-year-old I was able to sell a domain to a company for a four-figure amount. The alternative universe is a reality for me, albeit in a different context.
P.S. I’m holding out on w-1.net and w-3.net. ;-)
I’m using three as a Kubernetes cluster so I can learn how to set it up on cheap hardware. Learned a lot along the way, and I’d highly encourage anyone else interested in learning Kubernetes to do the same!
Yeah, I shared a write up by someone else about a month ago: https://www.jeffgeerling.com/blog/2019/everything-i-know-about-kubernetes-i-learned-cluster-raspberry-pis
This doesn’t appear to be decided yet, but rather a tweet that links to a discussion.