Based in Europe, I am buying “Euro-Boxes” in different sizes for all my “organizing physical stuff” needs.
They are cheap, stackable, durable, and available in a large range of sizes.
Much better than flimsy and overpriced IKEA boxes.
Personal: MacBook Air (13-inch, Mid 2013) 1.3GHz dual-core Intel Core i5, 8GB RAM, 256GB Flash-Storage
This is a super interesting analysis. I’ve implemented cross-fades for my video editor project, http://www.openmovieeditor.org/, and I always wondered how to get them right when the images have alpha channels.
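For reference, the recipe I keep seeing for this is to do the blend in premultiplied alpha and convert back afterwards. A rough sketch, assuming numpy float RGBA frames with values in [0, 1] (not taken from the article, just my understanding):

```python
import numpy as np

def crossfade_rgba(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Cross-fade from frame `a` to frame `b` at position t in [0, 1]."""
    # Premultiply colour by alpha so semi-transparent pixels mix correctly.
    a_pre = np.concatenate([a[..., :3] * a[..., 3:4], a[..., 3:4]], axis=-1)
    b_pre = np.concatenate([b[..., :3] * b[..., 3:4], b[..., 3:4]], axis=-1)

    mixed = (1.0 - t) * a_pre + t * b_pre  # blend colour and alpha together

    # Convert back to straight (non-premultiplied) alpha for the output frame.
    alpha = mixed[..., 3:4]
    rgb = np.where(alpha > 0, mixed[..., :3] / np.maximum(alpha, 1e-6), 0.0)
    return np.concatenate([rgb, alpha], axis=-1)
```

Blending the straight RGB values directly tends to produce dark fringes around semi-transparent edges, which is why the premultiply step matters.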
IMHO the best thing about the PHP ecosystem is that you have a large variety of hosting providers that will do your PHP hosting, and due to PHP’s nature it is very easy to switch providers should you be unhappy, …
In practice I’ve found the opposite to be true (sometimes). PHP’s “just copy those files, YOLO” approach means the dependency story is still rather poor. Want some DB client? You need to find hosting which has that extension installed/enabled. Same for caching / opcode speedups. Same for format parsers. Maybe you get cPanel where you can install some of them, but it’s very rarely code-driven. If you have pure-PHP code, it’s easier.
In comparison, Ruby / Node / even Python have an easy way to install those at deploy time in most PaaS environments.
Great advice, and I really like your examples, I’ve seen many of exactly these conversations in the “wild”.
I have used NixOS as my daily driver for a couple of months now and I love it. However, I have a very superficial understanding of its architecture, so I struggle to make sense of this. I’ve read https://r13y.com/ already, but it left me with more questions.
Yocto Project (an embedded Linux system) also has a reproducibility status page:
https://www.yoctoproject.org/reproducible-build-results/
Here is their wiki page about the topic: https://wiki.yoctoproject.org/wiki/Reproducible_Builds
What is being compared to determine whether two builds are consistent with one another? (diffoscope?) Isn’t the output necessarily different due to hardware optimizations? Are they turned off for the purposes of these tests?
In my understanding, reproducible builds require that you target the same hardware, e.g. arm64 without any extended instruction sets. Non-deterministic optimizations need to be turned off for that. https://reproducible-builds.org/docs/ is a nice resource, listing things which make reproducible builds complicated in practice.
Does reaching the 100% threshold unlock new capabilities or use-cases?
Yes, one can assert whether a given ISO image matches upstream sources and hasn’t had any backdoors or the like baked into the binary, without disassembling it. This ability is lost if you are below 100%.
Are there other 100% reproducible (non-toy) operating systems? How non-reproducible are other OSes?
None that I know of, but many are working on it, see https://reproducible-builds.org/projects/
Yes, one can assert whether a given ISO image matches upstream sources
The act of verifying removes the need for verification. When you build it yourself to check, you no longer need to check. Just use your build artifacts.
Reproducible builds are nice for other reasons (e.g. caching by hash in distributed builds), but they’re security snake oil.
Finally: if you’ve got a trusting trust attack, you can have a backdoor with no evidence in the code, which still builds reproducibly.
If you’re building it yourself to check, you no longer need to check. Just use your build artifacts. This is security theater.
It’s not. It would explicitly have prevented the Linux Mint ISO replacement attack we saw 6 years ago.
https://blog.linuxmint.com/?p=2994
(Part of the story is that the parent comment is just parroting talking points from Tavis.)
It’s not. It would explicitly have prevented the Linux Mint ISO replacement attack we saw 6 years ago.
Can you explain how anyone would have noticed without building the ISO from scratch?
I think preventing it is hard because there are so many avenues to exploit, but reproducible builds can help you determine whether a build has been compromised. If you don’t know whether the attacker managed to alter your build artifacts, you can just rebuild them and do a byte-for-byte comparison. If your builds aren’t reproducible, you have to look at what the differences are: are they changed timestamps? optimization levels? reordered files? etc
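To make that concrete, here is a minimal sketch of the rebuild-and-compare check; the file names are placeholders, and it assumes you have already produced your own rebuild from the same sources:

```python
# Compare a published artifact against a local rebuild, byte for byte.
import hashlib

def sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

official = sha256("distro-release.iso")  # image fetched from the mirror
rebuilt = sha256("my-rebuild.iso")       # image rebuilt locally from source

if official == rebuilt:
    print("bit-for-bit identical: the published image matches the sources")
else:
    print("mismatch: inspect the differences, e.g. with diffoscope")
```

With reproducible builds the check collapses to this simple equality test; without them, every mismatch needs manual triage.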
You need to build it yourself to check, but you could notify others if the hashes mismatch, as that would be much more suspect than it would be for non-reproducible software. Independent third parties could build other people’s ISOs on a regular basis to check.
Also, I forgot the obvious circumvention (beyond the trusting trust attack): being lazy and putting the exploit into the distributed code. Since in practice nobody actually audits the code they run, this would effectively circumvent any benefits from reproducible builds. Signing the ISO gets you a hell of a lot more bang for the buck.
Reproducible Builds only concerns itself with the distribution network and the build server. It can’t solve compromises of the input, because that is not the goal. We need other initiatives to solve that part. Reproducible Builds is only part of the puzzle, and people like you and Tavis really struggle to see that. I don’t know why.
This is very much like claiming memory safety issues are pointless to mitigate since logic bugs are still going to exist. But wouldn’t eliminating memory safety issues remove a good chunk of the attack surface? Isn’t that a net gain?
I can get the claimed benefits of reproducible builds by taking the exact steps I’d need to verify them – running a compiler and deploying the output.
If you can tell me how to get memory safety by running a compiler once over existing code, with no changes and no runtime costs, I’d also call any existing memory safety efforts snake oil.
Again, if you’re concerned about the security problems that reproducible builds claim to solve, you can solve them today with no code changes. Just run the builds.
Again, if you’re concerned about the security problems that reproducible builds claim to solve, you can solve them today with no code changes. Just run the builds.
I have better things to do than swap out my distribution for Gentoo and pretend it solves the problem.
Yes of course, my claim was that one could check whether the binary matches the sources, not that it magically solves all security issues. You are right that people need to be able to trust their toolchains in the first place (trusting trust), but this is true for all software, reproducible or not.
Another initiative in this direction is “bootstrappable builds”, https://www.bootstrappable.org/
Nobody! And even fewer people if the ISO build is not reproducible. Which is the point of reproducible builds.
Ensuring we can reproduce a bit-for-bit identical artifact ensures we can validate the work, even with a signing key compromise. Without reproducible builds we are left to our own devices and have no way to even start validating it.
Rather than creating yet another “standard” for convergence, it would be cool if regular open source productivity apps would be ported to Android.
LibreOffice on Android just the way it currently is would be sufficient.
Most tablets already support plugging in a mouse and keyboard.
So with regular productivity apps available, you could just use them with an “El Cheapo” USB mouse and keyboard.
It’s not actually e-ink though. It’s a transflective display, so it doesn’t have the battery benefits. Still, it’s useful for working with a laptop in sunlight.
Yeah, I wanted to say this as well.
Nevertheless, it is a great display. I have a modded Lenovo IdeaPad S10-2 with a Pixel Qi display, and I run Haiku-OS on this machine.
It’s the best option for working (writing) outdoors.
I don’t think these displays are still being produced, so grab one from eBay while they are still available.
For me personally, I have already solved this “Problem” and built myself a simple “Typewriter” computer that I use daily:
The Wifi is NOT working, which I consider a feature, not a bug.
The Text Editor provided by the default Haiku OS installation is sufficient for me, so that is what I use.
I use git to sync my work, whenever I plug into a physical network.
The OS is super snappy, and I am writing way more without distractions. I recommend building one of these machines to anyone who’d like to go back to a time when computers were simple writing machines.
Corporate email is often based on an Outlook server, and it is becoming increasingly difficult to set up a regular email client that relies on IMAP and SMTP to work with the corporate Outlook server.
Getting two-factor authentication to work is a problem, and for Thunderbird to work with Outlook you need to either:
So yes, getting a plain text email client to work as a corporate developer is a problem, but you might as well blame Microsoft for putting up these barriers in their email server.
I work in a company where the email infrastructure is Office 365. After many interactions with the IT team, I made them whitelist DavMail, which is connected to my Thunderbird, in which I use exteditor to write my emails using, in my case, Neovim.
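A nice side effect of that setup is that DavMail exposes a plain local IMAP/SMTP endpoint, so any standard client can talk to it. A tiny sketch, assuming DavMail’s IMAP listener is on localhost:1143 (whatever port you configured) and with placeholder credentials:

```python
# Quick check that the local DavMail IMAP gateway answers; the port and the
# credentials are assumptions/placeholders, adjust them to your DavMail setup.
import imaplib

with imaplib.IMAP4("localhost", 1143) as imap:
    imap.login("me@example.com", "app-password")
    status, data = imap.select("INBOX", readonly=True)
    print(f"INBOX select: {status}, {int(data[0])} messages")
```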
A friend of mine, who has a day-to-day job as a contributor to a few FLOSS projects, had problems sending email from his company’s infrastructure (also Office 365) and decided to send patches from his private email. Last time we spoke about it, he was about to buy an Owl license.
Thus, I agree with you: Microsoft is the problem here. And the article even touches on that point:
We assumed that Outlook was to blame. Could Microsoft fix that instead? “The question always is fix it to whose standards, because we are focused much more on business and enterprise models of clients and customers. For them we fixed it to a more HTML-based model so it really depends on who your audience is and who your target is.”
It turned out, though, that this time Outlook was not guilty. “I think it was actually Gmail that was a barrier.[…]”
Lastly, I expect people contributing to the kernel to RTFM: there is a page on kernel.org about email clients (for collaboration on the Linux kernel), describing problematic MUAs and even giving basic configuration for some of them. And it is not the only reference on the subject.
So much fuss for something that does not even address the problem of the lack of good maintainers for the kernel. 🤦
Turns out lay people can get through this, if given the right coaching and the wrong information: TurboTax apparently didn’t bother to sign their binary, so the official docs tell people to go down this route D: https://ttlc.intuit.com/community/troubleshooting/help/turbotax-for-mac-won-t-open-when-installed/01/26611
You’d expect that a big company (software company even) would find someone to go through the trouble of properly signing the binary, …
Similar to this: https://marmelab.com/blog/2016/02/29/auto-documented-makefile.html
JetBrains CLion and this video: Refactoring C++ Code by Arne Mertz
Immersion, Repetition, Deliberate Practice, Competitive Programming, Imitation (Fake it till you make it), Code Katas, Refactoring Katas, Code Golf, Object Calisthenics, Memory Palace Method
I use a “Zettelkasten” system with vim and git-sync. Filenames are hierarchical numbers, and I navigate using vim’s “open file under cursor” feature: https://vim.fandom.com/wiki/Open_file_under_cursor https://github.com/simonthum/git-sync
I have a CI job that runs every month to update deps. I’ve been reasonably happy with it.
I’m always wary of giving a CI system write access to actually change the repo. Incidents like the recent CircleCI one come to mind. If the automation just opens a pull request (or equivalent), that seems pretty helpful though.
In practice I’ve found tools like Renovate and Dependabot are almost great, but still have too many small annoyances to be worth the effort over just doing the updates myself. Things like (I forget which one does this) opening one PR per dependency, but then not rebasing the remaining open PRs automatically when one is merged.
Yeah, the bot just opens a PR on the first of the month. I have to go through and manually approve it before it gets merged.
Not to “well actually”, but [it is possible with Renovate](https://docs.renovatebot.com/configuration-options/#rebasewhen) to specify not to rebase all the time.
In that case, I’ll claim it was Dependabot I was having trouble with ;-)
As an example I have set up a repository with both dependabot and mergify, … it works quite well.
dependabot opens an MR, GitHub Actions run the CI workflow, and mergify only merges if it’s green.
Haven’t used that combination for a real project though.
I’ve committed heavily to Renovate at work and it’s paying off. I have ~two dozen repos under my care right now and they have similar but not the same dependencies. Keeping everything relatively up to date is a challenge because of the number of repos and the sometimes long dependency resolution process in Poetry (although it’s faster in 1.4.0).