I don’t think I saw any other system except maybe Tarantool use single commit thread.
Document mentions that single core can handle a lot - which is of course true. Still, did TigerBeetle experience any workloads where commit thread became the bottleneck?
It is a bit too early to think in terms of specific workloads: we think we got performance architecture right, but there’s an abundance of low-hanging performance coconuts we are yet to pick to give the hard numbers from practice.
That being said:
the commit/execute itself, this loop that handles double-entry bookkeeping, was not so far observed to be the main bottleneck
but we are CPU-bound at the moment. The bottleneck is in the in-memory part of LSM tree, when we sort memtable before writing it out to disk. But this is a bottleneck for rather stupid reasons:
the chief issue isn’t as much the CPU time spent there, but rather just the fact that we “block” the CPU for milliseconds, so no IO events can be handled in the meantime.
so the obvious thing here is to make the sort “asynchronous”, such that it looks like any other IO operation — you toss a buffer into io_uring, you don’t touch the buffer until you get a completion, and you are free to do whatever else in the meantime. This will require one more thread, not to get more out of the CPU, but just to make the overall flow asynchronous
then, we can pipeline compaction more. What we do now is:
another obvious optimization is to use the right algorithm — we just sort there for simplicity, but what we actually need is a k-way merge
and, of course, given that we have many LSM trees, we can sort/merge each one actually in parallel. But we want to postpone the step of using more than one core for as much as possible, to make sure we get to 90% of the single core first.
In other words, the bottlenecks we are currently observing do not necessarily reveal the actual architectural bottlenecks!
Personally, as someone who has worked on a multithreaded database -
Using multiple threads or processes to write to the same physical drive is usually just moving the single point of synchronization to the kernel / firmware. How fast that is / how well it works is going to vary from machine to machine, but the answer is almost always “orders of magnitude slower than the single threaded version”. It also means you’ve got to deal with risks like “OS update broke the performance”.
If I couldn’t get enough write performance out of a single thread, first thing I’d try is switching to user-space storage drivers (to ensure no other process was stealing my IO time and reduce kernel context switches). If that still wasn’t enough, I’d want to shard writes to separate drives.
Specifically databases or in general? It is a very established pattern but everyone ignores it and rawdogs threads-everywhere-all-at-once-all-the time-no tooling-or-language-support-because I’m special and this is different.
I really wonder about this… not to say that I disagree but rather that I’m genuinely curious.
First, my perspective is that of a C programmer (with decades of experience and the comfort and biases that come with that).
Now, I have written firmware for an embedded device. For my dev/testing purposes I needed a test rig that I could use to poke and prod the device (via BLE). I prototyped this test rig in python for a bunch of reasons that are unimportant here but a big one was that I found a python module that provided me with a BLE api. One attraction being that it would work across platforms (vs learning the native / proprietary BLE api of each target).
I rather hate python. Not the language per se but the ever shifting ecosystem.
So today, after spending a bunch of time expanding my test rig to deal with some new stuff in the firmware, I find myself wondering if this prototype would be easier to maintain in rust or if I should have started with Rust as the OP suggests. And would it really be worth rewriting? Of course, this is impossible for you, the reader, to answer – but the idea of recoding a couple thousand lines of python in rust (that I’m not any kind of expert in) is quite daunting even if it were possible to objectively decide such a rewrite is a good idea.
I would suggest that porting 2000 lines is not necessarily a particularly big job unless they’ve made heavy use of unusual language features, or lean on libraries which don’t exist in the target language - although porting from one language I don’t know well to another is daunting regardless of the size of the program.
The FOSDEM organizers, insofar as they are FOSDEM organizers, exist for the purpose of organizing FOSDEM. In that role, they have decided that there will be such and such a speaker at such and such a time at such and such a place.
You can disagree with that decision. You can register your disagreement through channels they’ve provided. If that’s proven ineffective, you can disengage with the whole affair. If you don’t like that option, you can register your disagreement by non-obstructive protest, which clearly indicates friendly criticism. Or if you don’t like that option, you can register your disagreement via obstructive protest. But in that case you have declared yourself to desire to obstruct FOSDEM’s reason for existence; it’s not surprising that FOSDEM will in turn assert its will to proceed with itself.
There’s a bizarre trend among vaguely-radical Westerners whereby they expect to be able to disrupt the operations of an organization and not suffer any opposition for their own disruption because they call the disruption “protest”. This is a very confused understanding of human relations. If you are intentionally disrupting the operations of an organization, at least in that moment, the org is your enemy and you are theirs. Of course your enemy is not going to roll over and let you attack it. Own your enemyship.
FOSDEM is volunteer-run, by people who are involved in F/LOSS themselves. It exists thanks to a lot of goodwill from the ULB. Protestors are not fighting the cops or some big corporations, they’re causing potential grief for normal working-class people like themselves.
While the ULB campus always gives me the impression it’s not averse to a bit of political activism, it would be an own goal if some tone deaf protestors were to jeopardise the possibility of future FOSDEM conferences.
Dorsey won’t care, he’ll take his same carefully rehearsed speech and deliver it again at any of the hundreds of for-profit tech conferences. They’ll be delighted to have him. But there’s really only one volunteer-run event of FOSDEM’s scale in the EU.
By all means, boo away Dorsey, but be considerate of the position of the people running this.
Protestors are not fighting the cops or some big corporations, they’re causing potential grief for normal working-class people like themselves.
As a former and future conference organizer who has taken a tiny paycheck from three of the ~dozen conferences I’ve organized but never a salary °, this is 100% true. I’ve been fortunate that my conferences have had no real controversy, but two that did have a bit of a tempest in a teapot were went to two directions. One, the controversy was forgotten in a week, aside from a couple of people who just couldn’t let it go on Twitter. It took about a month for their attention to swing elsewhere. In retrospect, almost a decade later now, we’d have made a different decision, and the controversy wouldn’t have happened. Unfortunately, the other was life-altering for a few of the organizers because of poor assumptions and unclear communication on our part and a handful of attendees who felt that it was reasonable to expect intra-day turnaround on questions leading to a hostile inquiry two weeks before a $1M event for 1,500 people put on by eight volunteers spread way too thin.
I’ve also had to kick out attendees who were causing a disruption. No, man, you can’t come into the conference and start doing like political polling, petitioning, and signature collection, even if I agree with you, and might have considered setting aside a booth for you if you’d arranged for it ahead of time.
As conference organizers, we have a duty to platform ideas worth being heard and balance that with the person presenting them. The most effective way to protest a person or a presentation is not to attend, and the second most is to occupy space in silence with a to-the-point message on an unobtrusive sign or article of clothing. Anything more disruptive, and you’re creating a scene that will get you kicked out according to the customs of the organizer team, the terms of attendance, and the laws of venue if the organizers enforce their code of conduct. I’ve never physically thrown someone out of an event in my 24 years of event organization, but I’ve gotten temptingly close and been fortunate that someone with a cooler head yet more formidable stature intervened (and I was 6’2” 250 lbs at the time!).
° A tenet of my conf organizer org is “pay people for their work.”
As conference organizers, we have a duty to platform ideas worth being heard
Or you can just acknowledge that a person has done enough damage in their life that you won’t let them shout out any other weird takes.
It’s not like FOSDEM is mainstream enough that it needs to have people who are more well-known outside of FLOSS circles to keynote. There are enough figureheads (often people who spent decades of their life doing good things) who would be more well-suited. This is not some random enterprise conference where you may invite any random startup CEO to shill their stuff. FOSDEM should do better. (I remember phk, and it was great)
As conference organizers, we have a duty to platform ideas worth being heard and balance that with the person presenting them.
that’s cool but literally nobody had heard of block’s involvement in open source until this was announced, so i don’t know what ideas you’re referring to
Respectfully I disagree with this. Peaceful protest can be non-disruptive but still effective. If Jack is talking the whole time with people on the stage in protest around him, I think a lot of attendees will inevitably read Drew DeVault’s article and understand his argument.
I was going to point this out, but then I realized the concern is more likely the “presumably” in “presumably being platformed”. In other words, they’re not saying that Block’s sponsorship is a lie, they’re saying FOSDEM did not accept a bribe: Block’s sponsorship is not the reason Dorsey got a keynote as DeVault alleges.
Yes, that could be it. If so, a bit disappointing to see such misconstrual or misrepresentation of DeVault’s clear statement of presumption as a statement of fact. (There’s also a lot of grey area between “bribe” and “total neutrality”. Patronage is a thing.)
Would you agree then with the statement that “presumably DeVault lied when he construed Block’s main sponsorship as the reason Dorsey got the keynote selection”? Would presumably have made @talideon ’s comment acceptable?
The linked article makes it clear that patronage is not in play for this event.
To be clear, in our 25 year history, we have always had the hard rule that sponsorship does not give you preferential treatment for talk selection; this policy has always applied, it applied in this particular case, and it will continue to apply in the future. Any claims that any talk was allowed for sponsorship reasons are false.
Not trying to be adversarial, just trying to highlight how others are reading DeVault’s statement in light of the clear answer from FOSDEM that Block’s sponsorship had no role in his keynote. I don’t care one way or the other about the keynote, have no feelings either way about DeVault (who seems to be the most polarizing figure on this site), and will not be at FOSDEM. But when I read DeVault’s wording, I generally understood he believes FOSDEM accepted Block’s sponsorship in return for a keynote address and “presumably” is there to avoid any legal issues from making such a claim.
Would you agree then with the statement that “presumably DeVault lied when he construed Block’s main sponsorship as the reason Dorsey got the keynote selection”?
No, because there’s no need to presume anything when you can just go look at his words. None of it gets anywhere near “lie”, to me. It strikes me as easy and reasonable to take his words as a true statement of his belief.
[FOSDEM’s very clear statement snipped]
Thanks for pointing that out.
But when I read DeVault’s wording, I generally understood he believes FOSDEM accepted Block’s sponsorship in return for a keynote address and “presumably” is there to avoid any legal issues from making such a claim.
We’re on roughly the same page here. To me, he’s definitely casting aspersions on the integrity of the selection process, though I wouldn’t go so far as to say there’s a clear belief of quid pro quo: this is the bit where the grey area is.
To me, he’s definitely casting aspersions on the integrity of the selection process, though I wouldn’t go so far as to say there’s a clear belief of quid pro quo: this is the bit where the grey area is.
Protesting is the act of clearly communicating that you don’t like something. Effective protests are those where the specific ideas being communicated are convincing enough, or the people doing the protest are important enough or widespread enough, that you take the communication seriously.
Communication takes many forms and some of them are more disruptive than others. But it is a popular fiction that disrupting and shouting down events because you don’t like the speakers is an effective form of protest - in fact there could be nothing more ineffective than associating your side of the argument with something that would make an ordinary attendee annoyed. Only if an ordinary attendee would side against the speaker by default would this be a good strategy.
But when employed, usually this is not about actually communicating that you protest against the thing, it’s about attempting to get your way by force, just with a 1A-tinted veneer. Protesting is allowed as long as it is actually protesting, instead of trying to take control.
This is just another example of the black and white thinking responsible for a large part of the awfulness in the world.
Effectiveness isn’t binary. The allowed form of protesting probably not as effective as it could be in a different form than what is allowed. That doesn’t make it ineffective.
Protesting is making your disagreement and numbers visible. If you’re physically stopping somebody from doing whatever it is they’re planning and you’re against, it is not a protest, it’s just disrupting. And by implication it means you have the power to stop it and are not the oppressed underdog you are likely proclaiming to be.
‘Protesting is allowed as long as it is ineffective’ is an appeal to the moral valence of the particular action of protesting. If you change the meaning of the word to refer to a completely different thing, you cannot keep the moral valence. Physically stopping something from occurring is not something anyone has to tolerate just because you used a certain set of Latin glyphs/mouth-noises to identify it.
Could be, I’m not very pleased with how I worded that so I might not be very clear about what I meant exactly, but it seems disingenuous to go back and edit it, given I’m not sure it’d end up better anyway. I’m trying to point at a meaningful difference between bringing attention to an issue and the size of the cohort in agreement with you, versus unilaterally acting to stop people doing something simply because you wish they wouldn’t. Of course everybody thinks that they’re in the right, therefore their actions are justified, but that can’t really be the case all that often, since everybody thinks it.
I’m trying to point at a meaningful difference between bringing attention to an issue and the size of the cohort in agreement with you, versus unilaterally acting to stop people doing something simply because you wish they wouldn’t.
The latter sounds like direct action. It is widely regarded as a kind of protest. Protests typically involve a small minority of the population deliberately causing a disruption to force a response. Often, a majority of the contemporary population disagree with the aims of protest, even protests that we consider good and/or effective in retrospect.
I’m curious to know why you don’t like Apple laptops. Because it’s a locked-down system, or because of the planned obsolescence? I think these points are valid criticism to iPhones, but Apple laptops are fine in these regards from my perspective. Sure, they could have more upgradability and repairability, but those didn’t bother me that much. (I have been using a 13” MacBook Pro since 2018, and it served me well until recently when it became unbearably slow, so I upgraded to a shiny M4 Pro MacBook Pro to have some fun with local LLMs. Later I took the old MacBook Pro apart and realized it’s slow simply because the fan has gathered a lot of dust, so the CPU basically got thermal throttled to death. In other words, I could have used it even longer by simply cleaning the dust on my own. Either way, I think 6 years is a fairly long time for a laptop’s lifespan!)
Also curious to hear why people like ThinkPad + Linux so much!
This is just me and my problems. I don’t expect anyone else will have these problems, but this is why I, personally, detest working with macOS or Apple software.
Because macOS is a horrible operating system that doesn’t let me fix stuff. The friggin’ OS locks me out of ptracing processes by default, fer crissakes. There is no /proc. Heck, there isn’t even a FHS. Most of the userland is from BSD, not from GNU, so grep doesn’t have a -P flag for PCRE and find doesn’t default to searching the CWD. I know I can fix this stuff with MachoMeBrew, but why would I need MachoMeBrew? It takes so much tweaking to make macOS just work like I want it to.
Because the keyboard is all wrong. I use Emacs. I require ctrl keys in comfortable positions. Most McKeyboards just don’t have right ctrl keys or they have them in weird locations. I use modifier keys on both sides of the keyboard to work with in Emacs. I don’t want an option key. I want ctrl, meta, and super, and I want them in their normal positions.
Because there is no selection like in X. I can’t just highlight and middle click to copy-paste. I have to use the keyboard instead.
Because system upgrades locked my computer for a long time, sometimes over an hour, without any indication of what the operating system was doing. I couldn’t use anything while the upgrade was happening. What nonsense is this? When I’m running apt upgrade on Debian, I can still use all of my programs while the upgrade is happening. I only have to restart processes to reload the new version in RAM, if I want to. And reboots? Again, I can install a new Linux and reboot to the new Linux whenever I feel like getting around to it. It should be my machine, but Apple makes it feel like it’s their machine I happen to be renting while the hardware lasts.
Because Apple wants me to sign up and give them my personal information just to install basic software. Some of this stuff I’m complaining about can sort of be fixed and emulated if you install the right software. But even before you’re even given permission by Apple to install software on your own machine, you have to tell them your name, maybe your phone number, and click “I agree” multiple times on many piles of unreadable legalese.
It’s just a mess I don’t want to bother with. Give me a Linux, give me open source, give me free drivers, give me the right keyboard layout, give free licenses, not EULAs.
What is this Planned Obsolescence that I keep hearing about? Do iPhones or Mac stop working after a few years? Or have we set some unrealistic expectation that Apple should support hardware to infinity and beyoooond?
I think it’s mostly about not allowing old devices to upgrade to the latest operating systems even if they are perfectly capable, but at least for macOS, you can bypass this restriction with tools like OpenCore Legacy Patcher. Apple also made old phones with degraded batteries slower via software updates, but that was a few years ago.
The oldest supported Mac running macOS 15 is now 8 years old.
The reason I always get a little upset is that people have this unrealistic expectation that Apple somehow must invest and do far beyond what is reasonable. Why is that?
And why is it immediately assumed to be malicious intent? Because that is what “planned obsolescence” really means: that they sat in a meeting room in 2017 and said “we are are not going to make macOS 16 work on this hardware so that people will have to buy our new stuff in 2025! Haha!”.
For anything else it is just software and hardware that becomes unsupported over time, like 99% of this crap we deal with in this industry. Which we accept because the engineering and qa burden to keep things working is huge. Specially compared to where your users are. (They are not on hardware from 2017).
But … Apple is somehow special and must do this on purpose.
My old MacBook Pro is a 2017 model (A1708), which doesn’t support macOS 14, but I was able to install macOS 15 on it with OpenCore Legacy Patcher, and it works perfectly fine. Reportedly, you can even install macOS 15 on devices dating back to 2007. From my understanding, the new software “just works” on old devices without extra engineering investments (otherwise the patcher wouldn’t work so seamlessly), but Apple is putting efforts into preventing users from installing new software on it, and that is not cool. I mean, there’s a difference between “upgrade if you want to, but don’t blame us when things break because it’s unsupported” and “you are no allowed to upgrade”. On the other hand, nobody would/should complain about Apple Intelligence not being available on Intel macs: these doesn’t have the required hardware, so that’s an unrealistic expectation as you pointed out.
Meanwhile, maybe Apple bans these software upgrades simply because they don’t want to deal with bug reports from unsupported devices? Moreover, you can always use the patcher to bypass the hardware check, so yeah, I wouldn’t say planned obsolescence is a good reason to hate Apple. For me it’s more of a minor nuisance that can be easily overcome. That’s why I was asking JordiGH why he hates Apple laptops.
8 is a bit short. I replaced my MacBook Pro a bit over a year ago. The old one was ten years old and still working fine. It was faster for most day-to-day things than the Surface Book 2 that Microsoft had given me for work. It had a 4-core Haswell (8 Hyperthread) CPU and 16 GiB of RAM, which is ample for the vast majority of things I do (compiling LLVM is a lot faster on the new machine, as is running place-and-route tools, everything else was fine with the old one).
I believe they dropped support for anything without the Secure Element chip in the last update, which is annoying but understandable. I wouldn’t be surprised if the last x86 Macs have a much shorter support lifetime than normal because dropping x86 support from XNU will save a lot of development effort. A lot of people complained when they dropped support for the original x86 Macs, but given that they’d already started the 64-but transition with the G5 it was obvious that the 32-bit x86 Macs were a dead end, which is why I waited until the Core 2 came out. That machine lasted until it was much slower than the replacements. It was still supported when I replaced it with the Sandy Bridge model (which had two unfortunate encounters with a pavement and ended up being retired quite quickly).
My first Mac was a G4 PowerBook and, back then, a three-year-old computer (of any kind) was painfully obsolete. Most companies did 3-4 year rolling upgrades. Now, that’s been extended to 7 in a lot of places and even then it’s eligible for upgrade rather than automatic because a seven-year-old computer is often fine. I basically use computers until they wear out now. The performance difference between a modern Intel chip and one from two years ago used to be a factor of two, now you’re lucky if it’s more than 10%, so the need to upgrade is much less.
I’m less annoyed with this on Macs because the bootloader is not locked and, if macOS is unsupported then the device can have a second life running something else (even the Arm ones now have nice Linux support). It’s indefensible for the iPhones, where they just become eWaste as soon as Apple stops providing updates because there’s no possible way for third parties to support them. An iPhone 7 would run modern Android quite nicely if you could unlock the bootloader.
“Made old phones with degraded batteries slower via software updates” is literally true, but a tad misleading.
The software update started tracking what time the phone usually gets charged, and clocked the CPU down if it weren’t going to last until that time.
I had an affected model, and having my phone suddenly slow down definitely sucked, but having it start lasting until I got home wasn’t a bad thing to gain in return.
I’d agree that it should have - at minimum - be something you can disable.
That’s not what they lost in court for. They were reducing the clock frequency to make batteries discharge slower when the maximum charge capacity dropped. Rather than seeing short battery life and getting a replacement battery (which was often covered by warranty). people would see a slow phone and buy a new one.
This settlement is why iPhones (but not iPads) now have a battery health UI in settings: so you can see if the battery is holding less than 80% of its rated charge and replace it, rather than the whole phone. The iPad does not have this because it was not covered by the settlement, which was specifically a class action suit by iPhone owners.
Yes, and if it had caused the voltage to drop and reboot, people would have taken them to the shops and discovered that the battery needed replacing, which was covered by the warranty or consumer-rights law in a lot of cases.
That’s true but the decision-makers at Apple had no notion that it was the case and didn’t factor the extra income they made into their decision-making /s
For what its worth, apple slowed down the CPU of old iphones (I believe 4-5 years at a time? Most other vendors would simply not care about such an old device), because with degraded batteries they were prone to random restarts (CPU would have needed higher voltage than the battery could provide).
This wasn’t communicated and they got fined (in France), but if only apple actually communicated better, this whole fiasco could actually have a positive spin (company fixes bug in 5 years old device). In the end though, it became a user-selectable choice, so best of both worlds.
It’s not appropriate to connect systems that don’t get security updates to the Internet, so in that sense, yes iPhones and Mac do stop being suitable for the tasks they were previously suitable for after some years.
In the case of iOS, Apple’s track record is much better than competitors’. In the case of macOS, Apple’s track record is worse than competitors’, so I think it’s quite justified to complain about the macOS situation.
(The above is deliberately phrased in terms of track record: going forward, there are the twists that Samsung and Google are getting better on the mobile side and Microsoft is getting worse on the desktop side.)
In the past 6 months or so, I have given away 3 Penryn Macs with a working Wayland Linux environment on them and I have installed Ubuntu on two Haswell Macs that are staying in the extended family. The Haswell Macs would have worked in their previous role just fine if macOS had continued to get security updates: The move was entirely about Apple’s software obsolescence. The key problem afer the switch to Ubuntu is that iCloud Drive and Apple Photos don’t work the way they do on macOS (you can get some access via the Web at least if you don’t have the encryption enabled).
The way hardware progress has changed means that N years before Haswell and N years from Haswell onwards (on the Intel side) is very different in terms of what hardware is quite OK for users who aren’t compiling browser engines. It doesn’t feel reasonable to treat Haswell hardware as obsolete. (FWIW, Adobe raised the requirement for a prosumer app, Lightroom, to Haswell only late last year. That is, until very recently a prosumer subscription app supported pre-Haswell hardware.)
In the case of macOS, Apple’s track record is worse than competitors’, so I think it’s quite justified to complain about the macOS situation.
Sorta, I guess. Apple supports the entire laptop for at least 6 years from the day they stop selling it. Not just MacOS, but hardware too. MacOS just gets rolled into that support.
It’s hard to find any PC vendor willing to support a laptop past 3 years. Many , by default, come with 1yr of support(for varying definitions of support) in the best case.
If you buy an Apple laptop, you know you should be able to keep it supported and working without too much hassle for 6 years. When you buy a Dell or Lenovo, you don’t have any idea how long it might last.
Generally trying to get repairs for consumer grade laptops from any vendor other than Apple is usually annoying at the very least, if not impossible, regardless of warranty status. For business grade laptops, as long as you paid extra for the support, you can usually get repairs done for 3 years. Past 3 years, the answer is almost always: NO.
Even in server/enterprise land, it’s hard to get support past 5 years for any server/switch/etc.
Either way, I think 6 years is a fairly long time for a laptop’s lifespan!
I find it interesting how wildly ppl’s expectations of laptop lifespans seems to differ. Just here in the comments the lifespans people are happy with seem to range between 3–4 and 10 years. Some of this is probably differences in usage patterns, but it’s wild to me that we’re seeing over 2× differences.
At the moment my “new laptop” is a 2017 Surface Pro I got used two years ago, and save for some games it handles pretty much everything I do without any issues.
My previous laptop is now 12 years old. It has a quad-core (eight-thread) Haswell 2.something GHz processor, 16 GiB of RAM, an 1 TB SSD, and a moderately old GPU. You can buy laptops today that have slower CPUs, less RAM, and smaller disks. I think the worst Intel GPU is a bit faster than the NVIDIA one in that machine. If the old machine is obsolete then people are buying brand new machines that are already obsolete and selling them should be regarded as fraud. It isn’t, because they’re actually fine for a lot of use cases.
It’s gone from being a top-of-the-line machine to one that’s a bit better than bargain basement in that time.
My “Late 2010” 11.6” Macbook Air is still going. I had to replace the original battery last month.
Note that I am actually able to use that one for (light) development purposes.
It now has an aftermarket larger SSD. So 4 GB RAM, 240 GB SSD, 1.4 GHz Core 2 Duo.
That said, it is no longer my main machine. I prefer my Framework 13 which is roughly the same size, but but with a larger screen.
I have a very high end X1 extreme through work (i7-11th gen something, RTX 3050, 64GB of RAM). It’s running windows ATM, but I’ve had Linux on it, and let me tell you it does not compete. It gets insanely hot and the battery lasts AT BEST, if I really try, 3.5 hours. This is not better on Linux either.
The macs easily last an entire day of work on a single charge, while I can’t even work for a full afternoon. It’s not even close in terms of convenience.
There’s also just annoying lack of attention to detail. One of the most annoying misfeatures of this laptop is that if you charge it with anything weaker than the included 170W (!!!) charger (say, from the hub in a monitor, since IIRC USB couldn’t do that much power when this was released, and it’s not really needed if it’s plugged in all day), it pops up a BIOS error saying that the charger is below the wattage of the included one. This is only shippable by pressing the ESC key, and AFAIK it is completely impossible to disable. This is very early in the boot process so the CPU is still stuck at 100%.
I have been woken up SEVERAL TIMES because windows decided to update in the middle of the night, so it then rebooted, and got stuck in that screen with the fan at 100%, because it was plugged into “only” a 95W charger.
This is so dumb. You can just tell no thought was put into it, especially because it happily updates on battery with no warning, too. If anyone happens to know how to disable this LMK because I’ve just resorted to leaving the laptop unplugged from my dock or plugging in the bulky included charger along with it, which kind of ruins the point of having a single cable.
Oh, just thought of another one: I had recently started having some issues with the laptop shutting down if I put it in hibernate mode (so all state was lost).
Turns out, it’s because it does not have 64GB of storage free to persist the memory, so it just did not work and shut down instead. But it did not tell me! I had to dig through forums to find that out. How hard is it to just disable it if the free storage is less than the amount of RAM?
Tbf that’s mostly on windows, not Lenovo, but gah. It’s just bad UX.
gets insanely hot and the battery lasts AT BEST, if I really try, 3.5 hours
Sounds like a dGPU (mis)management thing mostly..? My AMD-based L14, despite a small battery capacity (like barely 50Wh or something) easily lasts for 5-7 hours of coding and hanging around online, while staying cool and (with thinkfan) quiet.
Unfortunately I work with CUDA quite a lot, not for anything super intensive so the 3050 is fine, but enough that I can’t just completely disable the GPU and be fine. If I’m just browsing the web or something though back when I had Linux disabling the dGPU easily doubled the battery life.
I need to look better into this, I’m not sure if there’s some way to have it turn on on-demand on Windows.
I rock(ed) one for many years, and frankly.. no. They have CPU throttling issues, and their battery lives are nothing to be happy about.
I am most definitely not an Apple fan, but the M-series Macs are definitely a paradigm shift in that laptops, for the first time ever, are not just desktop PCs with uninterruptible power supplies that last for the duration of going from your home to work, when you have to plug it in again.
FWIW even my 2023 Thinkpad X13 Gen4 (AMD) lasts a whopping…. 3-5 hours on battery, if the moon is in the right phase and I don’t sneeze too hard. I’ve gotten as little as 2.5 hours of web browsing and terminal use out of it on a bad day, and my max, ever, was about 6.
Sure, that’s not nothing, and that’s more than “from work to home” (I guess - I don’t commute anymore), but it doesn’t survive a full flight between Seattle and Chicago, and that’s my benchmark for “good battery life”.
My 16” Lenovo Legion Pro 5i (2023) with 24 core i9-13900HX lasted six hours on battery in Windows 11, doing text editing and web browsing and short compiles (few seconds). I kicked the Windows off and put on Ubuntu 24.04 and battery life is now 5 hours. Which is still more than I need.
Yeah, I bet the top end MacBook Pro lasts a lot longer, but then this cost me $1600 (incl tax and shipping) while an equivalent MBP with M4 Max costs $3999 plus tax.
I liked it, too, until I saw hell with my P53. Nothing to do with Linux, it was just a very bad purchase. Worked ok within the warranty, and now a mere 6 years later (!), it is full of hardware defects, crawling on the best it can. Six years with this kind of behavior would be unthinkable for old IBM or even early Lenovo Thinkpads. Sadly, I have no idea what to recommend instead. All things considered (Framework etc), they still seem to be the best. Of the worst.
The problem with post-IBM ThinkPads is that Lenovo has no attention to detail. They have good designers but they are spread thin across a gazillion devices. It’s impossible to do a great job in those conditions. Different models have different flaws, like fan noise, bad panels, etc. They should streamline their offering and stop trying to copy some Apple features that are not aligned with their ethos.
Fan noise is fixable (just take control from the OS with thinkfan), if that’s the issue I’m thinking about (shitty fan curve in firmware that doesn’t go silent on idle).
What’s not fixable is the shitty firmware bugs. My L14gen2a doesn’t like staying asleep and just wakes up randomly for no reason a lot, and sometimes the keyboard controller hangs with a pressed key (one key gets logically “stuck”, other keys stop responding – only fixed by a sleep-wake cycle).
Before anyone says Apple is so much better though: that exact same keyboard controller issue happened to me back in the day on a 2010 MacBook Air, at the worst moment possible… I was playing Dungeon Crawl Stone Soup. You can imagine the outcome.
Fan noise is fixable (just take control from the OS with thinkfan), if that’s the issue I’m thinking about (shitty fan curve in firmware that doesn’t go silent on idle).
I was referring more to the lack of sufficiently good cooling hardware in some ThinkPad models. They have so many SKUs that heating designs, pipes and fans are not thought or tested carefully in some models. Others are great.
So it’s not that I worry that my concurrent code would be too slow without async, it’s more that I often don’t even know how I would reasonably express it without async!
Threads can express this kind of stuff just fine, on top of some well-known synchronization primitives. The main thing that async gives you in this sense, that you can’t build “for free” on top of threads, is cooperative cancellation.
That is, you can build patterns like select and join on top of primitives like semaphores, without touching the code that runs in the threads you are selecting/joining. For example, Rust’s crossbeam-channel has a best-in-class implementation of select for its channel operations. Someone could write a nice library for these concurrency patterns that works with threads more generally.
And, if you are willing to restrict yourself to a particular set of blocking APIs (as async does) then you can even get cooperative cancellation! Make sure your “leaf” operations are interruptible, e.g. by sending a signal to the thread to cause a system call to return EINTR. Prepare your threads to exit cleanly when this happens, e.g. by throwing an exception or propagating an error value from the leaf API. (With a Result-like return type you even get a visible .await-like marker at suspension/cancellation points.)
The later half of the post takes a couple of steps in this direction, but makes some assumptions that get in the way of seeing the full space of possibilities.
For example, Rust’s crossbeam-channel has a best-in-class implementation of select for its channel operations. Someone could write a nice library for these concurrency patterns that works with threads more generally.
Thread cancellation is not realistically possible in most real-world code, unfortunately – see the appendix in my recent blog post that was on Lobsters.
Make sure your “leaf” operations are interruptible, e.g. by sending a signal to the thread to cause a system call to return EINTR
This is not possible on Windows, as far as I’m aware – this may or may not be an issue depending on the platforms you’re targeting, but it would be a shame to lose Windows support just because of this.
The most important thing, though, is that async Rust allows you to select heterogenously over arbitrary sources of asynchronicity, and compose them in a meaningful fashion.
Key to this is the notion of a “waker”, i.e. something that you can register yourself with that can wake you up when your operation is done. This is very general idea, and the async runtime can provide drivers for whatever it wishes to support.
I wrote a post a few years ago about why and how nextest uses Tokio, that goes quite deep into operations that would blow up the complexity of thread-based concurrency to unmanageable levels. The state machine has gotten much more complex since then, too, with over 50 states per test at this point. An alternative that might work is a unified message queue, but managing that would also be a headache.
Async Rust is in very rarified company here. As far as I know, the only other environment which supports this is Concurrent ML.
Great articl! It aticulates why I always feel much more productive with tokio/async compared to plain threads (even with the additional pain async can bring somtimes).
It’s something I have experienced/felt but was never able to articulate well. Now I will refer to your blog post as an example.
Yes, the bulk of my post was about what you need to do to get thread cancellation, and how to disentangle it from async/await as a whole. (At the same time though, you don’t need thread cancellation for a general select operation, either- you only need one if you intend to use it to implement things like timeouts.)
Assuming you are willing to use a new set of APIs to get thread cancellation (which, again, you are also doing if you are using async/await) you can get cross-platform, waker-based, heterogenous, extensible selection/composition/cancellation using threads and blocking instead of a state machine transform. This is essentially switching out the memory management style of the async/await model, while porting over the other parts of the Future::poll contract that enable cooperative scheduling and cancellation.
What value would that get you over the current async model? The current model is decoupled from threads so works in places without them, like embedded environments.
The usual benefits of stackful coroutines and/or OS threads: you can suspend through functions that aren’t async, you don’t have to think about pinning, thread-locals and Send/Sync work more “like normal,” debuggers already know how to walk the stack, etc.
To be clear, I’m not trying to make any sort of argument one way or another about which approach people should use. I’m just pointing to another part of the design space that could also provide the nice non-performance-related features described in the article.
you can suspend through functions that aren’t async
But, well, you can’t do this with arbitrary synchronous code that isn’t cancellable, right? Unless I’m missing something. Maybe we’re talking at cross purposes here.
Async cancellation is a real mess, but at least it only happens at yield points. I’ve tried to reason about making arbitrary code cancellable, and it seems very difficult.
I’ve tried to reason about making arbitrary code cancellable, and it seems very difficult.
I spent a decent number of brain cycles in grad school looking at this problem and the conclusion I came to when you factor in concurrency (locks, condition variables, etc) is that it’s essentially equivalent to the Halting Problem. Being able to statically determine that cancelling at any arbitrary point in time leaves the system in a defined valid state is, I would conjecture but don’t have a proof for, impossible. If you were to make a language with enough specific constraints (similar idea as the borrow checker in Rust) you might be able to do it but it’d be quite restrictive. It’s a quite interesting thought exercise though to think about what the least restrictive approach to allowing that might look like.
Yeah I do think Rice’s theorem applies to cancellation. But in many cases you can do useful over or under approximations — that seems really difficult with cancellation.
But, well, you can’t do this with arbitrary synchronous code that isn’t cancellable, right? Unless I’m missing something. Maybe we’re talking at cross purposes here.
Async cancellation is a real mess, but at least it only happens at yield points. I’ve tried to reason about making arbitrary code cancellable, and it seems very difficult.
There are a couple of things going on here: first, the raw ability to suspend and/or cancel through a function without giving it a distinct color; second, the higher level question of whether that’s something you can do correctly.
If you don’t want cancellation, or you do but you control all the code involved, the second question kind of goes away. Otherwise, I agree this is a thorny problem, and you’d want some kind of story for how deal with it. Maybe you lean on your language’s notion of “exception safety;” maybe you use a Result-like return type to get a visible marker of cancellation points; etc.
Oh, because thread stacks are already immovable?
Right, exactly. And in the spirit of filling out the design space… an async/await-like language feature could get away with this too, if it exposed raw Future objects differently than Rust. As an extreme example, C++ does this by simply heap-allocating its coroutine frames “under the hood” and only ever letting the program handle them through pointers. But I could imagine some other points in between Rust’s and C++’s approaches to resolve this.
I would only add that the other big benefit of async/await is that it is one way of introducing a typed distinction between a function which synchronizes with a concurrent process and one which doesn’t; if your language doesn’t permit blocking synchronization APIs (unlike Rust, unfortunately) it gives you meaningful distinctions about purity that I think are very valuable to users.
Threads can express this kind of stuff just fine, on top of some well-known synchronization primitives.
As someone who has spent years working with raw threads & designed multiple concurrent algorithms: I’m yet to meet anyone who can reliably get raw thread synchronization right.
IMO the significant thing which async rust gives you is a compiler error whenever you do something that could race.
IMO the significant thing which async rust gives you is a compiler error whenever you do something that could race.
Eh? Rust doesn’t error for race conditions, and while it does prevent data races, it prevents them just as much for multi-threaded code as it does for async code.
This is only true in environments where management undervalues glue work.
The corollary is: if you’re a manager who wants all your teams to operate with increased efficiency, find the people who do good glue work, encourage them to do glue work, and socially reward it.
That is, set up an environment where the sort of tactical advice in this article doesn’t apply.
Also: watch for the pathological version of this, which is “everyone messing about with pet projects which are justified as glue work”. That is, ensure that as far as possible the benefits of glue work are seen in metrics that are genuinely valuable.
Note that glue work generally won’t be individually visible to metrics, but will be at a project level, over time.
This is only true in environments where management undervalues glue work.
I think another problem is load balancing: when a team becomes more efficient, a manager up the chain shifts more work to that team. If the new load is too high, the team can become inefficient again. Over time, the team might realize that they can’t really get ahead.
If the team or some manager in the chain is good at setting boundaries this might not be a problem.
The corollary is: if you’re a manager who wants all your teams to operate with increased efficiency, find the people who do good glue work, encourage them to do glue work, and socially reward it.
You didn’t explicitly say this, but you might have meant it: socially rewarding the glue work should mean that the team does not get too much extra work just because they are efficient. Otherwise, the team will backslide eventually.
Also: watch for the pathological version of this, which is “everyone messing about with pet projects which are justified as glue work”. That is, ensure that as far as possible the benefits of glue work are seen in metrics that are genuinely valuable.
Agreed. This feels like the pathology of “good” engineers. They are efficient at their main job, but it doesn’t necessarily produce any more than a bad engineer because the spare capacity is spent on side projects.
To be clear, ultimately, it’s still better because you have some speculative work that might pay off, people working on pet projects that should increase morale, and there is clear slack in the system if the main projects need more attention.
If the team or some manager in the chain is good at setting boundaries this might not be a problem.
Yup - managing up in this case is really important. But also important is team autonomy: are they really setting their own goals, managing their budget, etc.? Does the exec team / board trust that they’ll set the balance accordingly, and that you’ll step in to help steer if needed while they adjust?
Because a skilled, autonomous, team will adjust themselves to sensibly take advantage of increased efficiencies.
Also relevant is theory of constraints, coupled with proper measurements of throughput (“done is running in production / in the hands of users”, etc.). Because if you know what your team utilisation is, it’s a lot easier to have those managing-up conversations.
You didn’t explicitly say this, but you might have meant it: socially rewarding the glue work should mean that the team does not get too much extra work just because they are efficient. Otherwise, the team will backslide eventually.
Yes! Most companies socially punish glue work, and socially punish the consequences of doing it well.
Agreed. This feels like the pathology of “good” engineers. They are efficient at their main job, but it doesn’t necessarily produce any more than a bad engineer because the spare capacity is spent on side projects.
To be clear, ultimately, it’s still better because you have some speculative work that might pay off, people working on pet projects that should increase morale, and there is clear slack in the system if the main projects need more attention.
Yup, it’s a difficult line to walk. One bit of advice I give all new managers is that getting runs on the board early is super important … build some social capital to spend on covering fire.
Edited to add: If there’s one thing that managing up has entailed for me over the years, it’s explaining to execs without serious management experience (this happens more often than you’d think!) that 100% utilization is Bad. I literally still have dreams about these conversations a year after changing jobs.
I think there’s a strong correlation between “your peers (not superiors) think you do a good job despite having fewer tangible commits” and “this is useful and appreciated glue work”, esp. re: your pet project argument.
I work at a megacorp, and the question to our team (devops) is always about what products we can create and market to present our outputs. Not metrics about build or deployment times, efficiencies gained over time in the dev cycle, etc. It’s all about branding and it robs me of my will to continue working.
to be fair those metrics sound horrid too (Goodhart’s law applies), and in my view a “devops team” (if that truly is the case) means you’re not doing devops properly
By that metric, no one is and you’re engaging in a No True Scotsman fallacy. What metrics do you think are good to measure? What do you think people who focus on builds and deployments should be doing if not trying to improve how efficiently and reliably their software runs?
Assuming /u/aae is referring to the original devops practices when they refer to “doing devops properly”:
Early devops was a rejection of separating “people who focus on builds and deployments” from “people who write the software”.
15 years ago, “devops” meant regular application developers were paying attention to “how efficiently and reliably their software runs”, instead of having a separate team do so.
Having a separate “devops team” means the application developers can leave thinking about that stuff to the devops team, which is roughly the polar opposite of the early devops movement.
Also worth pointing out that by default flamegraphs only show on-cpu time (i.e. your application/kernel running code at the time the sample is taken). That is not the whole story, if the application/thread is asleep waiting for something and doesn’t run on any of the CPUs at the time the sample is taken it won’t show up at all. To see them you need to use “off-cpu” flamegraphs.
I once found a literal “sleep” in the code deep in a 3rdparty library that way (it was PAM, and it kept loading/unloading the crypto library every time, triggering it’s initialization code many times, which had a ‘sleep’ inside it as it was too early for pthread_cond to work. More modern Linux distros don’t have this problem anymore since they switched to libxcrypt).
Flamegraphs can actually help visualise anything where you can produce a weighted frequency for a given stack! You can also do things like trace disk I/O or memory allocations, using the size in bytes as a weight, to get interesting visualisations as well.
I never thought about measuring things other than runtime! I’ll have to keep this in mind; stuff like profiling memory allocations seems like it could be really handy.
Is there a good guide for how to build up the flamegraph data with custom metrics like that? I must admit I have relied on “tool spits out stuff for flamegraph.pl consumption” and haven’t thought of it further
You underestimate the power of laptops on planes crossing the international date line. 24 hours in one timezone may be a limit, but you can certainly squeeze more than 24 hours out of the same calendar day if you are clever.
Wasn’t there just a story posted about how to think about time recently? Borrowing some language from that, a duration of over 24 hours would be a bug, but a period of ver 24 hours would not be.
I don’t think that’s true; a duration can be over 24 hours (or 86,400 seconds), but you’d still be expressing it (ultimately) in something that reduces to seconds, and that doesn’t have any date- or calendar-like component.
You can have a duration of 168 hours (= 7 * 24) between two instants, but if a DST changeover occurs during that duration in some region, a 7 day period starting at the datetime corresponding to the timezone in use at that region at the beginning of the duration won’t end at the same instant as the duration when converted back at the timezone in use at the end; you’ll have gained or lost 3600 seconds somewhere.
The article linked talks about physical time vs civil time, and goes on further to define a duration as something that happens in physical time (where things like leap seconds etc don’t exist), and that their analog in civil time is the period (where things like what you’re talking about, like DST changeovers, do exist).
Having said that, my logic is still flawed because the day is a civil time construct, so there are definitely times where it’s period (not duration; that’s physical time) is not exactly 24 hours.
I was using the precise terminology offered by the linked article very carefully, yes; hence referring to durations between instants, and periods with datetimes. To reiterate my position, a duration can be over 24 hours, much as the linked article says:
Warning: in this context, units like “minute” are fine, but larger units like “month” or “year” have no precise meaning. You should avoid them, but it’s also reasonable to use one for approximate descriptions only. Someone has to choose an arbitrary value for it, in terms of one of the precise units (for example, for a year, it might use 525,600 minutes).
Hence my speaking of a duration of 168 hours between two instants, and that a DST changeover occurring during that duration in some region, a 7 day period (civil) may not always have a corresponding duration (physical) of 168 hours, even though it might usually, because a DST changeover (like moving an hour forward or backward) would result in that (civil) period having a (physical) duration of 167 or 169 hours.
Your logic was flawed not because a day is a civil time construct (because you didn’t actually invoke the concept of a day, only hours), but because there’s no reason you can’t measure physical durations longer than 24 hours. Like the article notes, you can use 525,600 minutes (or 8,760 hours, or 31,536,000 seconds), whatever.
Ah, yep. So, if we take the example of somewhere that moves backward an hour at 3am, the day (civil period from midnight to midnight) has a duration of 25 hours (because you physically would experience 25 hours passing from midnight to midnight), and a period of … well, a day; it’s hard to say how many hours the period has, since it depends on your treatment of the matter, but maybe 24.
Right! I’m not really chastising you. If your unmaintained code turns out to be worth anything then it would be beneficial to the OSS community to fork and maintain it.
You only made the mess for yourself. And maybe it isn’t even messy for your purpose. Why clean up someone else’s mess, just because you decided to share knowledge for free?
You can still be a good communicator about it, though. (The social responsibility part.)
Realistically I think the “right thing” here is very varied, and very dependent on the type and size of the project, whether it’s got any sponsorship or corporate backing, whether it was ever ‘promoted’ in any way as a solution to anything, current number and mental and physical health of the maintainers, etc. Plus the ultimate “does the bug and/or the solution look simple / obvious / interesting” - and if it’s reasonably likely to affect many users, cause data loss or security breaches, etc. This is the ultimate “one size does not fit all” issue!
So if I report a bug in systemd with a reasonable amount of detail that’s likely to bite other folks, I’d hope for a bit of help (and in my actual experience, have a good chance of getting it.) On the other hand, individuals who’ve created something for themselves that they think others might find useful should absolutely not be deterred from just dumping it on github and then forgetting about it!
Why do these tools insist on taking over schema definition? This never works well in practice. I want a type-safe orm that will use standard ddl and generate whatever it needs from that.
I applaud type-safe approaches. It should be possible to statically verify your queries and results.
other tools/languages. It is very rare that particular orm backend is the only thing that reads/write data.
multiple environments with different schema versions (or even partially migrated or incompatible)
coupling of application and data management. This gets especially difficult considering above.
richness of real-world ddls (including platform specific definitions). Orms rarely have capacity to declare everything I can manually
when things hit the fan it is extremely difficult to reconcile/diff your code and your real production db backup from 3 weeks ago
So insisting on having full master view of your data schema in code works only for simplest of projects imo. Not when you have dozens of developers adding tables/fields daily and somebody trying to run analytics and someone else to migrate this data to/from another store.
I see. The main one that seems like an issue to me would be the second. Personally I don’t really use ORMs ever so that’s why I asked - I don’t know what problems they cause because I never saw much value anyway.
Oh, interesting. How would you avoid downtime without an orm? I assume you just run migrations independently from the application so you’re not in a “application will start once migrations are done” state?
I have seen one codebase which used named stored procedures for pretty much every query.
It actually worked better than it sounds (they had good tests, etc) - migrations which changed the schema could also replace any affected procedures with a version that used the new tables.
Not sure I’d want to use that approach, but it kept a nice stable api between the application and the database.
I personally think this is the best general approach for DB interfacing, with versioning applied to the named stored procedures for when their API’s need changing. But avoiding downtime when migrating also means just being really careful with what the migrations are even doing in the first place.
There could be an argument for the idea that an automated migration system could automatically write less intrusive migrations than an average naive developer might, but I haven’t seen this borne out in practice.
How you run migrations isn’t necessarily tied to what ORM you use. Generally speaking, a high-level overview of the approach taken is as follows:
Deploy code changes to some host dedicated to running migrations (this could just be a CI server)
Run the migrations
Once done, deploy the code changes to your servers
Optionally, run some post deployment cleanups
There have been various ORMs over the decades that provide some sort of “automatically migrate the DB to the current code based schema” feature. I’ve however never seen or heard of one being used at any noteworthy scale, likely because migrating data is a lot more than just “add this new column”.
Another limitation of most of these migration tools that I didn’t see problematic before “the trenches” is linearity. Something like this has happened at $work-1:
migration 421 has been applied everywhere but becomes problematic in production. Because “very important client” the ops team keeps pushing and the solution always seems around the corner
couple days later it becomes apparent that we’re hitting some internal Aurora limitation and things need to be reworked drastically
meanwhile prolific developers have already contributed 20+ migrations that have all been applied to all staging and what not environment asking when they can run theirs
So you’re in a hard spot regarding what to do. If I recall we made 421 a no-op and people had to manually clean up the mess.
It’s a mess, I believe relational data models and 100% uptime across upgrades are fundamentally not compatible. In general I’m not convinced loose coupling of the schema and the application is even possible to do sustainably at scale. You can try double-writing relational data with both the old and the new schema but it’s not really a scalable approach for larger applications, it’s just too hard to correctly do especially when critical data is simply unrepresentable in the old schema.
I suspect this is a big part of why nosql took off. If you store things as JSON objects you at least have a really janky way to do evolution. You can also store events you receive as JSON and reprocess them with both the old and the new application versions, modulo dealing with updates to the JSON schema itself (which might be easier in practice).
In our experience, it takes a strong team, but it can be done. Generally, you consider the DB an application all its own and treat it like it is. You use something like PgTAP for testing, you have DDL checked into a VCS like git along with something like Liquibase to apply and revert. You have dev and prod environments, etc.
To avoid the thrashing of adding and removing columns all the time, We add an ‘extras’ JSONB column to every table where it might remotely make sense(which is most of them). This way app’s and users can shove extra stuff in there to their hearts content. When it’s needed outside of that particular app, we can then take the time to migrate it to a proper column. We also use it all the time for end-user data that we don’t care about.
Always type and constraint check columns at the DB level too. REFENCES(FK’s) are the bare minimum. CHECK() and trigger function checks are your friend. This forces applications to not be lazy and shove random junk in your DB.
We also liberally use views and triggers to make old versions exist as needed until some particular app can get updated.
Use the built in permission system. Ideally every app doesn’t get it’s own user to the DB and instead logs in as the end user to the DB, so we can do row and column level access granularity per user even through the application.
We also make a _nightly copy of the DB available, which is a restore from last nights backup put into production. This makes sure your backups work and gives devs(of DB or app variety) access to test stuff being mean in production, without having to actually abuse production. Consider an _hourly if you need it too.
This is mostly PostgreSQL specific, but similar techniques probably exist in other DB’s.
Yeah I tend to use nosql other than for really, really simple stuff in postgres where migrations are uncommon. But at work we use Rails so I see tons of model updates, but I haven’t done it much myself.
It is very rare that particular orm backend is the only thing that reads/write data.
multiple environments with different schema versions (or even partially migrated or incompatible)
I do understand that if you’re in this situation ORMs are going to mess you up in a lot of cases. But there’s a lot of systems out there with a single database, all written to by a single piece of software (sometimes even just a single instance of that software!) where these issues just don’t show up.
I do think there’s a big gap in what those teams need and what you need.
The pricing was already getting untenable for many workloads when Salesforce took over (AFAICT they basically never dropped their price / compute as everyone else did), but the shift from “best dev experience available” to “handy way to build salesforce apps” really ran it into the ground.
The openSUSE project requires spec files to have a license header. It assigns the same license to its RPM spec files as the license of the package itself. So, if a package is GPLv3-licensed, the spec file is also considered GPLv3. The exception is if a package does not have an open-source license, then the spec file is assigned the MIT License.
It seems to be incredibly inconvenient for everybody involved. Is it implied that due to the GPL linking clause, this is (in their opinion) what you have to do?
It’s a really strange choice because the spec file doesn’t link against anything in the code. It’s not to different from a textual description of the build process.
It’s also problematic when the package license changes. Suppose a package with a sole maintainer (who can just do it single-handedly) changes its license from GPLv3 to MIT. Suppose the spec has multiple contributors who all need to agree to any potential license change. Now the spec is stuck with GPLv3 even though the license of the package itself is now much more permissive.
On the flip side, a spec file must (if only to protect maintainers) have some license. “Use the same as the package” is simple to explain, and avoids all kinds of ongoing support requests / discussions / storms in teacups over ‘license changes’ which aren’t really licence changes.
Surely “Use MIT” is even simpler to explains and avoids even more ongoing requests/discussions? I mean what happens if you have 10 contributors to a complicated spec file, then the upstream project changes license from e.g GPL to MIT? You’d need to contact those 10 contributors and get their consent to re-license the spec file! That seems like a wholly unnecessary waste of time.
I can’t say for sure, but it wouldn’t surprise me if “Use MIT” resulted in a steady trickle of well-intentioned-but-ignorant tickets getting raised to the tune of “It looks like you’re relicensing upstreams GPL code to MIT, you can’t do that”.
It would surprise me a lot if that is more work than gathering signatures for everyone who has ever contributed to a spec file any time its upstream relicenses..
We have seen extremely complex projects pull off license changes with no issues… Plenty of the truly huge and complex open source software out there requires contributers to sign a CLA
I think the best argument for it is that if the upstream package maintainers wants to start maintaining the specfile too, i.e. like docker-ce does, having the spec file as the same license as the project makes it easier for the project to pull it into their repo and keep it up to date.
I don’t think that happens very often in practice, but perhaps it happens about as often as upstream license changes, which are also quite rare.
Ugh. I loved having a great example that you can have more scale than mostly everyone else with a traditional monolithical architecture in a good manner.
At least they acknowledge it’s likely they will not save money (which implies, it’s likely it will cost them money?) And I think their explanation that they are gaining flexibility is true. (It remains to be seen if it will be a worthwhile tradeoff for them, of course.)
(I’m kinda surprised that it’s never discussed just having managed hardware. I feel people only discuss colo, where there are managed hardware solutions which I think make sense.)
This is how cloud providers used to “feel” back when it was basically just EC2, even though the instances, even then, were pretty far from the actual metal. This basically hasn’t changed as long as you never venture into the higher-level services the cloud providers offer, though maybe it’s gotten pricier.
I think EC2 has always been pricier than any equivalent VPS. Or EC2 dedicated hosts has been more expensive than equivalent dedicated hosting.
And I think it’s fair; you’re paying for a better API, better elasticity, etc. It’s just that if you’re not using those, it doesn’t make sense to pay for them.
(What I’ve been explained, but really never get is how colo is sometimes much more expensive than renting a dedicated server. Or at least, to seem quite pricy in comparison. Yeah, I guess it’s extra work, but…)
I think EC2 has always been pricier than any equivalent VPS.
I looked at this recently, and AFAICT EC2 (with 3 year lock-in via RI purchase) was between 4 and 30 times as expensive than the equivalent performance on Hetzner (month to month lease with a 1-month added setup fee), depending on your disk & network performance profile (eg if you want to write 20gb / second, you can stripe 4 NVME SSDs in a $400/mo hetzner box - that throughput capacity alone would cost $800USD / mo with EBS, and you’d still have to pay for IOPS and storage and a computer to attach it to).
I did a graduate thesis on accelerometers way back, they’re not very accurate. You can theoretically detect the constant 1g of gravity and thus get a vector normal to it, but if you just have one axis you’re literally lost after a few turns.
How far back? Accelerometers have gotten quite a bit more accurate in recent years, due to consumer applications. I suspect you could get pretty accurate results with multiple accelerometers and watching/recalibrating for drift.
With how accurate accelerometers are with state of the art AR/VR applications, this seems pretty doable. Opinions?
LOL this was decades ago, it was an experimental device that never got produced. The idea was to be able to, say, decline a call by turning the phone over. Ideally you’d be able to answer a call by detecting the move to your ear.
To clarify, what I meant to indicate was that the problem was eventually solved, so apparently the accelerometers are good enough nowadays, disputing the present-tense statement “they’re not very accurate”.
While it’s true it’s been a long time and I haven’t kept up with the tech, I think any improvements have been with software. The device itself was a MEMS device, essentially a tiny beam affected by forces where you measured the deflection. There are inherent physical limits to how accurate such a device can be. You can employ standard QA to find the really accurate parts, but that raises the cost substantially.
In other words, while it’s possible the company in question could have implemented their solution solely using accelerometers, that might have limited the target market to only the most expensive handsets on the market.
If you make a 5 degree error estimating how sharp a turn was, on a train moving 20m/s, you’re losing 100 meters of accuracy each minute.
Consider that the accelerometer moves in your pocket as you shift position; there’s constant small changes to the pitch/yaw.
Modern phones do this (and make it work) to reduce the use of battery-intensive GPS connections, but the errors can accumulate quickly, so they are designed to draw on multiple sources.
You get two things from the accelerometers: magnitude and direction relative to the phone. You can usually figure out that the one with a roughly constant 1g magnitude is down. But then how do you work out direction of travel? Normally you’d use the compass but that tends to give complete nonsense values when you’re inside a metal tube. This seems like a good application for ML, because you can record a bunch of samples of known movements and sensor readings with a bunch of errors and then you’re trying to map sensor readings with a bunch of errors to the closest approximation of one of their known results.
Cars are a lot easier because the size of the wheel is known and so is its angle for turns. You have a sensor that is directly in contact with the ground and which doesn’t slip more than a rounding error (if the car moves more than a metre or so without the wheels rolling, you’ve probably crashed).
Publicly distributing a modified version of the Rust programming language, compiler, […], provided that the modifications are limited to: […]
Crucially, this list doesn’t include feature developments or bug fixes.
While I’m sure this wasn’t the point of this rule,
I wouldn’t think that wasn’t the point. I’m not sure about the Rust Foundation, but the Rust Project (i.e., the development team) certainly has expressed a desire that no-one should fork Rust to add (or remove) features, with “splitting the community” being the problem they tend to cite with such forks.
If I’m developing a feature or a bug fix that I intend to contribute back to a project that’s hosted on github, I “fork” it to my account. Now, obviously, that’s not a “fork” in the FOSS sense of the word, in that I’m not planning to maintain it. I want to develop my feature or bug fix in public, getting feedback from others who also participate in the development of the project, then use the github UI that most people who post projects on github expect to submit a “pull request” from my fork.
If I change the README to reflect that, unless I’m quite careful when I do so, it makes my “pull request” noisier than it needs to be unless I jump through some hoops that I’d never do for any other project.
Nothing in the CONTRIBUTING.md file on the main repo suggests any need to do that.
So yeah, I’d agree with the poster that there’s no way this was the point of the rule.
“Forks” on github as they are commonly used in contributing to projects like this are different from “forks” in the larger sense. But since they choose to host on github, I’d assume they default to the github-specific meaning of the term.
It’s a trademark, not a copyright; the legal implications are far less strict. You can’t promote your fork and call it rust, and the worst you’d get if you violated that rule is a sternly worded letter asking you to stop (which could escalate into an actual lawsuit if you continued doing so).
Right. And the question is, will those who own/manage the trademark consider my having a public feature development or bugfix development “fork” (in the github sense) that github automatically invites readers to clone, to be promoting my fork? I can’t imagine that’s what they want to do or convey, because that’s just a very normal workflow for developing contributions to a project.
But clearly the author of this piece and @5d22b have interpreted the policy document to mean otherwise. It would seem like some sort of adjacent “explainer” piece (for those without enough experience dealing with trademarks to have developed judgement about this) might be a good idea.
Disagree. The problem here is the sanitizer is preserving [what it thinks are] comments. Or more fundamentally, that it’s passing any of the input through untouched.
A good sanitizer will parse the input to a DOM, remove or reject anything unsafe, then serialize that DOM back to safe HTML.
I have been working on standardizing the sainitizer api with W3C/WHATWG folks for a while now). I presented some of the pitfalls at Nullcon few years ago. Video here, and slides here.
TLDR: While theoretically great, you’re discounting parser issues that only come up when parsing twice but differently.
So far, every library that has tried to allow comments, svg or mathml was broken by mxss issues. Nobody parses like the browser does. Most sanitizers don’t even provide output that depends on the current user agent.
I miss the golang html/template library in every other language I use.
I’m not aware of any comparable implementation of contextual escaping - that is, the parsed template knows what kind of document context each interpolated value is added to, and applies different escaping rules for attribute values vs, say, content in script tags.
I thought this kind of context-aware escaping is done by Google Closure Compiler templates but that’s based on a vague memory from when Google first talked about this stuff publicly (maybe 15 years ago?) and a quick look at the documentation is inconclusive. If it isn’t Closure I’m pretty sure it’s Google because I remember being impressed about the amount of engineering that went into a template engine, and it might even have been before LangSec generalized and gave a name to the principle of actually using a proper parser when working with untrusted data.
While I mostly agree, there could be bugs in the “reject anything unsafe” part of the problem, much like @students example above, where the simplest strip < doesn’t actually work.
Certainly if that is a strict whitelist subset, then the chances are quite low(i.e. very close to zero), but it’s probably not actually zero.
The NASA adding a “moon time zone” by the end of 2026, and I think it will be the perfect cherry on top of the weird bunch that the whole implementation of time zones that we have.
The lunar timescale is more subtle than a time zone. They are planning to install atomic clocks and establish a timescale there, which will be decoupled from timescales on earth.
Since 1977, atomic time on earth has been subject to a relativistic correction so that the length of the UTC second matches the SI definition at sea level. This correction is due to gravity and general relativity: clocks run faster at higher altitudes so for example the USNO clocks in Washington DC run slower than the GPS ground station clocks in Colorado, which run slower than the GPS clocks in orbit.
In the 1960s a common way to compare clocks at different time labs was to transport an atomic clock from one to the other. They needed to get an accurate record of the flight (speed and altitude etc.) so that they could integrate a special relativity correction over the journey, because time dilation at nearly 1000 km/h is measurable.
A native atomic timescale on the moon will differ from UTC partly because of the different gravitational dilation owing to the moon being much smaller, and partly because of its orbital velocity.
Anyway, I haven’t seen a good explanation for the lunar timescale. Like,
What are they planning that needs that degree of precision?
What are they planning that needs a stable precision timescale over the long term?
What are the difficulties in syncing with UTC that mean a new timescale makes more sense?
Is it too far?
Is it because the relativistic dilation is big enough to mess with scientific instruments?
The announcement could have benefited from a lot more detail since it implies there’s some cool science being planned but they said nothing about it!
In the 1960s a common way to compare clocks at different time labs was to transport an atomic clock from one to the other. They needed to get an accurate record of the flight (speed and altidude etc.) so that they could integrate a special relativity correction over the journey, because time dilation at nearly 1000 km/h is measurable.
Holy shit, that’s some seriously advanced stuff. Our ancestors knew what they were doing!
I get what you’re saying, but it seems clearly a question of perspective to me. Consider this possible timeline:
1900: Senior engineer who will work on this project is born.
1960: They work on the project.
1980: They pass away of old age.
2000: Future lobsters member is born.
2024: They post about their ancestors working on a project.
It’s unlikely, but not implausible, that someone is posting here with a great-great-great-grandfather who worked on this project (but died long before the poster were born).
The first two I have no idea. For the last one, it seems cheaper and easier to have a decoupled timescale (fly some atomic clocks to the moon and provide them with power and a stable-enough environment) than to have a coupled one (fly some atomic clocks to the moon, provide them with power and a stable-enough environment, and a time-transfer system).
I’m sure the moon is too far for common-view GPS (everyone’s favorite cheap easy solution) to work, so “do nothing” probably seemed like a good alternative. And they can always start steering it at some later date, and just freeze in whatever offset existed at that time.
I thought you were supposed to use normalization to avoid NULLs. I haven’t tried to do this myself, but there are blog posts about it.
This person says to use 6th normal form to avoid NULL: https://36chambers.wordpress.com/2021/10/22/avoiding-null-with-normalization/
Wikipedia says that normal forms beyond 4NF are mainly of academic interest, as the problems they exist to solve rarely appear in practice, but you did ask for absurd jello molds with olives and shit.
Yeah, 6th normal form is exactly what OP is looking for - the absurd number of joins required makes it difficult to use, to say the least.
This is some random GitHub project’s internal readme, but I think it’s still a good read — most of the things in there were new to me two years ago!
I don’t think I saw any other system except maybe Tarantool use single commit thread.
Document mentions that single core can handle a lot - which is of course true. Still, did TigerBeetle experience any workloads where commit thread became the bottleneck?
It is a bit too early to think in terms of specific workloads: we think we got performance architecture right, but there’s an abundance of low-hanging performance coconuts we are yet to pick to give the hard numbers from practice.
That being said:
the chief issue isn’t as much the CPU time spent there, but rather just the fact that we “block” the CPU for milliseconds, so no IO events can be handled in the meantime.
so the obvious thing here is to make the sort “asynchronous”, such that it looks like any other IO operation — you toss a buffer into io_uring, you don’t touch the buffer until you get a completion, and you are free to do whatever else in the meantime. This will require one more thread, not to get more out of the CPU, but just to make the overall flow asynchronous
then, we can pipeline compaction more. What we do now is:
We can run prefetch&compact concurrently though!
another obvious optimization is to use the right algorithm — we just
sortthere for simplicity, but what we actually need is a k-way mergeand, of course, given that we have many LSM trees, we can sort/merge each one actually in parallel. But we want to postpone the step of using more than one core for as much as possible, to make sure we get to 90% of the single core first.
In other words, the bottlenecks we are currently observing do not necessarily reveal the actual architectural bottlenecks!
As Grandma TigerBeetle once said, “you can have another core once you’ve saturated the first one!”
Thank you for a detailed answer!
Personally, as someone who has worked on a multithreaded database -
Using multiple threads or processes to write to the same physical drive is usually just moving the single point of synchronization to the kernel / firmware. How fast that is / how well it works is going to vary from machine to machine, but the answer is almost always “orders of magnitude slower than the single threaded version”. It also means you’ve got to deal with risks like “OS update broke the performance”.
If I couldn’t get enough write performance out of a single thread, first thing I’d try is switching to user-space storage drivers (to ensure no other process was stealing my IO time and reduce kernel context switches). If that still wasn’t enough, I’d want to shard writes to separate drives.
Specifically databases or in general? It is a very established pattern but everyone ignores it and rawdogs threads-everywhere-all-at-once-all-the time-no tooling-or-language-support-because I’m special and this is different.
If this were ever an option, I think we’re too late for it now.
In any case I subscribe to the “computers must not speak unless spoken to” philosophy, and hope for that kind of future.
“Visitor at front door” is a computer speaking without being spoken to…
Yes, but playing a doorbell chime isn’t.
This is a weird claim. I use Ruby on a Windows ARM laptop just fine, installed via https://rubyinstaller.org/
I was setting up ruby installs on windows laptops at railsgirls events in 2014, and it was fine then, too.
Agreed: Super weird claim.
I really wonder about this… not to say that I disagree but rather that I’m genuinely curious.
First, my perspective is that of a C programmer (with decades of experience and the comfort and biases that come with that).
Now, I have written firmware for an embedded device. For my dev/testing purposes I needed a test rig that I could use to poke and prod the device (via BLE). I prototyped this test rig in python for a bunch of reasons that are unimportant here but a big one was that I found a python module that provided me with a BLE api. One attraction being that it would work across platforms (vs learning the native / proprietary BLE api of each target).
I rather hate python. Not the language per se but the ever shifting ecosystem.
So today, after spending a bunch of time expanding my test rig to deal with some new stuff in the firmware, I find myself wondering if this prototype would be easier to maintain in rust or if I should have started with Rust as the OP suggests. And would it really be worth rewriting? Of course, this is impossible for you, the reader, to answer – but the idea of recoding a couple thousand lines of python in rust (that I’m not any kind of expert in) is quite daunting even if it were possible to objectively decide such a rewrite is a good idea.
I would suggest that porting 2000 lines is not necessarily a particularly big job unless they’ve made heavy use of unusual language features, or lean on libraries which don’t exist in the target language - although porting from one language I don’t know well to another is daunting regardless of the size of the program.
“Protesting is allowed as long as it is ineffective”.
The FOSDEM organizers, insofar as they are FOSDEM organizers, exist for the purpose of organizing FOSDEM. In that role, they have decided that there will be such and such a speaker at such and such a time at such and such a place.
You can disagree with that decision. You can register your disagreement through channels they’ve provided. If that’s proven ineffective, you can disengage with the whole affair. If you don’t like that option, you can register your disagreement by non-obstructive protest, which clearly indicates friendly criticism. Or if you don’t like that option, you can register your disagreement via obstructive protest. But in that case you have declared yourself to desire to obstruct FOSDEM’s reason for existence; it’s not surprising that FOSDEM will in turn assert its will to proceed with itself.
There’s a bizarre trend among vaguely-radical Westerners whereby they expect to be able to disrupt the operations of an organization and not suffer any opposition for their own disruption because they call the disruption “protest”. This is a very confused understanding of human relations. If you are intentionally disrupting the operations of an organization, at least in that moment, the org is your enemy and you are theirs. Of course your enemy is not going to roll over and let you attack it. Own your enemyship.
FOSDEM is volunteer-run, by people who are involved in F/LOSS themselves. It exists thanks to a lot of goodwill from the ULB. Protestors are not fighting the cops or some big corporations, they’re causing potential grief for normal working-class people like themselves.
While the ULB campus always gives me the impression it’s not averse to a bit of political activism, it would be an own goal if some tone deaf protestors were to jeopardise the possibility of future FOSDEM conferences.
Dorsey won’t care, he’ll take his same carefully rehearsed speech and deliver it again at any of the hundreds of for-profit tech conferences. They’ll be delighted to have him. But there’s really only one volunteer-run event of FOSDEM’s scale in the EU.
By all means, boo away Dorsey, but be considerate of the position of the people running this.
As a former and future conference organizer who has taken a tiny paycheck from three of the ~dozen conferences I’ve organized but never a salary °, this is 100% true. I’ve been fortunate that my conferences have had no real controversy, but two that did have a bit of a tempest in a teapot were went to two directions. One, the controversy was forgotten in a week, aside from a couple of people who just couldn’t let it go on Twitter. It took about a month for their attention to swing elsewhere. In retrospect, almost a decade later now, we’d have made a different decision, and the controversy wouldn’t have happened. Unfortunately, the other was life-altering for a few of the organizers because of poor assumptions and unclear communication on our part and a handful of attendees who felt that it was reasonable to expect intra-day turnaround on questions leading to a hostile inquiry two weeks before a $1M event for 1,500 people put on by eight volunteers spread way too thin.
I’ve also had to kick out attendees who were causing a disruption. No, man, you can’t come into the conference and start doing like political polling, petitioning, and signature collection, even if I agree with you, and might have considered setting aside a booth for you if you’d arranged for it ahead of time.
As conference organizers, we have a duty to platform ideas worth being heard and balance that with the person presenting them. The most effective way to protest a person or a presentation is not to attend, and the second most is to occupy space in silence with a to-the-point message on an unobtrusive sign or article of clothing. Anything more disruptive, and you’re creating a scene that will get you kicked out according to the customs of the organizer team, the terms of attendance, and the laws of venue if the organizers enforce their code of conduct. I’ve never physically thrown someone out of an event in my 24 years of event organization, but I’ve gotten temptingly close and been fortunate that someone with a cooler head yet more formidable stature intervened (and I was 6’2” 250 lbs at the time!).
° A tenet of my conf organizer org is “pay people for their work.”
Or you can just acknowledge that a person has done enough damage in their life that you won’t let them shout out any other weird takes.
It’s not like FOSDEM is mainstream enough that it needs to have people who are more well-known outside of FLOSS circles to keynote. There are enough figureheads (often people who spent decades of their life doing good things) who would be more well-suited. This is not some random enterprise conference where you may invite any random startup CEO to shill their stuff. FOSDEM should do better. (I remember phk, and it was great)
that’s cool but literally nobody had heard of block’s involvement in open source until this was announced, so i don’t know what ideas you’re referring to
Respectfully I disagree with this. Peaceful protest can be non-disruptive but still effective. If Jack is talking the whole time with people on the stage in protest around him, I think a lot of attendees will inevitably read Drew DeVault’s article and understand his argument.
Drew lied about FOSDEM taking money from Dorsey for the keynote. You should take anything he says with a large grain of salt.
Where did he say that? I don’t see that claim anywhere in the article.
There you go.
What? How is that a “lie”? Dorsey’s blockchain bullshit company is a main sponsor of FOSDEM this year.
I was going to point this out, but then I realized the concern is more likely the “presumably” in “presumably being platformed”. In other words, they’re not saying that Block’s sponsorship is a lie, they’re saying FOSDEM did not accept a bribe: Block’s sponsorship is not the reason Dorsey got a keynote as DeVault alleges.
Yes, that could be it. If so, a bit disappointing to see such misconstrual or misrepresentation of DeVault’s clear statement of presumption as a statement of fact. (There’s also a lot of grey area between “bribe” and “total neutrality”. Patronage is a thing.)
Would you agree then with the statement that “presumably DeVault lied when he construed Block’s main sponsorship as the reason Dorsey got the keynote selection”? Would presumably have made @talideon ’s comment acceptable?
The linked article makes it clear that patronage is not in play for this event.
Not trying to be adversarial, just trying to highlight how others are reading DeVault’s statement in light of the clear answer from FOSDEM that Block’s sponsorship had no role in his keynote. I don’t care one way or the other about the keynote, have no feelings either way about DeVault (who seems to be the most polarizing figure on this site), and will not be at FOSDEM. But when I read DeVault’s wording, I generally understood he believes FOSDEM accepted Block’s sponsorship in return for a keynote address and “presumably” is there to avoid any legal issues from making such a claim.
No, because there’s no need to presume anything when you can just go look at his words. None of it gets anywhere near “lie”, to me. It strikes me as easy and reasonable to take his words as a true statement of his belief.
Thanks for pointing that out.
We’re on roughly the same page here. To me, he’s definitely casting aspersions on the integrity of the selection process, though I wouldn’t go so far as to say there’s a clear belief of quid pro quo: this is the bit where the grey area is.
I like the fact that they demoted Mr Dorsey to a ordinary main track! Free speech but no billionaire privilege!
That’s fair. Have a wonderful day!
Protesting is the act of clearly communicating that you don’t like something. Effective protests are those where the specific ideas being communicated are convincing enough, or the people doing the protest are important enough or widespread enough, that you take the communication seriously.
Communication takes many forms and some of them are more disruptive than others. But it is a popular fiction that disrupting and shouting down events because you don’t like the speakers is an effective form of protest - in fact there could be nothing more ineffective than associating your side of the argument with something that would make an ordinary attendee annoyed. Only if an ordinary attendee would side against the speaker by default would this be a good strategy.
But when employed, usually this is not about actually communicating that you protest against the thing, it’s about attempting to get your way by force, just with a 1A-tinted veneer. Protesting is allowed as long as it is actually protesting, instead of trying to take control.
This is just another example of the black and white thinking responsible for a large part of the awfulness in the world.
Effectiveness isn’t binary. The allowed form of protesting probably not as effective as it could be in a different form than what is allowed. That doesn’t make it ineffective.
Protesting is making your disagreement and numbers visible. If you’re physically stopping somebody from doing whatever it is they’re planning and you’re against, it is not a protest, it’s just disrupting. And by implication it means you have the power to stop it and are not the oppressed underdog you are likely proclaiming to be.
That’s a very narrow definition of protest which - afaict - isn’t in line with how that word is used by the rest of the English-speaking world.
‘Protesting is allowed as long as it is ineffective’ is an appeal to the moral valence of the particular action of protesting. If you change the meaning of the word to refer to a completely different thing, you cannot keep the moral valence. Physically stopping something from occurring is not something anyone has to tolerate just because you used a certain set of Latin glyphs/mouth-noises to identify it.
Could be, I’m not very pleased with how I worded that so I might not be very clear about what I meant exactly, but it seems disingenuous to go back and edit it, given I’m not sure it’d end up better anyway. I’m trying to point at a meaningful difference between bringing attention to an issue and the size of the cohort in agreement with you, versus unilaterally acting to stop people doing something simply because you wish they wouldn’t. Of course everybody thinks that they’re in the right, therefore their actions are justified, but that can’t really be the case all that often, since everybody thinks it.
The latter sounds like direct action. It is widely regarded as a kind of protest. Protests typically involve a small minority of the population deliberately causing a disruption to force a response. Often, a majority of the contemporary population disagree with the aims of protest, even protests that we consider good and/or effective in retrospect.
Further reading:
this comment didn’t age well
Not a thinkpad with Linux? I quite like that combination.
I had a good laugh at “just €950.”
I got excited, but yeah, after clicking: no way I’m buying anything from Apple.
Glad that macOS works for all of you, all the power to you, but I personally will never willingly again touch an Apple laptop.
I’m curious to know why you don’t like Apple laptops. Because it’s a locked-down system, or because of the planned obsolescence? I think these points are valid criticism to iPhones, but Apple laptops are fine in these regards from my perspective. Sure, they could have more upgradability and repairability, but those didn’t bother me that much. (I have been using a 13” MacBook Pro since 2018, and it served me well until recently when it became unbearably slow, so I upgraded to a shiny M4 Pro MacBook Pro to have some fun with local LLMs. Later I took the old MacBook Pro apart and realized it’s slow simply because the fan has gathered a lot of dust, so the CPU basically got thermal throttled to death. In other words, I could have used it even longer by simply cleaning the dust on my own. Either way, I think 6 years is a fairly long time for a laptop’s lifespan!)
Also curious to hear why people like ThinkPad + Linux so much!
This is just me and my problems. I don’t expect anyone else will have these problems, but this is why I, personally, detest working with macOS or Apple software.
Because macOS is a horrible operating system that doesn’t let me fix stuff. The friggin’ OS locks me out of ptracing processes by default, fer crissakes. There is no
/proc. Heck, there isn’t even a FHS. Most of the userland is from BSD, not from GNU, so grep doesn’t have a-Pflag for PCRE andfinddoesn’t default to searching the CWD. I know I can fix this stuff with MachoMeBrew, but why would I need MachoMeBrew? It takes so much tweaking to make macOS just work like I want it to.Because the keyboard is all wrong. I use Emacs. I require ctrl keys in comfortable positions. Most McKeyboards just don’t have right ctrl keys or they have them in weird locations. I use modifier keys on both sides of the keyboard to work with in Emacs. I don’t want an option key. I want ctrl, meta, and super, and I want them in their normal positions.
Because there is no selection like in X. I can’t just highlight and middle click to copy-paste. I have to use the keyboard instead.
Because system upgrades locked my computer for a long time, sometimes over an hour, without any indication of what the operating system was doing. I couldn’t use anything while the upgrade was happening. What nonsense is this? When I’m running
apt upgradeon Debian, I can still use all of my programs while the upgrade is happening. I only have to restart processes to reload the new version in RAM, if I want to. And reboots? Again, I can install a new Linux and reboot to the new Linux whenever I feel like getting around to it. It should be my machine, but Apple makes it feel like it’s their machine I happen to be renting while the hardware lasts.Because Apple wants me to sign up and give them my personal information just to install basic software. Some of this stuff I’m complaining about can sort of be fixed and emulated if you install the right software. But even before you’re even given permission by Apple to install software on your own machine, you have to tell them your name, maybe your phone number, and click “I agree” multiple times on many piles of unreadable legalese.
It’s just a mess I don’t want to bother with. Give me a Linux, give me open source, give me free drivers, give me the right keyboard layout, give free licenses, not EULAs.
Give me control and ownership. Of my own machine.
What is this Planned Obsolescence that I keep hearing about? Do iPhones or Mac stop working after a few years? Or have we set some unrealistic expectation that Apple should support hardware to infinity and beyoooond?
I think it’s mostly about not allowing old devices to upgrade to the latest operating systems even if they are perfectly capable, but at least for macOS, you can bypass this restriction with tools like OpenCore Legacy Patcher. Apple also made old phones with degraded batteries slower via software updates, but that was a few years ago.
The oldest supported Mac running macOS 15 is now 8 years old.
The reason I always get a little upset is that people have this unrealistic expectation that Apple somehow must invest and do far beyond what is reasonable. Why is that?
And why is it immediately assumed to be malicious intent? Because that is what “planned obsolescence” really means: that they sat in a meeting room in 2017 and said “we are are not going to make macOS 16 work on this hardware so that people will have to buy our new stuff in 2025! Haha!”.
For anything else it is just software and hardware that becomes unsupported over time, like 99% of this crap we deal with in this industry. Which we accept because the engineering and qa burden to keep things working is huge. Specially compared to where your users are. (They are not on hardware from 2017).
But … Apple is somehow special and must do this on purpose.
My old MacBook Pro is a 2017 model (A1708), which doesn’t support macOS 14, but I was able to install macOS 15 on it with OpenCore Legacy Patcher, and it works perfectly fine. Reportedly, you can even install macOS 15 on devices dating back to 2007. From my understanding, the new software “just works” on old devices without extra engineering investments (otherwise the patcher wouldn’t work so seamlessly), but Apple is putting efforts into preventing users from installing new software on it, and that is not cool. I mean, there’s a difference between “upgrade if you want to, but don’t blame us when things break because it’s unsupported” and “you are no allowed to upgrade”. On the other hand, nobody would/should complain about Apple Intelligence not being available on Intel macs: these doesn’t have the required hardware, so that’s an unrealistic expectation as you pointed out.
Meanwhile, maybe Apple bans these software upgrades simply because they don’t want to deal with bug reports from unsupported devices? Moreover, you can always use the patcher to bypass the hardware check, so yeah, I wouldn’t say planned obsolescence is a good reason to hate Apple. For me it’s more of a minor nuisance that can be easily overcome. That’s why I was asking JordiGH why he hates Apple laptops.
8 is a bit short. I replaced my MacBook Pro a bit over a year ago. The old one was ten years old and still working fine. It was faster for most day-to-day things than the Surface Book 2 that Microsoft had given me for work. It had a 4-core Haswell (8 Hyperthread) CPU and 16 GiB of RAM, which is ample for the vast majority of things I do (compiling LLVM is a lot faster on the new machine, as is running place-and-route tools, everything else was fine with the old one).
I believe they dropped support for anything without the Secure Element chip in the last update, which is annoying but understandable. I wouldn’t be surprised if the last x86 Macs have a much shorter support lifetime than normal because dropping x86 support from XNU will save a lot of development effort. A lot of people complained when they dropped support for the original x86 Macs, but given that they’d already started the 64-but transition with the G5 it was obvious that the 32-bit x86 Macs were a dead end, which is why I waited until the Core 2 came out. That machine lasted until it was much slower than the replacements. It was still supported when I replaced it with the Sandy Bridge model (which had two unfortunate encounters with a pavement and ended up being retired quite quickly).
My first Mac was a G4 PowerBook and, back then, a three-year-old computer (of any kind) was painfully obsolete. Most companies did 3-4 year rolling upgrades. Now, that’s been extended to 7 in a lot of places and even then it’s eligible for upgrade rather than automatic because a seven-year-old computer is often fine. I basically use computers until they wear out now. The performance difference between a modern Intel chip and one from two years ago used to be a factor of two, now you’re lucky if it’s more than 10%, so the need to upgrade is much less.
I’m less annoyed with this on Macs because the bootloader is not locked and, if macOS is unsupported then the device can have a second life running something else (even the Arm ones now have nice Linux support). It’s indefensible for the iPhones, where they just become eWaste as soon as Apple stops providing updates because there’s no possible way for third parties to support them. An iPhone 7 would run modern Android quite nicely if you could unlock the bootloader.
It’s one bit short?!?
I’ve met some people who were a few bits short of a full byte…
“Made old phones with degraded batteries slower via software updates” is literally true, but a tad misleading.
The software update started tracking what time the phone usually gets charged, and clocked the CPU down if it weren’t going to last until that time.
I had an affected model, and having my phone suddenly slow down definitely sucked, but having it start lasting until I got home wasn’t a bad thing to gain in return.
I’d agree that it should have - at minimum - be something you can disable.
That’s not what they lost in court for. They were reducing the clock frequency to make batteries discharge slower when the maximum charge capacity dropped. Rather than seeing short battery life and getting a replacement battery (which was often covered by warranty). people would see a slow phone and buy a new one.
This settlement is why iPhones (but not iPads) now have a battery health UI in settings: so you can see if the battery is holding less than 80% of its rated charge and replace it, rather than the whole phone. The iPad does not have this because it was not covered by the settlement, which was specifically a class action suit by iPhone owners.
They made them discharge slower because otherwise the voltage drop caused the phone to shut off, but they didn’t communicate this to their customers.
Yes, and if it had caused the voltage to drop and reboot, people would have taken them to the shops and discovered that the battery needed replacing, which was covered by the warranty or consumer-rights law in a lot of cases.
That’s true but the decision-makers at Apple had no notion that it was the case and didn’t factor the extra income they made into their decision-making /s
For what its worth, apple slowed down the CPU of old iphones (I believe 4-5 years at a time? Most other vendors would simply not care about such an old device), because with degraded batteries they were prone to random restarts (CPU would have needed higher voltage than the battery could provide).
This wasn’t communicated and they got fined (in France), but if only apple actually communicated better, this whole fiasco could actually have a positive spin (company fixes bug in 5 years old device). In the end though, it became a user-selectable choice, so best of both worlds.
It’s not appropriate to connect systems that don’t get security updates to the Internet, so in that sense, yes iPhones and Mac do stop being suitable for the tasks they were previously suitable for after some years.
In the case of iOS, Apple’s track record is much better than competitors’. In the case of macOS, Apple’s track record is worse than competitors’, so I think it’s quite justified to complain about the macOS situation.
(The above is deliberately phrased in terms of track record: going forward, there are the twists that Samsung and Google are getting better on the mobile side and Microsoft is getting worse on the desktop side.)
In the past 6 months or so, I have given away 3 Penryn Macs with a working Wayland Linux environment on them and I have installed Ubuntu on two Haswell Macs that are staying in the extended family. The Haswell Macs would have worked in their previous role just fine if macOS had continued to get security updates: The move was entirely about Apple’s software obsolescence. The key problem afer the switch to Ubuntu is that iCloud Drive and Apple Photos don’t work the way they do on macOS (you can get some access via the Web at least if you don’t have the encryption enabled).
The way hardware progress has changed means that N years before Haswell and N years from Haswell onwards (on the Intel side) is very different in terms of what hardware is quite OK for users who aren’t compiling browser engines. It doesn’t feel reasonable to treat Haswell hardware as obsolete. (FWIW, Adobe raised the requirement for a prosumer app, Lightroom, to Haswell only late last year. That is, until very recently a prosumer subscription app supported pre-Haswell hardware.)
Sorta, I guess. Apple supports the entire laptop for at least 6 years from the day they stop selling it. Not just MacOS, but hardware too. MacOS just gets rolled into that support.
It’s hard to find any PC vendor willing to support a laptop past 3 years. Many , by default, come with 1yr of support(for varying definitions of support) in the best case.
If you buy an Apple laptop, you know you should be able to keep it supported and working without too much hassle for 6 years. When you buy a Dell or Lenovo, you don’t have any idea how long it might last.
Generally trying to get repairs for consumer grade laptops from any vendor other than Apple is usually annoying at the very least, if not impossible, regardless of warranty status. For business grade laptops, as long as you paid extra for the support, you can usually get repairs done for 3 years. Past 3 years, the answer is almost always: NO.
Even in server/enterprise land, it’s hard to get support past 5 years for any server/switch/etc.
I find it interesting how wildly ppl’s expectations of laptop lifespans seems to differ. Just here in the comments the lifespans people are happy with seem to range between 3–4 and 10 years. Some of this is probably differences in usage patterns, but it’s wild to me that we’re seeing over 2× differences.
At the moment my “new laptop” is a 2017 Surface Pro I got used two years ago, and save for some games it handles pretty much everything I do without any issues.
My previous laptop is now 12 years old. It has a quad-core (eight-thread) Haswell 2.something GHz processor, 16 GiB of RAM, an 1 TB SSD, and a moderately old GPU. You can buy laptops today that have slower CPUs, less RAM, and smaller disks. I think the worst Intel GPU is a bit faster than the NVIDIA one in that machine. If the old machine is obsolete then people are buying brand new machines that are already obsolete and selling them should be regarded as fraud. It isn’t, because they’re actually fine for a lot of use cases.
It’s gone from being a top-of-the-line machine to one that’s a bit better than bargain basement in that time.
My “Late 2010” 11.6” Macbook Air is still going. I had to replace the original battery last month.
Note that I am actually able to use that one for (light) development purposes. It now has an aftermarket larger SSD. So 4 GB RAM, 240 GB SSD, 1.4 GHz Core 2 Duo.
That said, it is no longer my main machine. I prefer my Framework 13 which is roughly the same size, but but with a larger screen.
I have a very high end X1 extreme through work (i7-11th gen something, RTX 3050, 64GB of RAM). It’s running windows ATM, but I’ve had Linux on it, and let me tell you it does not compete. It gets insanely hot and the battery lasts AT BEST, if I really try, 3.5 hours. This is not better on Linux either.
The macs easily last an entire day of work on a single charge, while I can’t even work for a full afternoon. It’s not even close in terms of convenience.
There’s also just annoying lack of attention to detail. One of the most annoying misfeatures of this laptop is that if you charge it with anything weaker than the included 170W (!!!) charger (say, from the hub in a monitor, since IIRC USB couldn’t do that much power when this was released, and it’s not really needed if it’s plugged in all day), it pops up a BIOS error saying that the charger is below the wattage of the included one. This is only shippable by pressing the ESC key, and AFAIK it is completely impossible to disable. This is very early in the boot process so the CPU is still stuck at 100%.
I have been woken up SEVERAL TIMES because windows decided to update in the middle of the night, so it then rebooted, and got stuck in that screen with the fan at 100%, because it was plugged into “only” a 95W charger.
This is so dumb. You can just tell no thought was put into it, especially because it happily updates on battery with no warning, too. If anyone happens to know how to disable this LMK because I’ve just resorted to leaving the laptop unplugged from my dock or plugging in the bulky included charger along with it, which kind of ruins the point of having a single cable.
Oh, just thought of another one: I had recently started having some issues with the laptop shutting down if I put it in hibernate mode (so all state was lost).
Turns out, it’s because it does not have 64GB of storage free to persist the memory, so it just did not work and shut down instead. But it did not tell me! I had to dig through forums to find that out. How hard is it to just disable it if the free storage is less than the amount of RAM?
Tbf that’s mostly on windows, not Lenovo, but gah. It’s just bad UX.
Sounds like a dGPU (mis)management thing mostly..? My AMD-based L14, despite a small battery capacity (like barely 50Wh or something) easily lasts for 5-7 hours of coding and hanging around online, while staying cool and (with thinkfan) quiet.
Unfortunately I work with CUDA quite a lot, not for anything super intensive so the 3050 is fine, but enough that I can’t just completely disable the GPU and be fine. If I’m just browsing the web or something though back when I had Linux disabling the dGPU easily doubled the battery life.
I need to look better into this, I’m not sure if there’s some way to have it turn on on-demand on Windows.
I rock(ed) one for many years, and frankly.. no. They have CPU throttling issues, and their battery lives are nothing to be happy about.
I am most definitely not an Apple fan, but the M-series Macs are definitely a paradigm shift in that laptops, for the first time ever, are not just desktop PCs with uninterruptible power supplies that last for the duration of going from your home to work, when you have to plug it in again.
Which Thinkpad models did you use? There are some that aim at the light weight and long battery life segment of business users.
FWIW even my 2023 Thinkpad X13 Gen4 (AMD) lasts a whopping…. 3-5 hours on battery, if the moon is in the right phase and I don’t sneeze too hard. I’ve gotten as little as 2.5 hours of web browsing and terminal use out of it on a bad day, and my max, ever, was about 6.
Sure, that’s not nothing, and that’s more than “from work to home” (I guess - I don’t commute anymore), but it doesn’t survive a full flight between Seattle and Chicago, and that’s my benchmark for “good battery life”.
My 16” Lenovo Legion Pro 5i (2023) with 24 core i9-13900HX lasted six hours on battery in Windows 11, doing text editing and web browsing and short compiles (few seconds). I kicked the Windows off and put on Ubuntu 24.04 and battery life is now 5 hours. Which is still more than I need.
Yeah, I bet the top end MacBook Pro lasts a lot longer, but then this cost me $1600 (incl tax and shipping) while an equivalent MBP with M4 Max costs $3999 plus tax.
T450. But I also bought used ThinkPads to my less tech savvy family members.
dupe comment, FYI
I liked it, too, until I saw hell with my P53. Nothing to do with Linux, it was just a very bad purchase. Worked ok within the warranty, and now a mere 6 years later (!), it is full of hardware defects, crawling on the best it can. Six years with this kind of behavior would be unthinkable for old IBM or even early Lenovo Thinkpads. Sadly, I have no idea what to recommend instead. All things considered (Framework etc), they still seem to be the best. Of the worst.
The problem with post-IBM ThinkPads is that Lenovo has no attention to detail. They have good designers but they are spread thin across a gazillion devices. It’s impossible to do a great job in those conditions. Different models have different flaws, like fan noise, bad panels, etc. They should streamline their offering and stop trying to copy some Apple features that are not aligned with their ethos.
Fan noise is fixable (just take control from the OS with thinkfan), if that’s the issue I’m thinking about (shitty fan curve in firmware that doesn’t go silent on idle).
What’s not fixable is the shitty firmware bugs. My L14gen2a doesn’t like staying asleep and just wakes up randomly for no reason a lot, and sometimes the keyboard controller hangs with a pressed key (one key gets logically “stuck”, other keys stop responding – only fixed by a sleep-wake cycle).
Before anyone says Apple is so much better though: that exact same keyboard controller issue happened to me back in the day on a 2010 MacBook Air, at the worst moment possible… I was playing Dungeon Crawl Stone Soup. You can imagine the outcome.
I was referring more to the lack of sufficiently good cooling hardware in some ThinkPad models. They have so many SKUs that heating designs, pipes and fans are not thought or tested carefully in some models. Others are great.
Threads can express this kind of stuff just fine, on top of some well-known synchronization primitives. The main thing that
asyncgives you in this sense, that you can’t build “for free” on top of threads, is cooperative cancellation.That is, you can build patterns like select and join on top of primitives like semaphores, without touching the code that runs in the threads you are selecting/joining. For example, Rust’s crossbeam-channel has a best-in-class implementation of select for its channel operations. Someone could write a nice library for these concurrency patterns that works with threads more generally.
And, if you are willing to restrict yourself to a particular set of blocking APIs (as async does) then you can even get cooperative cancellation! Make sure your “leaf” operations are interruptible, e.g. by sending a signal to the thread to cause a system call to return EINTR. Prepare your threads to exit cleanly when this happens, e.g. by throwing an exception or propagating an error value from the leaf API. (With a
Result-like return type you even get a visible.await-like marker at suspension/cancellation points.)The later half of the post takes a couple of steps in this direction, but makes some assumptions that get in the way of seeing the full space of possibilities.
Thread cancellation is not realistically possible in most real-world code, unfortunately – see the appendix in my recent blog post that was on Lobsters.
This is not possible on Windows, as far as I’m aware – this may or may not be an issue depending on the platforms you’re targeting, but it would be a shame to lose Windows support just because of this.
The most important thing, though, is that async Rust allows you to select heterogenously over arbitrary sources of asynchronicity, and compose them in a meaningful fashion.
Key to this is the notion of a “waker”, i.e. something that you can register yourself with that can wake you up when your operation is done. This is very general idea, and the async runtime can provide drivers for whatever it wishes to support.
I wrote a post a few years ago about why and how nextest uses Tokio, that goes quite deep into operations that would blow up the complexity of thread-based concurrency to unmanageable levels. The state machine has gotten much more complex since then, too, with over 50 states per test at this point. An alternative that might work is a unified message queue, but managing that would also be a headache.
Async Rust is in very rarified company here. As far as I know, the only other environment which supports this is Concurrent ML.
Great articl! It aticulates why I always feel much more productive with tokio/async compared to plain threads (even with the additional pain async can bring somtimes).
It’s something I have experienced/felt but was never able to articulate well. Now I will refer to your blog post as an example.
Yes, the bulk of my post was about what you need to do to get thread cancellation, and how to disentangle it from async/await as a whole. (At the same time though, you don’t need thread cancellation for a general select operation, either- you only need one if you intend to use it to implement things like timeouts.)
Assuming you are willing to use a new set of APIs to get thread cancellation (which, again, you are also doing if you are using async/await) you can get cross-platform, waker-based, heterogenous, extensible selection/composition/cancellation using threads and blocking instead of a state machine transform. This is essentially switching out the memory management style of the async/await model, while porting over the other parts of the
Future::pollcontract that enable cooperative scheduling and cancellation.What value would that get you over the current async model? The current model is decoupled from threads so works in places without them, like embedded environments.
The usual benefits of stackful coroutines and/or OS threads: you can suspend through functions that aren’t
async, you don’t have to think about pinning, thread-locals andSend/Syncwork more “like normal,” debuggers already know how to walk the stack, etc.To be clear, I’m not trying to make any sort of argument one way or another about which approach people should use. I’m just pointing to another part of the design space that could also provide the nice non-performance-related features described in the article.
Thanks.
But, well, you can’t do this with arbitrary synchronous code that isn’t cancellable, right? Unless I’m missing something. Maybe we’re talking at cross purposes here.
Async cancellation is a real mess, but at least it only happens at yield points. I’ve tried to reason about making arbitrary code cancellable, and it seems very difficult.
Oh, because thread stacks are already immovable?
This is definitely compelling.
I spent a decent number of brain cycles in grad school looking at this problem and the conclusion I came to when you factor in concurrency (locks, condition variables, etc) is that it’s essentially equivalent to the Halting Problem. Being able to statically determine that cancelling at any arbitrary point in time leaves the system in a defined valid state is, I would conjecture but don’t have a proof for, impossible. If you were to make a language with enough specific constraints (similar idea as the borrow checker in Rust) you might be able to do it but it’d be quite restrictive. It’s a quite interesting thought exercise though to think about what the least restrictive approach to allowing that might look like.
Yeah I do think Rice’s theorem applies to cancellation. But in many cases you can do useful over or under approximations — that seems really difficult with cancellation.
There are a couple of things going on here: first, the raw ability to suspend and/or cancel through a function without giving it a distinct color; second, the higher level question of whether that’s something you can do correctly.
If you don’t want cancellation, or you do but you control all the code involved, the second question kind of goes away. Otherwise, I agree this is a thorny problem, and you’d want some kind of story for how deal with it. Maybe you lean on your language’s notion of “exception safety;” maybe you use a
Result-like return type to get a visible marker of cancellation points; etc.Right, exactly. And in the spirit of filling out the design space… an async/await-like language feature could get away with this too, if it exposed raw
Futureobjects differently than Rust. As an extreme example, C++ does this by simply heap-allocating its coroutine frames “under the hood” and only ever letting the program handle them through pointers. But I could imagine some other points in between Rust’s and C++’s approaches to resolve this.I would only add that the other big benefit of async/await is that it is one way of introducing a typed distinction between a function which synchronizes with a concurrent process and one which doesn’t; if your language doesn’t permit blocking synchronization APIs (unlike Rust, unfortunately) it gives you meaningful distinctions about purity that I think are very valuable to users.
As someone who has spent years working with raw threads & designed multiple concurrent algorithms: I’m yet to meet anyone who can reliably get raw thread synchronization right.
IMO the significant thing which async rust gives you is a compiler error whenever you do something that could race.
Eh? Rust doesn’t error for race conditions, and while it does prevent data races, it prevents them just as much for multi-threaded code as it does for async code.
This is only true in environments where management undervalues glue work.
The corollary is: if you’re a manager who wants all your teams to operate with increased efficiency, find the people who do good glue work, encourage them to do glue work, and socially reward it.
That is, set up an environment where the sort of tactical advice in this article doesn’t apply.
Also: watch for the pathological version of this, which is “everyone messing about with pet projects which are justified as glue work”. That is, ensure that as far as possible the benefits of glue work are seen in metrics that are genuinely valuable.
Note that glue work generally won’t be individually visible to metrics, but will be at a project level, over time.
I think another problem is load balancing: when a team becomes more efficient, a manager up the chain shifts more work to that team. If the new load is too high, the team can become inefficient again. Over time, the team might realize that they can’t really get ahead.
If the team or some manager in the chain is good at setting boundaries this might not be a problem.
You didn’t explicitly say this, but you might have meant it: socially rewarding the glue work should mean that the team does not get too much extra work just because they are efficient. Otherwise, the team will backslide eventually.
Agreed. This feels like the pathology of “good” engineers. They are efficient at their main job, but it doesn’t necessarily produce any more than a bad engineer because the spare capacity is spent on side projects.
To be clear, ultimately, it’s still better because you have some speculative work that might pay off, people working on pet projects that should increase morale, and there is clear slack in the system if the main projects need more attention.
Yup - managing up in this case is really important. But also important is team autonomy: are they really setting their own goals, managing their budget, etc.? Does the exec team / board trust that they’ll set the balance accordingly, and that you’ll step in to help steer if needed while they adjust?
Because a skilled, autonomous, team will adjust themselves to sensibly take advantage of increased efficiencies.
Also relevant is theory of constraints, coupled with proper measurements of throughput (“done is running in production / in the hands of users”, etc.). Because if you know what your team utilisation is, it’s a lot easier to have those managing-up conversations.
Yes! Most companies socially punish glue work, and socially punish the consequences of doing it well.
Yup, it’s a difficult line to walk. One bit of advice I give all new managers is that getting runs on the board early is super important … build some social capital to spend on covering fire.
Edited to add: If there’s one thing that managing up has entailed for me over the years, it’s explaining to execs without serious management experience (this happens more often than you’d think!) that 100% utilization is Bad. I literally still have dreams about these conversations a year after changing jobs.
I think there’s a strong correlation between “your peers (not superiors) think you do a good job despite having fewer tangible commits” and “this is useful and appreciated glue work”, esp. re: your pet project argument.
In the current economic environment, this is most places.
I work at a megacorp, and the question to our team (devops) is always about what products we can create and market to present our outputs. Not metrics about build or deployment times, efficiencies gained over time in the dev cycle, etc. It’s all about branding and it robs me of my will to continue working.
to be fair those metrics sound horrid too (Goodhart’s law applies), and in my view a “devops team” (if that truly is the case) means you’re not doing devops properly
By that metric, no one is and you’re engaging in a No True Scotsman fallacy. What metrics do you think are good to measure? What do you think people who focus on builds and deployments should be doing if not trying to improve how efficiently and reliably their software runs?
Assuming /u/aae is referring to the original devops practices when they refer to “doing devops properly”:
Early devops was a rejection of separating “people who focus on builds and deployments” from “people who write the software”.
15 years ago, “devops” meant regular application developers were paying attention to “how efficiently and reliably their software runs”, instead of having a separate team do so.
Having a separate “devops team” means the application developers can leave thinking about that stuff to the devops team, which is roughly the polar opposite of the early devops movement.
Also worth pointing out that by default flamegraphs only show on-cpu time (i.e. your application/kernel running code at the time the sample is taken). That is not the whole story, if the application/thread is asleep waiting for something and doesn’t run on any of the CPUs at the time the sample is taken it won’t show up at all. To see them you need to use “off-cpu” flamegraphs.
I once found a literal “sleep” in the code deep in a 3rdparty library that way (it was PAM, and it kept loading/unloading the crypto library every time, triggering it’s initialization code many times, which had a ‘sleep’ inside it as it was too early for pthread_cond to work. More modern Linux distros don’t have this problem anymore since they switched to libxcrypt).
Flamegraphs can actually help visualise anything where you can produce a weighted frequency for a given stack! You can also do things like trace disk I/O or memory allocations, using the size in bytes as a weight, to get interesting visualisations as well.
I never thought about measuring things other than runtime! I’ll have to keep this in mind; stuff like profiling memory allocations seems like it could be really handy.
Is there a good guide for how to build up the flamegraph data with custom metrics like that? I must admit I have relied on “tool spits out stuff for
flamegraph.plconsumption” and haven’t thought of it furtheryou are everywhere.
I assure you I am not!
Sampling profilers (such as
rbspy) do the opposite - visualizing wall time only.Without reading the article, I’ll say that more than 24 a day is probably a bug.
You underestimate the power of laptops on planes crossing the international date line. 24 hours in one timezone may be a limit, but you can certainly squeeze more than 24 hours out of the same calendar day if you are clever.
DST!
Technically you change time zones when the switch happens.
This is a special level of pedantic and I love it.
15-minute minimum billing increments mean lawyers regularly charge for over 24 hours work in a day.
I thought lawyers billed in increments of 6 minutes.
The world is big.
Wasn’t there just a story posted about how to think about time recently? Borrowing some language from that, a duration of over 24 hours would be a bug, but a period of ver 24 hours would not be.
And just like that, I’ll see myself out.
I don’t think that’s true; a duration can be over 24 hours (or 86,400 seconds), but you’d still be expressing it (ultimately) in something that reduces to seconds, and that doesn’t have any date- or calendar-like component.
You can have a duration of 168 hours (= 7 * 24) between two instants, but if a DST changeover occurs during that duration in some region, a 7 day period starting at the datetime corresponding to the timezone in use at that region at the beginning of the duration won’t end at the same instant as the duration when converted back at the timezone in use at the end; you’ll have gained or lost 3600 seconds somewhere.
The article linked talks about physical time vs civil time, and goes on further to define a duration as something that happens in physical time (where things like leap seconds etc don’t exist), and that their analog in civil time is the period (where things like what you’re talking about, like DST changeovers, do exist).
Having said that, my logic is still flawed because the day is a civil time construct, so there are definitely times where it’s period (not duration; that’s physical time) is not exactly 24 hours.
I was using the precise terminology offered by the linked article very carefully, yes; hence referring to durations between instants, and periods with datetimes. To reiterate my position, a duration can be over 24 hours, much as the linked article says:
Hence my speaking of a duration of 168 hours between two instants, and that a DST changeover occurring during that duration in some region, a 7 day period (civil) may not always have a corresponding duration (physical) of 168 hours, even though it might usually, because a DST changeover (like moving an hour forward or backward) would result in that (civil) period having a (physical) duration of 167 or 169 hours.
Your logic was flawed not because a day is a civil time construct (because you didn’t actually invoke the concept of a day, only hours), but because there’s no reason you can’t measure physical durations longer than 24 hours. Like the article notes, you can use 525,600 minutes (or 8,760 hours, or 31,536,000 seconds), whatever.
I didn’t directly invoke the use of the word day; that’s correct. The comment I was replying to did:
Ah, yep. So, if we take the example of somewhere that moves backward an hour at 3am, the day (civil period from midnight to midnight) has a duration of 25 hours (because you physically would experience 25 hours passing from midnight to midnight), and a period of … well, a day; it’s hard to say how many hours the period has, since it depends on your treatment of the matter, but maybe 24.
Wow. This sounds awful, like a way to be negligent of your own weaknesseses while asking others to clean up all the messes you make with ignorance
My OSS code is “free as in puppy”.
You’re welcome to have it, but don’t ask me to deal with whatever problems it brings you.
Right! I’m not really chastising you. If your unmaintained code turns out to be worth anything then it would be beneficial to the OSS community to fork and maintain it.
You only made the mess for yourself. And maybe it isn’t even messy for your purpose. Why clean up someone else’s mess, just because you decided to share knowledge for free?
You can still be a good communicator about it, though. (The social responsibility part.)
Realistically I think the “right thing” here is very varied, and very dependent on the type and size of the project, whether it’s got any sponsorship or corporate backing, whether it was ever ‘promoted’ in any way as a solution to anything, current number and mental and physical health of the maintainers, etc. Plus the ultimate “does the bug and/or the solution look simple / obvious / interesting” - and if it’s reasonably likely to affect many users, cause data loss or security breaches, etc. This is the ultimate “one size does not fit all” issue!
So if I report a bug in systemd with a reasonable amount of detail that’s likely to bite other folks, I’d hope for a bit of help (and in my actual experience, have a good chance of getting it.) On the other hand, individuals who’ve created something for themselves that they think others might find useful should absolutely not be deterred from just dumping it on github and then forgetting about it!
Why do these tools insist on taking over schema definition? This never works well in practice. I want a type-safe orm that will use standard ddl and generate whatever it needs from that.
I applaud type-safe approaches. It should be possible to statically verify your queries and results.
Why doesn’t it work well in practice?
I saw several reasons in past projects:
So insisting on having full master view of your data schema in code works only for simplest of projects imo. Not when you have dozens of developers adding tables/fields daily and somebody trying to run analytics and someone else to migrate this data to/from another store.
I see. The main one that seems like an issue to me would be the second. Personally I don’t really use ORMs ever so that’s why I asked - I don’t know what problems they cause because I never saw much value anyway.
I forgot really big one: migrations take time. If it is coupled to application server: your application is down during migration.
Oh, interesting. How would you avoid downtime without an orm? I assume you just run migrations independently from the application so you’re not in a “application will start once migrations are done” state?
I have seen one codebase which used named stored procedures for pretty much every query.
It actually worked better than it sounds (they had good tests, etc) - migrations which changed the schema could also replace any affected procedures with a version that used the new tables.
Not sure I’d want to use that approach, but it kept a nice stable api between the application and the database.
I personally think this is the best general approach for DB interfacing, with versioning applied to the named stored procedures for when their API’s need changing. But avoiding downtime when migrating also means just being really careful with what the migrations are even doing in the first place.
There could be an argument for the idea that an automated migration system could automatically write less intrusive migrations than an average naive developer might, but I haven’t seen this borne out in practice.
How you run migrations isn’t necessarily tied to what ORM you use. Generally speaking, a high-level overview of the approach taken is as follows:
There have been various ORMs over the decades that provide some sort of “automatically migrate the DB to the current code based schema” feature. I’ve however never seen or heard of one being used at any noteworthy scale, likely because migrating data is a lot more than just “add this new column”.
Another limitation of most of these migration tools that I didn’t see problematic before “the trenches” is linearity. Something like this has happened at $work-1:
So you’re in a hard spot regarding what to do. If I recall we made 421 a no-op and people had to manually clean up the mess.
It’s a mess, I believe relational data models and 100% uptime across upgrades are fundamentally not compatible. In general I’m not convinced loose coupling of the schema and the application is even possible to do sustainably at scale. You can try double-writing relational data with both the old and the new schema but it’s not really a scalable approach for larger applications, it’s just too hard to correctly do especially when critical data is simply unrepresentable in the old schema.
I suspect this is a big part of why nosql took off. If you store things as JSON objects you at least have a really janky way to do evolution. You can also store events you receive as JSON and reprocess them with both the old and the new application versions, modulo dealing with updates to the JSON schema itself (which might be easier in practice).
In our experience, it takes a strong team, but it can be done. Generally, you consider the DB an application all its own and treat it like it is. You use something like PgTAP for testing, you have DDL checked into a VCS like git along with something like Liquibase to apply and revert. You have dev and prod environments, etc.
To avoid the thrashing of adding and removing columns all the time, We add an ‘extras’ JSONB column to every table where it might remotely make sense(which is most of them). This way app’s and users can shove extra stuff in there to their hearts content. When it’s needed outside of that particular app, we can then take the time to migrate it to a proper column. We also use it all the time for end-user data that we don’t care about.
Always type and constraint check columns at the DB level too. REFENCES(FK’s) are the bare minimum. CHECK() and trigger function checks are your friend. This forces applications to not be lazy and shove random junk in your DB.
We also liberally use views and triggers to make old versions exist as needed until some particular app can get updated.
Use the built in permission system. Ideally every app doesn’t get it’s own user to the DB and instead logs in as the end user to the DB, so we can do row and column level access granularity per user even through the application.
We also make a
_nightlycopy of the DB available, which is a restore from last nights backup put into production. This makes sure your backups work and gives devs(of DB or app variety) access to test stuff being mean in production, without having to actually abuse production. Consider an_hourlyif you need it too.This is mostly PostgreSQL specific, but similar techniques probably exist in other DB’s.
Yeah I tend to use nosql other than for really, really simple stuff in postgres where migrations are uncommon. But at work we use Rails so I see tons of model updates, but I haven’t done it much myself.
I do understand that if you’re in this situation ORMs are going to mess you up in a lot of cases. But there’s a lot of systems out there with a single database, all written to by a single piece of software (sometimes even just a single instance of that software!) where these issues just don’t show up.
I do think there’s a big gap in what those teams need and what you need.
That used to be Heroku’s bread and butter. It wasn’t cheap but worked so well that productivity gains far outweighed the costs.
AWS AppRunner is an okay alternative to Google Cloud Run, but nowhere near as flexible nor fast to provision.
Heroku’s fall is crazy. It had the best workflow imaginable.
The pricing was already getting untenable for many workloads when Salesforce took over (AFAICT they basically never dropped their price / compute as everyone else did), but the shift from “best dev experience available” to “handy way to build salesforce apps” really ran it into the ground.
Very curious how the OpenSUSE policy came to be.
It seems to be incredibly inconvenient for everybody involved. Is it implied that due to the GPL linking clause, this is (in their opinion) what you have to do?
It’s a really strange choice because the spec file doesn’t link against anything in the code. It’s not to different from a textual description of the build process.
It’s also problematic when the package license changes. Suppose a package with a sole maintainer (who can just do it single-handedly) changes its license from GPLv3 to MIT. Suppose the spec has multiple contributors who all need to agree to any potential license change. Now the spec is stuck with GPLv3 even though the license of the package itself is now much more permissive.
On the flip side, a spec file must (if only to protect maintainers) have some license. “Use the same as the package” is simple to explain, and avoids all kinds of ongoing support requests / discussions / storms in teacups over ‘license changes’ which aren’t really licence changes.
Surely “Use MIT” is even simpler to explains and avoids even more ongoing requests/discussions? I mean what happens if you have 10 contributors to a complicated spec file, then the upstream project changes license from e.g GPL to MIT? You’d need to contact those 10 contributors and get their consent to re-license the spec file! That seems like a wholly unnecessary waste of time.
I can’t say for sure, but it wouldn’t surprise me if “Use MIT” resulted in a steady trickle of well-intentioned-but-ignorant tickets getting raised to the tune of “It looks like you’re relicensing upstreams GPL code to MIT, you can’t do that”.
To a close approximation, 0 people read spec files, so I’d be surprised at such a trickle.
If I understand correctly, Fedora defaults all .spec files to MIT, and searching around their bugzilla, I can’t find any confused reports like that.
It would surprise me a lot if that is more work than gathering signatures for everyone who has ever contributed to a spec file any time its upstream relicenses..
How complicated a spec file are you getting for a project with few enough contributors to pull off a license change?
We have seen extremely complex projects pull off license changes with no issues… Plenty of the truly huge and complex open source software out there requires contributers to sign a CLA
I assume this is where you impose a CLA on spec changes.
I think the best argument for it is that if the upstream package maintainers wants to start maintaining the specfile too, i.e. like docker-ce does, having the spec file as the same license as the project makes it easier for the project to pull it into their repo and keep it up to date.
I don’t think that happens very often in practice, but perhaps it happens about as often as upstream license changes, which are also quite rare.
Just a guess: .srpm files have a consistent license then.
Ugh. I loved having a great example that you can have more scale than mostly everyone else with a traditional monolithical architecture in a good manner.
At least they acknowledge it’s likely they will not save money (which implies, it’s likely it will cost them money?) And I think their explanation that they are gaining flexibility is true. (It remains to be seen if it will be a worthwhile tradeoff for them, of course.)
(I’m kinda surprised that it’s never discussed just having managed hardware. I feel people only discuss colo, where there are managed hardware solutions which I think make sense.)
This is how cloud providers used to “feel” back when it was basically just EC2, even though the instances, even then, were pretty far from the actual metal. This basically hasn’t changed as long as you never venture into the higher-level services the cloud providers offer, though maybe it’s gotten pricier.
I think EC2 has always been pricier than any equivalent VPS. Or EC2 dedicated hosts has been more expensive than equivalent dedicated hosting.
And I think it’s fair; you’re paying for a better API, better elasticity, etc. It’s just that if you’re not using those, it doesn’t make sense to pay for them.
(What I’ve been explained, but really never get is how colo is sometimes much more expensive than renting a dedicated server. Or at least, to seem quite pricy in comparison. Yeah, I guess it’s extra work, but…)
I looked at this recently, and AFAICT EC2 (with 3 year lock-in via RI purchase) was between 4 and 30 times as expensive than the equivalent performance on Hetzner (month to month lease with a 1-month added setup fee), depending on your disk & network performance profile (eg if you want to write 20gb / second, you can stripe 4 NVME SSDs in a $400/mo hetzner box - that throughput capacity alone would cost $800USD / mo with EBS, and you’d still have to pay for IOPS and storage and a computer to attach it to).
This is where I’m confused. Did they only have a single location that their services lived?
Wouldn’t it be simpler to “ask” the accelerometer for information? Or is that not reliable enough?
I did a graduate thesis on accelerometers way back, they’re not very accurate. You can theoretically detect the constant 1g of gravity and thus get a vector normal to it, but if you just have one axis you’re literally lost after a few turns.
How far back? Accelerometers have gotten quite a bit more accurate in recent years, due to consumer applications. I suspect you could get pretty accurate results with multiple accelerometers and watching/recalibrating for drift.
With how accurate accelerometers are with state of the art AR/VR applications, this seems pretty doable. Opinions?
LOL this was decades ago, it was an experimental device that never got produced. The idea was to be able to, say, decline a call by turning the phone over. Ideally you’d be able to answer a call by detecting the move to your ear.
IIRC the phone I got a decade ago had that feature, although I didn’t use it.
We’re talking mid to late 90s here. A visionary product, unfortunately never realized.
To clarify, what I meant to indicate was that the problem was eventually solved, so apparently the accelerometers are good enough nowadays, disputing the present-tense statement “they’re not very accurate”.
While it’s true it’s been a long time and I haven’t kept up with the tech, I think any improvements have been with software. The device itself was a MEMS device, essentially a tiny beam affected by forces where you measured the deflection. There are inherent physical limits to how accurate such a device can be. You can employ standard QA to find the really accurate parts, but that raises the cost substantially.
In other words, while it’s possible the company in question could have implemented their solution solely using accelerometers, that might have limited the target market to only the most expensive handsets on the market.
If you make a 5 degree error estimating how sharp a turn was, on a train moving 20m/s, you’re losing 100 meters of accuracy each minute.
Consider that the accelerometer moves in your pocket as you shift position; there’s constant small changes to the pitch/yaw.
Modern phones do this (and make it work) to reduce the use of battery-intensive GPS connections, but the errors can accumulate quickly, so they are designed to draw on multiple sources.
They say in the article that they are using accelerometer, am I missing something?
I was more so getting at multiplying the acceleration (with some rolling average smoothing) by time to get a distance, skipping the whole FFT/ML step.
I thought the same. Feels like using acceleration vectors directly should give you way more information than just detecting the fact on motion from an aggregate. People did that for cars before GPS: https://www.thedrive.com/news/34489/car-navigation-systems-before-gps-were-wonders-of-analog-technology
You get two things from the accelerometers: magnitude and direction relative to the phone. You can usually figure out that the one with a roughly constant 1g magnitude is down. But then how do you work out direction of travel? Normally you’d use the compass but that tends to give complete nonsense values when you’re inside a metal tube. This seems like a good application for ML, because you can record a bunch of samples of known movements and sensor readings with a bunch of errors and then you’re trying to map sensor readings with a bunch of errors to the closest approximation of one of their known results.
Cars are a lot easier because the size of the wheel is known and so is its angle for turns. You have a sensor that is directly in contact with the ground and which doesn’t slip more than a rounding error (if the car moves more than a metre or so without the wheels rolling, you’ve probably crashed).
I wouldn’t think that wasn’t the point. I’m not sure about the Rust Foundation, but the Rust Project (i.e., the development team) certainly has expressed a desire that no-one should fork Rust to add (or remove) features, with “splitting the community” being the problem they tend to cite with such forks.
I don’t think it was the point, though.
If I’m developing a feature or a bug fix that I intend to contribute back to a project that’s hosted on github, I “fork” it to my account. Now, obviously, that’s not a “fork” in the FOSS sense of the word, in that I’m not planning to maintain it. I want to develop my feature or bug fix in public, getting feedback from others who also participate in the development of the project, then use the github UI that most people who post projects on github expect to submit a “pull request” from my fork.
If I change the README to reflect that, unless I’m quite careful when I do so, it makes my “pull request” noisier than it needs to be unless I jump through some hoops that I’d never do for any other project.
Nothing in the CONTRIBUTING.md file on the main repo suggests any need to do that.
So yeah, I’d agree with the poster that there’s no way this was the point of the rule.
“Forks” on github as they are commonly used in contributing to projects like this are different from “forks” in the larger sense. But since they choose to host on github, I’d assume they default to the github-specific meaning of the term.
It’s a trademark, not a copyright; the legal implications are far less strict. You can’t promote your fork and call it rust, and the worst you’d get if you violated that rule is a sternly worded letter asking you to stop (which could escalate into an actual lawsuit if you continued doing so).
Right. And the question is, will those who own/manage the trademark consider my having a public feature development or bugfix development “fork” (in the github sense) that github automatically invites readers to clone, to be promoting my fork? I can’t imagine that’s what they want to do or convey, because that’s just a very normal workflow for developing contributions to a project.
But clearly the author of this piece and @5d22b have interpreted the policy document to mean otherwise. It would seem like some sort of adjacent “explainer” piece (for those without enough experience dealing with trademarks to have developed judgement about this) might be a good idea.
Disagree. The problem here is the sanitizer is preserving [what it thinks are] comments. Or more fundamentally, that it’s passing any of the input through untouched.
A good sanitizer will parse the input to a DOM, remove or reject anything unsafe, then serialize that DOM back to safe HTML.
I have been working on standardizing the sainitizer api with W3C/WHATWG folks for a while now). I presented some of the pitfalls at Nullcon few years ago. Video here, and slides here.
TLDR: While theoretically great, you’re discounting parser issues that only come up when parsing twice but differently.
So far, every library that has tried to allow comments, svg or mathml was broken by mxss issues. Nobody parses like the browser does. Most sanitizers don’t even provide output that depends on the current user agent.
It’s really best to do it in the browser.
Shouldn’t this be fixable by serialization with more conservative escaping? Are there any bypasses when the serializer uses
<in attributes?BTW, your slides have turned all text into paths with no
alt/titlefallback. The slides are text-less vector images.To answer my own question: always escaping
<is not enough, because it can’t be simply escaped in<style>and<script>.Ah. This seems to be the mobile view. These slides will work better
I miss the golang html/template library in every other language I use.
I’m not aware of any comparable implementation of contextual escaping - that is, the parsed template knows what kind of document context each interpolated value is added to, and applies different escaping rules for attribute values vs, say, content in script tags.
I thought this kind of context-aware escaping is done by Google Closure Compiler templates but that’s based on a vague memory from when Google first talked about this stuff publicly (maybe 15 years ago?) and a quick look at the documentation is inconclusive. If it isn’t Closure I’m pretty sure it’s Google because I remember being impressed about the amount of engineering that went into a template engine, and it might even have been before LangSec generalized and gave a name to the principle of actually using a proper parser when working with untrusted data.
While I mostly agree, there could be bugs in the “reject anything unsafe” part of the problem, much like @students example above, where the simplest strip
<doesn’t actually work.Certainly if that is a strict whitelist subset, then the chances are quite low(i.e. very close to zero), but it’s probably not actually zero.
The NASA adding a “moon time zone” by the end of 2026, and I think it will be the perfect cherry on top of the weird bunch that the whole implementation of time zones that we have.
The lunar timescale is more subtle than a time zone. They are planning to install atomic clocks and establish a timescale there, which will be decoupled from timescales on earth.
Since 1977, atomic time on earth has been subject to a relativistic correction so that the length of the UTC second matches the SI definition at sea level. This correction is due to gravity and general relativity: clocks run faster at higher altitudes so for example the USNO clocks in Washington DC run slower than the GPS ground station clocks in Colorado, which run slower than the GPS clocks in orbit.
In the 1960s a common way to compare clocks at different time labs was to transport an atomic clock from one to the other. They needed to get an accurate record of the flight (speed and altitude etc.) so that they could integrate a special relativity correction over the journey, because time dilation at nearly 1000 km/h is measurable.
A native atomic timescale on the moon will differ from UTC partly because of the different gravitational dilation owing to the moon being much smaller, and partly because of its orbital velocity.
Anyway, I haven’t seen a good explanation for the lunar timescale. Like,
The announcement could have benefited from a lot more detail since it implies there’s some cool science being planned but they said nothing about it!
previously…
Holy shit, that’s some seriously advanced stuff. Our ancestors knew what they were doing!
It was the 1960s. Some of those people are still alive. Feels weird to call them ancestors.
I get what you’re saying, but it seems clearly a question of perspective to me. Consider this possible timeline:
It’s unlikely, but not implausible, that someone is posting here with a great-great-great-grandfather who worked on this project (but died long before the poster were born).
Will the Olson time zone database gain another level?
/usr/share/zoneinfo/Earth/Europe/BerlinThe first two I have no idea. For the last one, it seems cheaper and easier to have a decoupled timescale (fly some atomic clocks to the moon and provide them with power and a stable-enough environment) than to have a coupled one (fly some atomic clocks to the moon, provide them with power and a stable-enough environment, and a time-transfer system).
I’m sure the moon is too far for common-view GPS (everyone’s favorite cheap easy solution) to work, so “do nothing” probably seemed like a good alternative. And they can always start steering it at some later date, and just freeze in whatever offset existed at that time.
It should track date by lunar month and just record the date as being 1/1/1 forever :D