It’s HTML, generated on the server, using composed templates…like some of us were writing in 1999, and again in 2009, and probably even (heretical as it might seem) in 2019, or 2023!
I do appreciate that we’re stepping back a bit from the ridiculous workflow of transpiling JS variants to other JS variants to polyfilled alt-web runtimes to paint HTML using deeply-nested JS class hierarchies, but…c’mon. (Likewise, “filesystem-based routing” is another facepalm term. We had this by default with CGI, JSP, CF, PHP, et al. way back in the before-times, too.)
You’re really “Columbusing” this hard, folks. (Def.: like Columbus, you stumbled into a place that lots of people were already happily occupying, claiming it as a grand discovery, and self-identifying as an intrepid explorer.)
To be clear: I don’t bear any ill will towards the author of this article, who did pull together some reasonable crates and design fragments, and seems to be at least somewhat self-aware about the fact that this isn’t actually 100% new terrain. That being said, the ahistorical subgenre of “server components” as wild new terrain bugs me rather a lot.
I hear you and mostly agree, but in my experience what’s often the case is that there isn’t a good historically aware webdev canon (for lack of a better word).
As a community and industry, we default to being exclusively forward-facing (there is no history, there is only now, etc.), which often ends up in some version of “columbusing” as you called it. But that’s at least in part because a lot of people who are into webdev these days have no experience with, or even a mental model of, technologies older and significantly simpler than the mess of an ecosystem most of the web relies on today. Heck, some have never even heard of CGI and have maybe only seen the abbreviation when configuring a webserver (if they even configured it themselves at all, I mean why do that if you can just use a container, right?).
I’m also “an old” (to reference your profile), but I’d encourage you to keep the frustration at bay and instead recruit folks like this into the church of “Hey that’s neat, that reminds me of xyz from way back when - maybe you’d like to check it out! History can be neat too, there’s lots of useful ideas in ‘legacy’ technologies.”
For what it’s worth, I remember the eye-rolling and dismissive/negative comments from old school devs when I was young and thought interpreted languages with garbage collection were just universally better suited to do everything and anything. That wasn’t a good experience. What was a good experience though was having folks introduce me to cool / useful stuff from their era.
idk, I’ve used server-side includes, though I think it was in 2002 rather than 1999. They sucked ass. CGI too. PHP as well; though it seems it might have been a step in the right direction that sure wasn’t obvious at the time. Never used JSP or CF so I can’t comment. But on the whole dynamic web content has always been a morass of tradeoffs and complication.
I’ve been recently playing with Elixir’s server-side rendering functionality in Phoenix and what strikes me about it is that the things you make can compose. You can take a smaller, simpler component and build something bigger out of it. You can take raw HTML and build a small component out of it and put it anywhere any HTML would be valid anyway. idk about Maud but with Phoenix it actually validates your generated HTML as well, so if you make a typo or type error or some other bug it will generally give you a compiler warning instead of the browser going “hang on I think I can figure this out”.
idk if it will change the world or hold up to its promises in reality, or whether it will turn out to be just another broken abstraction, but so far it’s pretty nice.
Maud will give you compilation errors so you can’t open a tag and not close it, for example. My “components” are just Rust functions, so if you don’t provide the correct arguments when invoking them you get build errors as well.
A different matter is my custom markdown components, though. The definition of each component is built with Maud, so it’s checked at build time, but since the Markdown is rendered at runtime (because I’m planning to move my content into a Postgres db) there’s not much I can do about correct usage of them.
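To make the “components are just Rust functions” point concrete, here’s a minimal sketch of that pattern (hypothetical card/page functions, assuming Maud’s html! macro; unbalanced braces or wrong argument types become compile errors rather than malformed HTML):

```rust
use maud::{html, Markup};

// A “component” is just a function returning Markup.
fn card(title: &str, body: Markup) -> Markup {
    html! {
        section class="card" {
            h2 { (title) }  // splice the argument in
            (body)          // nest any other component or markup
        }
    }
}

fn page() -> Markup {
    html! {
        h1 { "My site" }
        // Passing an i32 as the title, or dropping a closing brace,
        // fails at build time instead of producing broken HTML.
        (card("Hello", html! { p { "Composed from smaller pieces." } }))
    }
}

fn main() {
    println!("{}", page().into_string());
}
```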
author here! Yes, I’m aware this is not anything new at all, I just wanted to highlight that it’s not difficult to put together a webpage (in this case, with Rust) - and the “react server components” that recently got some hype are not a novel idea.
For me the best parts of building this were:
- Suspense, being able to lazy load the component - thanks htmx
- Custom markdown components, still feel a little hacky with lol-html but overall once you have everything set up it’s nice to work with
I’m currently working on improving the DX of this little stack by using Leptos instead of Maud for templating. This allows some slightly more advanced features.
Essentially we can have our cake and eat it too. A local dev server that uses SSR and does live reload, while also supporting fully static site generation.
Just make sure the Asahi kernel version is compatible with the ZFS module, set boot.supportedFileSystems when building the installer, and things should be all set. I think that was the only issue I ran into when I tried that specific setup a few months ago.
Yes, there should be no reason it won’t work on either ALARM or Fedora Asahi Remix, because we support running Nix on standalone distributions on aarch64 all the same as NixOS.
The only exception I can think of is maybe some SELinux incompatibilities on Fedora? I believe I’ve read about having to do some weird hacks, but I don’t know the details myself. Try it and see?
I run Nix on my personal and work macs and it…basically works?
Like any mix of Nix and <OS that isn’t NixOS>, you have to be really mindful of what’s in your path, find out which shebangs and hard-coded commands are lying around in scripts, be okay with never being able to use (say) the pre-compiled blobs that Node or Python libraries might pull in automatically, etc.
Modulo all of that, you can at least do a quick ‘nix run’ invocation to pick up a package, build from flakes, etc.
Generally, though, I find myself delegating more and more to a NixOS VM running on the same host. Nix-on-Docker is another option that actually does some stuff more cleanly than native nix-darwin, if you’re okay with containerizing all the things.
Running it on Asahi in particular was my concern. It means you and I both can avoid the NixOS VM step and just use Nix on Linux, which in my experience is fine (I’m sorry it’s treated you so poorly!).
Oh, sorry if I gave the impression that Nix-on-Linux was a bad experience. I chose/continue to choose to use it because it solves a whole bunch of other pain points, like needing to write f’ing awful makefiles (or even worse, use autotools…shudder) to build across multiple target platforms.
I’m just spoiled by how well things work in full-blown NixOS, I guess. :)
Surprised at the positivity here. A single point of failure is fine for many SaaS but for backups - meaning you have something whose value justifies paying to back up off site - it seems disqualifying. I’m also surprised that in the ~9 years between the introduction of the code bug and now there was no attempt to do a disaster recovery or other integration test and replay the logs from s3.
Designing systems is all about tradeoffs. Adding multiple redundant instances of every core service to improve uptime necessarily introduces all kinds of hard-to-reason-about failure modes beyond basic service uptime, not to mention exponentially increasing the cost and operational burden of running your application.
It’s very easy to over-design a system from the beginning to handle exciting failure modes like, “an AWS region went hard-down and we failed over invisibly and just kept trucking!” while de-emphasizing ones like, “our carefully-orchestrated deployment process failed halfway through in a way that left us with a split-brain scenario between our independent per-region clusters, and we spent days diagnosing and coding defenses against the issue, even though the original outage was over in 15 minutes.”
Nine years without data loss in an active cloud backup service is a remarkable achievement; 24 hours of downtime is a very modest hit to reliability metrics in comparison with that.
I didn’t say anything about system design in my comment. I don’t disagree with you but I was talking about management decisions. The single point of failure is Colin and keeping that in place is a management decision. Not attempting to do a disaster recovery replay of the s3 logs once in a while is also a management decision.
A single point of failure is totally fine if that point failing does not result in violating your promises to your users. If you use tarsnap you know what you’re signing up for: an excellent, but bare bones, service by a brilliant individual.
As I gather, there was no data loss and no risk of data loss. There was only a temporary inability to add data to backups and to access data from backups. And there were integration tests, but a specific scenario was missed.
I think most backup services have larger issues in edge cases.
I wasn’t questioning any promises made by the owner, I am questioning the sensibility of a user who purchases the service. If you have something worth backing up and worth paying to back it up are you getting what you need out of a company with a single human system administrator?
I also wouldn’t assert that there was no risk of data loss. The other tests are nice but the only test that really matters is “when I need to do so, can I replay all of my logs, restoring all of my customers’ data correctly, from s3?” Although he hit bugs that did not result in customer data loss, you can’t ignore that in the 9+ years between tests, and with an entirely homebrewed system design, he could have introduced some bug along the way that caused data loss when replaying logs.
Put together, these are all black marks against the management decisions made by the proprietor, and they call into question whether Tarsnap is fit for purpose.
I agree that this event is evidence for some of the ‘cons’ one should have considered when one chose to use tarsnap. It is not fit for all purposes or all roles in all backup schemes. It still seems fit for many roles and purposes to me.
If you have something worth backing up and worth paying to back it up are you getting what you need out of a company with a single human system administrator?
Yes because you have another backup setup anyway, to avoid the single point of failure (for example, having a company with a lot of engineers and support people doesn’t help if the failure is a bankruptcy).
For some background, my mindset largely comes from this kick-ass paper, “How to Lose Money in Derivatives”, which is about managing risks at hedge funds but has really shaped my way of thinking overall and I think is applicable here.
One of the author’s points is that nobody can escape the need to diversify investments and diversification has to be really rigorous because the exceptionally bad situations force more correlations than you expected. The extreme example here is that if there was ever a situation where S3 was shut down something huge has happened that is going to cause tons of failures across our industry even in places that aren’t on the cloud.
Another point is that liquidation is really the worst thing that can happen. A fund or investment can take losses and survive but when you’ve gone as far as liquidation you’re out of that money forever. The author drags up some examples of funds where they explicitly stated to customers that they intended for the customer to handle diversification themselves in their own portfolios as they could do a better job than the fund knowing their own situation and that the fund was then free to get superior returns from its improved focus. The reality is that nobody is free from needing to diversify and everyone has a duty to avoid blowups.
The similarity with Tarsnap is clear. A financial blowup from liquidation is equivalent to a Tarsnap blowup leaving customer data permanently inaccessible. And Tarsnap is neglecting its duty to manage these risks and diversify, a duty that everyone has and can’t be delegated to your customers. When I look at Tarsnap I am seeing Manchester Trading.
This is indeed an interesting perspective and I will read that paper someday.
The similarity with Tarsnap is clear.
It is. However I disagree with your conclusion, since a diversified Tarsnap would be much more costly (and I probably wouldn’t use it). My other backups will not fail in case of a S3 failure and it is indeed more economical to handle diversification myself. Sometimes, worse is better :)
If I read this correctly, Colin has done periodic recovery tests but only with a subset of the data and that subset didn’t have any machine-rename events in it. That’s a shame, but if that’s the only bug that affects availability that he’s had in 9 years, he’s doing better than pretty much every other service I’ve encountered.
Doing a recovery of all data is cost prohibitive for most backup services: their cost model relies on the fact that most data is write only. I think Colin said that over 95% of data written to tarsnap is never read (might have been 99%) by the end user. This makes sense: tarsnap is providing off-site backups. Typically, you use this as a second tier of backups, with a local NAS or similar as the first tier. You only go to backups at all if you’ve accidentally deleted something or if a machine has failed. You only go to off-site backups if your on-site backups have failed at the same time as the thing that they’re backing up. If you’ve got something like a cheap 3-disk NAS with RAID-Z and regular snapshots then you’ll almost never try to recover things from tarsnap, but when you do it’s very valuable.
In terms of hiring someone else in addition to Colin, I believe he has a process in place for avoiding bus factor (if he’s hit by a bus, there is a backup human who can take over the service), but operating tarsnap is less than one full-time person’s worth of work, so if he hired other people then the price would go up (which would cause some people to leave, which would further put up costs for those remaining behind because the cost of the second person is amortised over fewer customers).
Tomorrow is my son’s 8th birthday, and his mom got him the present of taking his little sister (3) out of town overnight so he can have some 1:1 time with Dad. (We do a lot as a family, and he’s a great big bro, but also enjoys getting one parent’s full attention.)
I’ve been slowly introducing him to the idea of computers as machines you can open up and do things to, not just sealed blocks of glass + aluminum. He got a “lunchbox computer” for Christmas last year, and this is yet another project in that mode.
i bought a framework laptop from the original run. still have the 11th gen mainboard, considering either upgrading to the AMD mainboard or the framework 16 later this year.
the framework is a nice machine. i haven’t had any complaints about battery life, but i also mostly use it plugged in. the build quality overall is very good. the keyboard is very good – i prefer it over my thinkpads.
the trackpad is hot trash. absolutely awful. and, look, i’m not one of these people who thinks that the macbook trackpads are the end-all and be-all of mobile pointing devices. the macbook trackpads are fine. the framework trackpad is not. sensing is good, multitouch is good, but the physical click only works about 50% of the time. framework knows about this. it may have been fixed in a newer hardware revision. i’m not sure.
otherwise, good laptop. i’d buy it again, and i will continue to buy products from framework in spite of the awful trackpad.
Writing this from my 12th gen Intel Framework 13. The trackpad is fine, no complaints. The weak hinges are annoying sometimes (will replace them with stronger ones next time I’m ordering anything), and slow battery drain on suspend is still not fully addressed (waiting for the next BIOS version).
it’s funny, i have zero problems with the hinges. the first version of the top cover had more flex than i was happy with, but the second version (the CNC’d one) is excellent.
to go into a bit of detail: the trackpad dragging behaviour is what kills me. if i’m doing left-click drags by holding the physical button and then dragging, the physical click has a tendency to either not register or release partway through the drag. it’s worse with right-click and middle-click drags (which i’m also doing with some regularity).
perhaps i need to upgrade my trackpad or input cover. thanks for the heads-up!
it’s funny, i have zero problems with the hinges. the first version of the top cover had more flex than i was happy with, but the second version (the CNC’d one) is excellent.
Just to clarify: when the screen is not at a perfect 90-degree angle, it’s not possible to wiggle the laptop slightly without the screen folding/opening. Usually this is not an issue, but when trying to use the laptop in improvised spots (e.g. walking while holding it, lying on my back in bed, or on a wobbly table) it becomes annoying.
the physical click has a tendency to either not register or release partway through the drag
Tried exactly that now and seems perfectly fine here.
Early Framework units had bad trackpads. Clicks failed to register, right/left selection based on area was unreliable, etc.
At one point I think the old ones were considered a warranty replacement. I wanted a new keyboard deck anyway, so I installed one on my first-gen Framework (11th gen CPU) and it fixed the issue w/o any software config or driver changes.
The hinges are bad, though. I hear tell that’s also something they tweaked but TBH I’m probably done tuning this unit until my 16” preorder number comes up and I find out just how painful the upgrade cost is going to be. (Odds are I’ll pay it b/c I love what Framework is doing and the expansion modules look particularly Lego-like in the 16.)
Just another data point: My “framework 13 with intel 12” trackpad works great. Same weak-hinge complaint as dpc_pw, but it’s extremely minor. I fell in love with this thing so bad (despite the tiny screen) that I don’t use my M1 anymore outside of work, and gave away my system76 because the framework made it look awful in comparison. The M1 takes up to 60 seconds to wake from sleep while this thing wakes instantly.
Disclaimer: I’m not a gamer beyond some simple steam games and minecraft. I use my laptop primarily for coding and projects.
I can get behind renewing the push for copyleft, and even attributing much of the original promotion to RMS, but c’mon: the artful B/W portrait, the repeated call-outs to his prescience, etc. read like hagiography, brief mention of “toxicity” aside.
The dude is a misogynistic ideologue who abused his platform as a Free Software pioneer to subject other people to his gross views.
So yeah, support copyleft, but don’t sweep aside how the FSF backed RMS and ignored his willingness to blithely dismiss child abuse as “not that big of a deal”.
He’s definitely an ideologue, but for the cause of software freedom, which is a good thing. Labeling him a misogynist is just a politicized insult, based on disliking other political opinions adjacent to gender he has expressed at some point, or just finding him personally awkward and spergy. There’s a hell of a lot of prominent technologists who I’d want to see ostracized for their tech-unrelated stated political views ahead of Stallman.
Labeling him a misogynist is just a politicized insult, based on disliking other political opinions adjacent to gender he has expressed at some point
This can be used to handwave away any level of complaint. After reviewing the GeekFeminism wiki article, with citations, I feel comfortable saying that I’m not throwing my lot in with him.
There’s a hell of a lot of prominent technologists who I’d want to see ostracized for their tech-unrelated stated political views ahead of Stallman.
He has said and done plenty of things directly related to tech or software projects, so this too comes off as handwavy and dismissive.
I think RMS was/is right about a lot and I don’t care about canceling him, or being upset, but I’m not going to bat for him either, and I’d love to have a figure like him, that I could fully respect and endorse.
a misogynistic ideologue who abused his platform as a Free Software pioneer to subject other people to his gross views.
This is the sort of accusation that nowadays just rolls off my brain like water off a duck’s back.
Oh so he has cooties huh. He’s “gross” and “misogynistic” which basically means the 7/10 mean girls trying to play queen bees of the autists find him unbearable. That’s all. That’s what you sound like: a moralistic christian nun, the kind that people hated as nurses, because you can tell deep down they think you deserve your suffering.
And when you say “abuse” what you actually mean is that he earned his way into his position, but others who are resentful and jealous didn’t. They hate that he doesn’t share their views that every statement and assertion should be padded and bubble wrapped to avoid misinterpretation by people who get off on being offended.
Your team is made of people, who each have individual skills and motivations. As long as you remember and internalise that, you probably won’t go too far wrong. The worst management I’ve seen has always been as a result of failing to get that step right.
Thanks for the recommendation, I’ve not seen that one before. Of all the management books that I’ve read, the one that I’d recommend if you were going to read only one is PeopleWare. I’ve read the first and second editions. I’ve heard good things about the third edition but not seen a copy.
In my spare time, I’m writing a book specifically about managing remote teams, since it’s something I’ve been doing for most of my career and people keep telling me that it’s hard. I hope to finish it over the summer.
This is 100% how you get from “Manager RPG” Level 1 to ~middle tier: remember, pay attention to, and support your people. It’s critical, and worthwhile because human beings are more important in basically every case.
Above a certain level, though, it’s insufficient because how your team feels and acts is only one part of their overall success. In larger orgs/projects, visibility, impact, credit, and scope usually depend at least as much on the “political BS” people call out as they do any one group’s technical (or even communications) excellence.
Past “Senior Engineer” or “Senior Manager” in a mid-to-large org you have to be attuned to the larger organizational temperament and trends if you want to advance and have the maximum impact. Having a dedicated advocate on your behalf who is similarly clued-in can be a good stopgap, but if they leave/lose favor/shift focus you should be prepared to take it on yourself.
Great managers start and end with supporting their team, but in between there’s a lot of thought, communication, and upward/lateral management required to make that successful long-term.
Completely agreed. Managing up is a different set of tasks to managing down. I took the question to be about managing down, but it may be that managing up was not part of the question because the author didn’t know how important it was.
Ensuring that your company understands what your team is doing and that you understand how your team’s activity fits into (and contributes to) overall strategy are both very important. Assuming, of course, that your company has an overall strategy.
Thanks for reiterating this - this is definitely how I have been managing the small team already. The one thing I am struggling with is that they are not providing me with a lot of input and are asking me directly what the paths forward look like. A lot of my response is “Well, what do you want to do?” So having a good map or some examples of how I can structure the team for growth is really helpful to foster better conversations.
I had a lot of conversations like that with students. I found that giving them an overview of a few different career paths worked well to start the conversations going. In particular, it was great for finding the things that they realised that they really didn’t want to do. I’d also recommend The Coaching Habit, not least because it has a path for making the person that you’re coaching do most of the work.
Beyond that, people tend to relate better when you’re talking about what you did than when you’re talking about what other people did. If you can, talk about what you did and introduce them to people who can give them other perspectives.
I’d start by asking yourself ‘what motivates this person?’ about each member of your team. There’s a lot written about intrinsic and extrinsic motivations and you really want to know what gets each person excited (building cool things, learning new stuff, micro-optimising things, seeing things deployed at scale, whatever). If you can’t answer that question about someone on your team, spend some time getting to know them. Make sure you have regular informal conversations with everyone, because as soon as you’re on a ‘we’re talking about work’ footing you lose the ability to learn these things. Chatting to your team is not slacking off, it’s part of your job as a manager. Once you understand what motivates people, you can think about the career paths that will increase their opportunities to do these things. And you can help direct them towards doing things that will help them build these skills.
But all of that comes back to ‘remember that your people are people’. They are not resources. They are not fungible. As long as you remember that, you’ll be fine.
…the Bluesky people were looking for an identity system that provided global ids, key rotation and human-readable names.
They must have realized that such properties are not possible in an open and decentralized system, but instead of accepting a tradeoff they decided they wanted all their desired features and threw away the “decentralized” part…
I’m really curious about this - why is it not possible in a decentralized system? I know there have been past attempts around web of trust. I’ve always wondered why something couldn’t be built using blockchain to store public keys. Key rotation could be performed by signing a new key with the old key and updating the chain.
Am I missing something obvious that’s not currently possible? Are there good research papers in this area I should read?
You don’t need a blockchain to store a bunch of self-signed public keys. PKI has been doing that for ages via certificates.
OTOH, if you want globally-consistent (and globally-readable), unforgeable metadata associated with those keys you need some arbiter that decides who wins in case of conflicts, what followers/“following” graph edges exist, etc.
Nostr actually uses existing PKI (via HTTPS + TLS) to “verify” accounts that claim association with an existing public domain. Everything else is…well, not even “eventually consistent” so much as “can look kinda consistent if you check a lot of relays and don’t sweat the details.”
It’s possible, but you need some kind of distributed consensus, which in practice looks like a blockchain. ENS is one implementation on Ethereum. You also need some mechanism to prevent one person from grabbing every name (if you assume sybil attacks are possible, which they will be on almost any decentralized system). The most common one is to charge money, which is not really ideal for a social network (you want a very small barrier to entry)
You also need some mechanism to prevent one person from grabbing every name
An interesting take on this is done by the SimpleX network: it employs no public identifiers. The SimpleX Chat protocol builds on top and continues this trend. You can build a name service on top, but the takeaway I make is that maybe we don’t need name services as often as we think.
The assertion of Zooko’s Triangle is that you can’t have identities that are simultaneously human-meaningful, globally unique and secure/decentralized. You can only pick two properties. DNS isn’t secure (you have to trust registries like ICANN.) Public keys aren’t human-meaningful. Usernames aren’t unique because different people can be “snej” on different systems.
The best compromise is what are known as Petnames, which are locally-assigned meaningful names given to public keys.
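A toy sketch of the petname idea, just to illustrate the tradeoff (nothing here is a real protocol; PetnameStore and the key bytes are made up for illustration):

```rust
use std::collections::HashMap;

// The globally unique, secure identifier stays a (non-memorable) public key.
type PublicKey = [u8; 32];

// The human-meaningful name exists only in the local user’s store.
struct PetnameStore {
    by_name: HashMap<String, PublicKey>,
}

impl PetnameStore {
    fn new() -> Self {
        Self { by_name: HashMap::new() }
    }
    fn assign(&mut self, name: &str, key: PublicKey) {
        self.by_name.insert(name.to_string(), key);
    }
    fn resolve(&self, name: &str) -> Option<&PublicKey> {
        self.by_name.get(name)
    }
}

fn main() {
    let mut contacts = PetnameStore::new();
    // My “snej” need not be your “snej”: no global uniqueness is claimed.
    contacts.assign("snej", [0xAB; 32]);
    assert!(contacts.resolve("snej").is_some());
}
```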
I picked up a $120 Lenovo “education channel” laptop late last year to use as more or less a fancy external hard drive for my digital “go bag” of essential keys/tokens/documents. It’s a shockingly capable little device; compared to a lot of (compact) entry-level business laptops, I’d say the only immediately-noticeable differences are the screen (~11”, and only 768px vertical) and use of MMC instead of NVMe storage.
It also stands alongside the Apple Silicon devices w.r.t battery life. I can get 11-12 hours under normal use. The CPU is under-powered by today’s standards, but the lack of a bazillion cores, big caches, wide vector accelerators, etc. mean there just isn’t that much silicon to keep cool or fed, so: no fan, CPU boost or GPU boost spinning up to take 5x baseline power usage, etc.
See @wpld’s comment above about Gitea, Ltd. and the community response. tl;dr: there’s a bit of a fork in the road right now where Gitea appears to be doing more to support “enterprise-y” use cases, while Forgejo (the Codeberg fork) is more focused on federation and being a fully community-supported and operated project.
I don’t have any reason to think the two won’t be able to rendezvous and share code in the future (even if it looks more like an OpenBSD/NetBSD sort of situation rather than simply different builds of the same shared core) but you might have a better time working with one or the other fork based on which of those major priorities lines up with your needs at the moment.
This is incredible. I think it’s mostly large, established businesses where this is a thing. On the other end of the spectrum you have the overworked startup workers who have to work lots of overtime, and the struggling underpaid freelancers.
I’m not convinced it’s size so much as location of the department within the company and whether that department is on a critical revenue path. I mean it’s hard to imagine this at a tiny to small (<25 headcount) company but such a company won’t really have peripheral departments as such and would somehow need to be simultaneously very dysfunctional and also successful to sustain such a situation.
The original author keeps talking about “working in tech” but the actual jobs listed as examples suggest otherwise: “software developer [at] one of the world’s most prestigious investment banks”, “data engineer for one the world’s largest telecommunications companies”, “data scientist for a large oil company”, “quant for one of the world’s most important investment banks.”
First off, these are not what I’d personally call the “tech industry”.
More importantly, these don’t sound like positions which are on a direct critical path to producing day-to-day revenue. Similarly importantly, they’re also not exactly cost centres within the company, whose productivity is typically watched like a hawk by the bean counters. Instead, they seem more like long-term strategic roles, vaguely tasked with improving revenue or profit at some point in the future. It’s difficult to measure productivity here in any meaningful way, so if leadership has little genuine interest in that aspect, departmental sub-culture can quickly get bogged down in unproductive pretend busywork.
But what do I know, I’ve been contracting for years and am perpetually busy doing actual cerebral work: research, development, debugging, informing decisions on product direction, etc.. There’s plenty of that going on if you make sure to insert yourself at the positions where money is being made, or at least where there’s a product being developed with an anticipation of direct revenue generation.
I’ve seen very similar things at least twice in small companies (less than a hundred people in the tech department). In both cases, Scrum and Agile (which had nothing to do with the original manifesto but this is how it is nowadays) were religion, and you could see this kind of insane inefficiency all the time. But no one but a handful of employees cared about it and they all got into trouble.
From what I’ve seen, managers love this kind of structure because it gives them visibility, control and protection (“everyone does Scrum/Agile so it is the right way; if productivity is low, let’s blame employees and not management or the process”). Most employees (managers included) also have no incentive to be more productive: you do not get more money, and you get more work (and more expectations) every single time. So yes, the majority will vocally announce that a one-hour task is really hard and will take a week. Because why would they do otherwise?
Last time I was in this situation, I managed to sidestep the problem by joining a new tiny separate team which operated independently and removed all the BS (Scrum, Agile, standups, reviews…) and in general concentrated on getting things done. It worked until a new CTO fired the lead and axed the team for political reasons but this is another story.
It worked until a new CTO fired the lead and axed the team for political reasons but this is another story.
I’m guessing maybe it isn’t: a single abnormally productive team potentially makes many people look very bad, and whoever leads the team is therefore dangerous and threatens the position of other people in the company without even trying. I’d find it very plausible that the productivity of your team was the root cause of the political issues that eventually unfolded.
This was 80% of the problem indeed. When I said it was another story, I meant that this kind of political game was unrelated to my previous comments on Scrum/Agile. Politics is everywhere, whether the people involved are productive or not.
It’s not just a question of people not wanting to “look bad,” though.
As a professional manager, about 75% of my job is managing up and out to maintain and improve the legibility of my team’s work to the rest of the org. Not because I need to build a happy little empire, but because that’s how I gain evidence to use when arguing for the next round of appealing project assignments, career development, hiring, and promotions for my team.
That doesn’t mean I need to invent busywork for them, but it does mean that random, well-intentioned but poorly-aimed contributions aren’t going to net any real org-level recognition or benefit for the team, or that teammate individually. So the other 25% of my energy goes to making sure my team members understand where their work fits in that larger framework, how to gain recognition and put their time into engaging and business-critical projects, etc., etc.
…then there’s another 50% of my time that goes to writing: emails to peers and collaborators whose support we need, ticket and incident report updates, job listings, performance evaluations, notes to myself, etc. Add another 50% specifically for actually thinking ahead to where we might be in 9-18 months and laying the groundwork for staff development and/or hiring needed to have the capacity for it, as well as the design, product, and marketing buy-in so we aren’t blocked asking for go-to-market help.
Add up the above and you can totally see why middle managers are useless overhead who contribute nothing, and everyone would be better off working in a pure meritocracy without anyone “telling them what to do.”
omg, I’ve recently worked in a ‘unicorn’ where everyone was preoccupied with how their work would look from the outside and whether it would improve their ‘promo package’. Never before have I worked in a place so full of buzzword-driven projects that barely worked. But hey, you need one more cross-team project with dynamodb to get that staff eng promo! 🙃 </rant>
Given your work history (from your profile), have you seen an increase in engineers being willfully ignorant about how their pet project does or does not fit into the big picture of their employer?
I ask this from having some reports who, while quite sharp, over half the time cannot be left alone to make progress without getting bogged-down in best-practices and axe-sharpening. Would be interested to hear how you’ve handled that, if you’ve encountered it.
I don’t think there’s any kind of silver bullet, and obviously not everyone is motivated by pay, title, or other forms of institutional recognition.
But over the medium-to-long term, I think the main thing is to show consistently and honestly how paying attention to those drivers gets you more of whatever it is you want from the larger org: autonomy, authority, compensation, exposure in the larger industry, etc.
Folks who are given all the right context, flexibility, and support to find a path that balances their personal goals and interests with the larger team and just persistently don’t are actually performing poorly, no matter their technical abilities.
Of course, not all organizations are actually true to the ethos of “do good by the team and good things will happen for you individually.” Sometimes it’s worth going to battle to improve it; other times you have to accept that a particular boss/biz unit/company is quite happy to keep making decisions based on instinct and soft influence. (What to do about the latter is one of the truly sticky + hard-to-solve problems for me in the entire field of engineering management, and IME the thing that will make me and my team flip the bozo bit hard on our upper management chain.)
Would you be able to elaborate on the last paragraph about making decisions based on instinct and soft influence? Why is it a problem and what do you mean by “soft influence” in particular? Quite interested to understand more.
Both points (instinct + soft influence) refer to the opposite of “data-driven” decision-making. I.e., “I know you and we think alike” so I’m inclined to support your efforts + conclusions. Or conversely, “that thing you’re saying doesn’t fit my mental model,” so even though there are processes and channels in place for us to talk about it and come to some sort of agreement, I can’t be bothered.
It’s also described as “System 1” thinking in the Kahneman model (fast vs. slow). Not inherently wrong, but also very prone to letting bias and comfort drown out actually-critical information when you’re wrestling with hard choices.
Being on the “supplicant” end and trying to use facts to argue against unquestioned biases is demoralizing and often pointless, which is the primary failure mode I was calling out.
This is true and relevant, but it’s also key to point out why instinct-driven decisions are preferred in so many contexts.
By comparison, data-driven decision-making is slower, much more expensive, and often (due to poor statistical rigor) no better.
Twice in my career I have worked with someone whose instincts consistently steered the team in the right direction, and given the option that’s what I’d always prefer. Both of these people were kind and understanding to supplicants like me, and - with persistence - could be persuaded to see new perspectives.
Excellent points! Claiming to be “data driven” while cherry-picking the models and signals you want is really another form of instinctive decision-making…but also, the time + energy needed to do any kind of science in the workplace can easily be more than you (individually or as a group) have to give.
If you have collaborators (particularly in leadership roles) with a) good instincts, b) the willingness to change their mind, and c) an attitude of kindness towards those who challenge their answers, then you have found someone worth working with over the long-term. I personally have followed teammates who showed those traits between companies more than once, and aspire to at least very occasionally be that person for someone else.
I ask this from having some reports who, while quite sharp, over half the time cannot be left alone to make progress without getting bogged-down in best-practices and axe-sharpening.
I think this is part of the natural life-cycle of the software developer - the majority of developers I’ve known have had an extended period where this was true, usually around 7-10 years professional experience.
This is complicated by most of them going into management around the 12-year mark, meaning that only 2-3 years of their careers combine “experienced enough to get it done” with “able to regulate their focus to a narrow target”.
I think those timelines have been compressed these days. For better or worse, many people hold senior or higher engineering roles with significantly fewer than 7-10 years experience.
My experience suggests that what you’ve observed still happens - just with less experience behind the best-practices and axe-sharpening o_O
I don’t understand this at all. It’s titled “the fundamental difference between Terraform and Kubernetes” but then goes on to explain that they’re identical.
I think the fundamental difference they’re getting at is K8s continually evaluates reality vs desired and adjusts it as required. Terraform only does so when you run an apply. (Not sure that’s a fundamental difference, but that’s their terminology. 🤷🏻‍♂️)
I guess if you ran while :; do terraform apply --auto-approve; done you could equate it to being the same as K8s.
This is the most terrifying shell script fragment I’ve seen in recent memory. 😱
Yes, in theory your TF code should be audited and have guard rails around destroying critical resources etc. etc. but…IME it’s usually at least partly the human in the loop that prevents really horrible, data-destroying changes from happening.
Kubernetes, on the other hand, makes it (intentionally) difficult to create services that are “precious” and stateful, so you’re somewhat less likely to fire that particular footgun.
…and lest it seem I’m dumping on TF and talking up K8s: I think Kubernetes gives you all kinds of other ways to complicate and break your own infrastructure. It just usually takes more than a single-line change anywhere in your dependency graph to potentially schedule a stateful node for deletion (and likely re-creation just afterward, but that doesn’t protect you from local state being lost).
They’re also equally good at helping you create a bunch of “objects” for which your cloud provider can charge you a substantial monthly fee, which I think helps to explain their mutual popularity amongst the AWS/GCP/Azure/etc. set.
Beyond that, Terraform also understands dependencies between services while k8s continuously tries to reconcile (so if you create some resource that depends on another, Terraform will wait for the first to be created before creating the second, while Kubernetes will immediately try creating both and the second will just fail until the first is created).
They’re quite different. Here are some things you can do with Terraform and Kubernetes that aren’t symmetric, showing that they are distinct:
- we can use the Kubernetes provider for Terraform to declare objects for a Kubernetes cluster, and apply Terraform multiple times against multiple Kubernetes clusters
- we can apply Terraform from inside a Kubernetes Pod, regardless of which Terraform providers we are using
- we can leave a Kubernetes cluster running for a week, constantly changing state, without interrupting coworkers (while Terraform requires a lock in order to apply state changes)
Oh, I’m not claiming that they are identical. I just think the OP is: it’s of the form “the fundamental difference between things A and B is that A does X, while B does X”.
I am satisfied by caius’s sibling comment that it’s trying to talk about the bandwidth difference: K8s’s feedback loop is very fast, while TF’s isn’t. But I don’t regard that difference as particularly fundamental, and I don’t think the ones you’ve identified strike me as fundamental either. So I’m asking: what is?
I don’t use either tool regularly, and I don’t have great mental models of them, but just in case it helps clarify, I think what I’m looking for might be ownership. K8s materializes the “resources” it manages inside itself, while TF prods external systems into changing state.
This was a good dive into container and namespace primitives. It is a bit strange to me that an article about pods, containers, and Docker would omit even a passing mention of podman, which in addition to the obvious naming overlap actually offers “pods” independent of k8s.
So yes, this is a good walkthrough of how to emulate pods with Docker, but it misses the chance to at least close with something like, “and if this approach is interesting to you, check out Podman, which does it well and also gives you a bunch of other nice things that neither bare OCI containers nor Docker does, like systemd integration and full control of the underlying host VM image (on non-Linux systems).”
Part of why writing simple network services in Rust is so nice is that Serde + any of the high-level web server crates make it the work of literally minutes to define a simple data structure and accept + emit it in a safe, validated way. I can more or less ignore the wire format until I start hitting really edgy edge cases.
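As a rough sketch of that workflow (a hypothetical CreateUser type, shown with plain serde_json; the web-framework glue such as an extractor/responder is left out):

```rust
use serde::{Deserialize, Serialize};

// Hypothetical payload type; serde derives all of the (de)serialization code.
#[derive(Debug, Serialize, Deserialize)]
struct CreateUser {
    name: String,
    email: String,
    #[serde(default)] // a missing field falls back to false instead of erroring
    admin: bool,
}

fn main() -> Result<(), serde_json::Error> {
    // Accepting input: malformed or mistyped JSON becomes an Err, not a panic.
    let user: CreateUser =
        serde_json::from_str(r#"{"name": "sam", "email": "sam@example.com"}"#)?;

    // Emitting output: the wire format is derived from the same struct.
    println!("{}", serde_json::to_string_pretty(&user)?);
    Ok(())
}
```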
But the same could be said of Haskell’s aeson library (surely the main inspiration for serde), so it must be something else that is the “killer” argument to choose Rust instead.
This is very cool! I made a very, very dumb “Datalog query console” inside my first Clojure+DataScript project; the ability to manipulate queries as structured data out of the box is a good demonstration of why Lisp-flavored Datalog tools are so compelling.
For those interested in the space, here’s another “datalog in notebooks” I’ve played with a bit in the past: https://github.com/ekzhang/percival
It doesn’t have the nice first-class syntax of Datalog-in-Clojure, but it does use WASM and Vega in a pretty interesting way to build a bunch of visualization capabilities on top of the core query engine.
But what’s in it for me? I actually didn’t even know I could do that and now I ask - why would I want to get my packages from there and not via apt? What are the benefits except one more tool and repo?
Not every application, hardware appliance, or even general-purpose server is ideally suited to having a Linux kernel and userland underneath it.
Docker masquerades as a cross-platform packaging and distribution tool, but what it really provides is a cross-distro abstraction for a subset of modern-ish Linux environments. Windows and Mac hosts only get support by way of a VM managed by Docker or containerd.
Even more frustrating to me is the fact that Docker actually strengthens the dominance of 2-3 distros (Alpine, Ubuntu, and CoreOS) because they’re the standard base for most other images.
So: pkgsrc and other “ports” systems are important to keep software running in places other than that very limited set of environments. (I likewise think that Firefox and other non-Chromium browsers are critically important so that “portable applications” don’t just mean, “runs in Electron or a browser tab.”)
The problem is that this is not a use case a great many people have. If you use a linux distro, you already have a package manager. If you use a BSD, it has one too. MacOS has brew (and macports, even nix). Everyone already has all the software they need so it is very hard to make a case for pkgsrc.
A couple of years ago I got interested and played around with pkgsrc on Debian to learn a thing or two. It’s cool that it exists, but it is not really new or useful enough that people would get excited.
I believe the reason why there is not much mindshare is that it is perceived as yet another package manager with no immediately obvious value.
pkgsrc is the native package manager on one of the BSDs (used to be 2) and SmartOS. On macOS, it is up there with brew and macports. On Linux, it is useful for distros that do not have their own package managers (Oasis comes to mind). Could be useful even on mainstream distros like Debian when you are stuck on the stable release and you need newer versions. That said, not all packages are up-to-date in pkgsrc but I’m glad it exists and people continue to work on it.
How many users does SmartOS have? It is the niche of the niche of the niche. How many people really care about NetBSD? Very few, it seems.
pkgsrc is not up there with brew and macports. Everybody and their mom uses brew, a few people - including me - use macports, and a very few people use nix. I have never met anyone using pkgsrc on a Mac, and I have been working at “every dev has a Mac” companies for the last decade.
I am not saying pkgsrc is bad, I am saying almost nobody cares b/c there is not much to gain for someone not running NetBSD (or SmartOS) where it is the native package manager.
After reading their home page description, I have no idea what this is actually for:
Servo’s mission is to provide an independent, modular, embeddable web engine, which allows developers to deliver content and applications using web standards.
It was Mozilla’s effort to write a browser rendering engine in Rust. It started out as a research project, produced some decent components that made it into Firefox, and then got shelved a year or two ago when Mozilla came under new management and decided to cut costs.
doesn’t it also perform certain threading improvements which you normally don’t have in rendering engines? (apart from the CSS layout system already being threaded itself)
Yes it does. Servo is far more parallelized than traditional rendering engines (both Gecko and the KHTML-descended ones - i.e. WebKit and Blink) because of both Rust’s safe parallelism fu and the absence of decades of legacy code architecture. This is why Stylo and WebRender (Servo components that were ported into Gecko), for example, are so much faster than their predecessors in Gecko.
I know Tauri has theorized using this as the embeddable web engine (instead of WebKit) with the goal of improving cross-OS consistency. Not sure if this funding news is in any way related to that, though.
By using the OS’s native web renderer, the size of a Tauri app can be less than 600KB.
I dunno about this. One of the reasons Tauri outputs can be so small is the leveraging of the “web view” on the existing platform (as opposed to shipping the entire Chromium engine like Electron). It could help with the consistency, but I would assume this would fall under a flag where some will prefer the lighter footprint over that consistency.
Agreed — I don’t anticipate it becoming a default. I don’t know what the binary size of Servo is, but hopefully it can be a compelling option against Electron/Chromium since it isn’t a fully fledged browser.
At some point, the size of functionally-equivalent C++ and Rust binaries should more or less converge, at least assuming similar compiler backends + codegen (i.e., LLVM/Clang).
Choice of coding style, size of external dependencies, asset storage, etc. are all independent of implementation language, but have a sizable impact on binary size.
I’m personally more interested in Servo for a) choice and competition against the “New IE”, and b) an engine and community more open to experimentation and extension than Chromium/V8.
At some point, the size of functionally-equivalent C++ and Rust binaries should more or less converge
For sure — though is Servo closer to being functionally-equivalent to Chromium, or Blink? I guess my hope is that Tauri + Servo is smaller than Electron + Chromium. From a quick skim it appears Chromium is substantially bigger than Blink itself, so Tauri only requiring the web engine aspect might be a size gain alone.
I’m also on board with the added competition, as long as it’s maintained going forward. If a new engine gains usage but then ceases development it would serve to slow adoption of new standards.
This is a great demo of Cloudflare’s tech stack, but the lock-in factor is strong. You shouldn’t run this unless you’re planning on being a Cloudflare customer forever, because it will never run anywhere else.
Other implementations tend to be huge (Mastodon, Misskey) or tailored to a single kind of activity (Pixelfed, Lemmy) that make them awkward to use as a base for further experimentation. I for one would be very excited to bang on a lightweight-but-somewhat-featureful alternative like this, once suitably modified to run on an open platform.
Generally speaking, I’m very resistant to choosing APIs that are locked to one vendor. Services that are generally available but perhaps simpler to deploy on one vendor’s cloud (see: AWS RDS/Aurora) seem like a pragmatic option for those who already have a relationship with that provider. Workers at least seem headed for some level of standardization, for much the same reason that K8s took off so quickly: everyone wants to get ahead of Amazon’s owning this space like they do object storage and general compute.
If you want small, there’s GotoSocial (single Go binary, SQLite) or Honk (single Go binary, SQLite, no API though). Even more minimally, there’s Epicyon which doesn’t even have a DB, it uses flat files. There’s also another Python one which is strictly single-user but I can’t remember the name of it right now.
It should be noted that GoToSocial is still very much alpha software, and while it does implement most of the mastodon api it isn’t 100% compatible yet.
This is not to be a downer on it, I really like GTS, I’ve contributed to it (and want to contribute more), and follow along their dev chat. It’s still very usable, just don’t expect it to be able to do everything yet.
The only real issues I’ve had with GtS are 1) OOB OAuth doesn’t give you the code on a page, you have to copy it from the URL (ultra minor and most people will never encounter this); 2) friend requests from Honk don’t work because of the “partial actor key” weirdness GtS has BUT this is exacerbated by a weirdness in Honk where it stores the assumed owner of a key instead of the stated owner. Need to dig into this some more before presenting it as a patch / issue tho; and 3) the weird version number breaks a bunch of libraries which use the version number for feature support (looking at you, Mastodon.py!)
Oh and Brutaldon doesn’t work but I can’t get that to work with Akkoma either and I’d be inclined to blame Brutaldon for that rather than GtS.
At least as of version 0.6.0 OOB auth displays it on a page for you, because I was playing around with it and got a nice page that looks like the rest of the UI with the token displayed for me.
The version number thing I think is known, because there’s some client that can’t do any searching - it just assumes that, since the version is low, it should use the old mastodon search endpoint. That said I think I agree with their stance that they don’t want to turn the version field into a user-agent string situation. My own thought is that everybody implementing the mastodon api should probably just add a new “api_version” field or something, which reports the api compatibility, because asking clients to implement logic to determine capabilities based on different server backends feels backwards when your goal is explicitly compatibility with the api.
I’m on v0.6.0 git-36aa685 (Jan 9) and I get a 404 page with the code in the URL for OOB. Just updated to the latest git (v0.6.0 git-132c738) and it’s still a 404 page with the code in the URL. But like I said, this is an ultra-minor issue most people will never experience.
[version number <=> features is nonsense]
Yep, totally agree but it’s going to be a good long while before we get an api_version field (have you seen how long it takes the Mastodon devs to agree on anything?) and we have to deal with these clients now (e.g. I have to hack my Home Assistant’s local copy of Mastodon.py to get notifications out to my GtS instance.)
Well, there’s /api/v1/XYZ and /api/v2/XYZ but that doesn’t really encompass what these libraries are trying to do by checking the version number because they’re trying to avoid calling endpoints that didn’t exist in older Mastodon versions.
e.g., Mastodon.py wraps each of its endpoint methods in a version check keyed to constants like _DICT_VERSION_CARD.
_DICT_VERSION_CARD is 3.2.0 which means that you’ll get an exception if you try to call whatever.status_card(blah) on a server that advertises a version lower than 3.2.0. GotoSocial is currently at v0.6.0 (which breaks the parsing anyway and stops anything from working because v6 isn’t an integer) which would mean that you couldn’t call any endpoints, supported or not.
What they should have done is have a capabilities endpoint which lists what your server can do and let the libraries use that as their guide rather than guesstimating from version numbers. Much like SMTP or IMAP banners. Or use the client-server protocol which is server-agnostic but they’ve been waffling about that since 2019 with no progress…
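To make that concrete, here’s a rough sketch of what a capabilities document and the client-side check could look like, written in Rust with serde purely for illustration; the endpoint path, field names, and feature strings are all made up, not anything a real server exposes today:

use serde::Deserialize;

// Hypothetical shape of a capabilities document; nothing here is a real
// mastodon-api field, it just illustrates the idea of feature discovery.
#[derive(Debug, Deserialize)]
struct Capabilities {
    api_version: u32,
    features: Vec<String>, // e.g. ["search/v2", "status_card"]
}

// A client library would ask "is this feature present?" instead of
// guessing from a version string.
fn supports(caps: &Capabilities, feature: &str) -> bool {
    caps.features.iter().any(|f| f.as_str() == feature)
}

fn main() -> Result<(), serde_json::Error> {
    // In practice this JSON would come from something like GET /api/v1/capabilities
    // (again: a hypothetical endpoint, not one that exists today).
    let body = r#"{ "api_version": 1, "features": ["search/v2", "status_card"] }"#;
    let caps: Capabilities = serde_json::from_str(body)?;
    println!("status_card supported: {}", supports(&caps, "status_card"));
    Ok(())
}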
I don’t think they’ll ever really focus on the C2S AP protocol, it doesn’t have the same features and hacking them on sounds like a ton of work. I do agree that some sort of capabilities endpoint would be really nice.
Also yeah, agreed that trying to get mastodon itself to add api support for it feels like a huge battle, but I feel like if enough of the alternative servers like akkoma and gts agreed on an implementation, and then implemented something like it, it could force mastodon to conform? But that also has its own problems.
At the end of the day I understand why everyone conforms to the mastodon api, but it would be nice if there were some group outside of mastodon and Eugen defining the api spec, with input from the community and developers outside of mastodon. That’s pie-in-the-sky wishful thinking, though, and requires a lot of organization from someone, plus buy-in from all the other server devs.
Huh, not sure what makes my GTS instance different, but I’m on the release commit, GoToSocial 0.6.0 git-f9e5ec9, and going through the oauth oob flow gave me this after doing my user login (https://imgur.com/sfm19cx), and the URL I used to initiate the flow was https://<my domain>/oauth/authorize?client_id=<app's client id>&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&scope=read
What’s the tl;dr on this thing? It had nice drawings, and it looked like the person put in effort, so I wanted to support content like this by reading it and discussing it, but my eyes just glazed over (I probably have adult ADHD). It began to sound like a rant against cloud only software (which I also agree with: I like to work on the train during my commute) but then it got so long and windy. What was the solution?
From my understanding (after reading a few other posts on the author’s site), it’s about two things:
modern software being far too complex for any single person to understand completely; and
modern software growing such that it will use all the resources you have, even at the expense of power consumption.
The author lives on a sailboat, with electricity provided by small solar panels, where every resource (whether that be electricity, water, food…) needs to be used conservatively. When they first started on their journey in that boat, they were using modern stuff - MacBooks and ThinkPads, running contemporary OSes, but quickly discovered that with the ever-growing reliance of modern tech on a high-speed internet connection (and the large battery drain involved with using any of that tech), they needed something better.
Some context is that this is from a uxn developer; uxn is a small assembly language for making lightweight applications.
I’d say this presentation touches on two main points:
The developers have lived on an off-grid sailboat so don’t have access to traditional software development tools. Most people wouldn’t consider the barriers someone living in the Solomon Islands would face trying to write an Android or iPhone app. Even consider the amount of data needed to write a node.js app or to use fancy development environments on NixOS. Latency makes remote development not feasible.
And the rest of the presentation is about software preservation. If I build an Android app, will you be able to run it in 10 years? How can we make something more portable? Virtual machines can be pretty useful, but they have to present the right level of abstraction.
The subscription model of software is kind of disturbing, but I can’t tell if that’s because I’ve become an old fuddy duddy. I do very few classes of things on a computer nowadays.
Work (write software) - proprietary tools supplied by work
Write (fiction) - plain text editor
Store photos - OS folders
Store audio recordings - OS folders
Save documents - OS folders + Google docs
I worry about the longevity of my data formats, not the software I used to create them.
I assume that the hardware and software platforms will move. It is inexorable. I detest subscription models - I want to own something: software, hardware whatever. I don’t care that it gets obsolete, I just care that it works like it used to. I don’t want it to suddenly stop working because of money or connectivity.
However when I buy new hardware, I accept that it may not run old software. Hence my concern with the durability of data formats.
I do get super annoyed when Disney tries to force me to discard my nice 8 year old iPad just to use its updated App. Fortunately enough people feel like me, and the App store now allows me to use older versions of the App.
If I build an Android app, will you be able to run it in 10 years?
In my experience, very likely yes. I have dug up a ten year old Android apk file before and found it still working perfectly on modern Android. It pops up a warning now to the effect that “this is really old and might not still work” but it does just fine.
Android has a very different culture to both Apple and the main body of Google, and actively cares about backwards compatibility.
Starlink has issues but mostly works really well. If we have systems like Starlink, but more accessible (price and regions) then do we actually need to worry about a 10GB download of Xcode? Today, a rich software developer could put Starlink on their sailboat and not think about data, right?
Lithium batteries can be expensive today, but with electric cars and home batteries, etc., the recycled battery market will make it very cheap to have a lot of storage on a sailboat. I know batteries are heavy and space is at a premium so solar is limited, but do you still have to think about energy if you have enough storage?
Starlink isn’t going to cover the whole world, and to your point mostly benefits those in the developed world who can afford $125/mo. or more for access.
A large percentage of the world’s population simply cannot pay that, and even if they could, Starlink isn’t going to build ground stations anywhere near them.
This “Elon will save us!” culture has to end. He might well create little bubbles of gilded comfort for the rich and powerful (satellite Internet, pay-to-play social media, and fast, clean, $100k cars) but his interest ends where the profit margins approach zero.
A large percentage of the world’s population simply cannot pay that, and even if they could, Starlink isn’t going to build ground stations anywhere near them.
Yes, I agree that price is a huge issue but I think Starlink is opening up to regions without ground stations - I guess the latency might not be the 50ms standard, but I imagine it will still be a good thing for places like the Solomon Islands.
This “Elon will save us!” culture has to end.
I would love for Starlink to be worker owned or see an alternative. I just care about the digital divide and Starlink has done things for places like rural Australia that our Government is not able to. I know the example is 10GB Xcode for iPhone development, but what about watching a YouTube video or participating in a Zoom call for school?
And on price, a 4Mbps fibre connection in the Solomon Islands is $170USD per month at the moment. Yes, Starlink’s price is ridiculous for these regions but you have to understand that it’s sadly an improvement.
At some point, you’re going to have to stop. The problem (and underlying motivation behind projects like uxn) is that we just don’t know where to stop. The abstraction towers are getting higher each year, and that 10GB isn’t going to be 10GB in a few years with newer architectures, patterns for programming, and support for legacy devices being shipped under the moniker of a development environment.
Whatever standards you target, they won’t be sufficient once they’re widely used and ultimately pushed beyond their limit. Growth is the problem, not the constants of the current year.
I “read” like maybe 20%. Maybe I missed the point. It seems like the author is drawn to what I’d call “programming in the dirt”, where if we got stranded on some desert island, the programs they could bring with (on paper, in carry-on luggage) would be
fairly readable in a small way (not too-simple like Brainfuck or too-complex like full x86-64 ISA)
possible to run on a runtime you could actually build on the island (not needing the “mainland” if you will)
(somewhat game-oriented?)
Gilligan’s PL, if you will. I take it the author has been developing their own language most similar to 6502 assembly.
Other PLs that come to mind would be: Don Knuth’s MMIX, BASIC, maybe even Standard ML or Featherweight Java.
idk if it will change the world or hold up to its promises in reality, or whether it will turn out to be just another broken abstraction, but so far it’s pretty nice.
Maud will give you compilation errors, so you can’t open a tag and not close it, for example. My “components” are just Rust functions, so if you don’t provide the correct arguments while invoking them you get build errors as well.
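For anyone who hasn’t seen Maud, a minimal sketch of what that looks like (the card/page components here are hypothetical, but the html! syntax and Markup type are Maud’s):

use maud::{html, Markup};

// A hypothetical "card" component: just a Rust function returning Markup,
// so a wrong argument type or an unbalanced brace in the template is a
// compile-time error instead of something the browser has to guess about.
fn card(title: &str, body: Markup) -> Markup {
    html! {
        div class="card" {
            h2 { (title) }
            (body)
        }
    }
}

fn page() -> Markup {
    html! {
        body {
            (card("Hello", html! { p { "Composed from a smaller piece." } }))
        }
    }
}

fn main() {
    // Markup renders down to a plain String in the end.
    println!("{}", page().into_string());
}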
A different matter is my custom markdown components, though. The definition of the component itself is built with Maud, so it’s checked at build time, but since the Markdown is rendered at runtime (because I’m planning to move my content into a Postgres db) there’s not much I can do about correct usage of them.
author here! Yes, I’m aware this is not anything new at all, I just wanted to highlight that it’s not difficult to put together a webpage (in this case, with Rust) - and the “react server components” that recently got some hype are not a novel idea.
For me the best parts of building this were:
I’m currently working on improving the DX of this little stack by using Leptos instead of Maud for templating. This allows some slightly more advanced features.
You may be interested in a thread called Static site generation in the Leptos Discord.
Essentially we can have our cake and eat it too. A local dev server that uses SSR and does live reload, while also supporting fully static site generation.
I don’t want to be that guy but does the Nix package manager work on Asahi, since it supports aarch64?
Yes! I’ve been running full NixOS on my M1 Air with the help of the Asahi kernels and Mesa packaged as nix packages/modules at https://github.com/tpwrules/nixos-apple-silicon
oooh, i’ve been looking for something like this! I don’t suppose I could also run it all on zfs root, for the win…
Just make sure the Asahi kernel version is compatible with the ZFS module, set boot.supportedFileSystems when building the installer, and things should be all set. I think that was the only issue I ran into when I tried that specific setup a few months ago.
Yes, there should be no reason it won’t work on either ALARM or Fedora Asahi Remix, because we support running Nix on standalone aarch64 distributions just the same as on NixOS.
The only exception I can think of is maybe some SELinux incompatibilities on Fedora? I believe I’ve read about having to do some weird hacks, but I don’t know the details myself.
Try it and see?
I run Nix on my personal and work macs and it…basically works?
Like any mix of Nix and <OS that isn’t NixOS>, you have to be really mindful of what’s in your path, find out which shebangs and hard-coded commands are lying around in scripts, be okay with never being able to use the pre-compiled blobs that (say) Node or Python libraries might pull in automatically, etc.
Modulo all of that, you can at least do a quick ‘nix run’ invocation to pick up a package, build from flakes, etc.
Generally, though, I find myself delegating more and more to a NixOS VM running on the same host. Nix-on-Docker is another option that actually does some stuff more cleanly than native nix-darwin, if you’re okay with containerizing all the things.
Running it on Asahi in particular was my concern. It means you and I both can avoid the NixOS VM step and just use Nix on Linux, which in my experience is fine (I’m sorry it’s treated you so poorly!).
Oh, sorry if I gave the impression that Nix-on-Linux was a bad experience. I chose/continue to choose to use it because it solves a whole bunch of other pain points, like needing to write f’ing awful makefiles (or even worse, use autotools…shudder) to build across multiple target platforms. I’m just spoiled by how well things work in full-blown NixOS, I guess. :)
Surprised at the positivity here. A single point of failure is fine for many SaaS but for backups - meaning you have something whose value justifies paying to back up off site - it seems disqualifying. I’m also surprised that in the ~9 years between the introduction of the code bug and now there was no attempt to do a disaster recovery or other integration test and replay the logs from s3.
Designing systems is all about tradeoffs. Adding multiple redundant instances of every core service to improve uptime necessarily introduces all kinds of hard-to-reason-about failure modes beyond basic service uptime, not to mention exponentially increasing the cost and operational burden of running your application.
It’s very easy to over-design a system from the beginning to handle exciting failure modes like, “an AWS region went hard-down and we failed over invisibly and just kept trucking!” while de-emphasizing ones like, “our carefully-orchestrated deployment process failed halfway through in a way that left us with a split-brain scenario between our independent per-region clusters, and we spent days diagnosing and coding defenses against the issue, even though the original outage was over in 15 minutes.”
Nine years without data loss in an active cloud backup service is a remarkable achievement; 24 hours of downtime is a very modest hit to reliability metrics in comparison with that.
I didn’t say anything about system design in my comment. I don’t disagree with you but I was talking about management decisions. The single point of failure is Colin and keeping that in place is a management decision. Not attempting to do a disaster recovery replay of the s3 logs once in a while is also a management decision.
A single point of failure is totally fine if that point failing does not result in violating your promises to your users. If you use tarsnap you know what you’re signing up for: an excellent, but bare bones, service by a brilliant individual.
As I gather, there was no data loss and no risk of data loss. There was only a temporary inability to add data to backups and to access data from backups. And there were integration tests, but a specific scenario was missed.
I think most backup services have larger issues in edge cases.
I wasn’t questioning any promises made by the owner; I am questioning whether it’s sensible for a user to purchase the service. If you have something worth backing up, and worth paying to back up, are you getting what you need out of a company with a single human system administrator?
I also wouldn’t assert that there was no risk of data loss. The other tests are nice, but the only test that really matters is “when I need to do so, can I replay all of my logs, restoring all of my customers’ data correctly, from s3?” Although the bugs he hit did not result in customer data loss, you can’t ignore that in the 9+ years between tests, with an entirely homebrewed system design, he could have introduced some bug along the way that caused data loss when replaying logs.
Put together, these are all black marks against the management decisions made by the proprietor, and it calls into question whether Tarsnap is fit for purpose.
I agree that this event is evidence for some of the ‘cons’ one should have considered when one chose to use tarsnap. It is not fit for all purposes or all roles in all backup schemes. It still seems fit for many roles and purposes to me.
I get your point but also explained my thoughts on risk management in depth over here
Yes because you have another backup setup anyway, to avoid the single point of failure (for example, having a company with a lot of engineers and support people doesn’t help if the failure is a bankruptcy).
For some background, my mindset largely comes from this kick-ass paper How to Lose Money in Derivatives which is about managing risks at hedge funds but has really shaped my way of thinking overall and I think is applicable here.
One of the author’s points is that nobody can escape the need to diversify investments and diversification has to be really rigorous because the exceptionally bad situations force more correlations than you expected. The extreme example here is that if there was ever a situation where S3 was shut down something huge has happened that is going to cause tons of failures across our industry even in places that aren’t on the cloud.
Another point is that liquidation is really the worst thing that can happen. A fund or investment can take losses and survive but when you’ve gone as far as liquidation you’re out of that money forever. The author drags up some examples of funds where they explicitly stated to customers that they intended for the customer to handle diversification themselves in their own portfolios as they could do a better job than the fund knowing their own situation and that the fund was then free to get superior returns from its improved focus. The reality is that nobody is free from needing to diversify and everyone has a duty to avoid blowups.
The similarity with Tarsnap is clear. A financial blowup from liquidation is equivalent to a Tarsnap blowup leaving customer data permanently inaccessible. And Tarsnap is neglecting its duty to manage these risks and diversify, a duty that everyone has and can’t be delegated to your customers. When I look at Tarsnap I am seeing Manchester Trading.
This is indeed an interesting perspective and I will read that paper someday.
It is. However I disagree with your conclusion, since a diversified Tarsnap would be much more costly (and I probably wouldn’t use it). My other backups will not fail in case of a S3 failure and it is indeed more economical to handle diversification myself. Sometimes, worse is better :)
If I read this correctly, Colin has done periodic recovery tests but only with a subset of the data and that subset didn’t have any machine-rename events in it. That’s a shame, but if that’s the only bug that affects availability that he’s had in 9 years, he’s doing better than pretty much every other service I’ve encountered.
Doing a recovery of all data is cost prohibitive for most backup services: their cost model relies on the fact that most data is write only. I think Colin said that over 95% of data written to tarsnap is never read (might have been 99%) by the end user. This makes sense: tarsnap is providing off-site backups. Typically, you use this as a second tier of backups, with a local NAS or similar as the first tier. You only go to backups at all if you’ve accidentally deleted something or if a machine has failed. You only go to off-site backups if your on-site backups have failed at the same time as the thing that they’re backing up. If you’ve got something like a cheap 3-disk NAS with RAID-Z and regular snapshots then you’ll almost never try to recover things from tarsnap, but when you do it’s very valuable.
In terms of hiring someone else in addition to Colin, I believe he has a process in place for avoiding bus factor (if he’s hit by a bus, there is a backup human who can take over the service), but operating tarsnap is less than one full-time person’s worth of work, so if he hired other people then the price would go up (which would cause some people to leave, which would further put up costs for those remaining behind because the cost of the second person is amortised over fewer customers).
Tomorrow is my son’s 8th birthday, and his mom got him the present of taking his little sister (3) out of town overnight so he can have some 1:1 time with Dad. (We do a lot as a family, and he’s a great big bro, but also enjoys getting one parent’s full attention.)
Amongst other planned activities, we’re going to spend some time over the weekend on building this kit: https://www.waveshare.com/product/robotics/buildmecar-kit.htm
I’ve been slowly introducing him to the idea of computers as machines you can open up and do things to, not just sealed blocks of glass + aluminum. He got a “lunchbox computer” for Christmas last year, and this is yet another project in that mode.
i bought a framework laptop from the original run. still have the 11th gen mainboard, considering either upgrading to the AMD mainboard or the framework 16 later this year.
the framework is a nice machine. i haven’t had any complaints about battery life, but i also mostly use it plugged in. the build quality overall is very good. the keyboard is very good – i prefer it over my thinkpads.
the trackpad is hot trash. absolutely awful. and, look, i’m not one of these people who thinks that the macbook trackpads are the end-all and be-all of mobile pointing devices. the macbook trackpads are fine. the framework trackpad is not. sensing is good, multitouch is good, but the physical click only works about 50% of the time. framework knows about this. it may have been fixed in a newer hardware revision. i’m not sure.
otherwise, good laptop. i’d buy it again, and i will continue to buy products from framework in spite of the awful trackpad.
Writing this from my 12th gen intel framework 13. The trackpad is fine, no complaints. The weak hinges are annoying sometimes (will replace with stronger ones next time I’m ordering anything), and slow battery drain on suspend is still not fully addressed (waiting for the next BIOS version).
it’s funny, i have zero problems with the hinges. the first version of the top cover had more flex than i was happy with, but the second version (the CNC’d one) is excellent.
to go into a bit of detail: the trackpad dragging behaviour is what kills me. if i’m doing left-click drags by holding the physical button and then dragging, the physical click has a tendency to either not register or release partway through the drag. it’s worse with right-click and middle-click drags (which i’m also doing with some regularity).
perhaps i need to upgrade my trackpad or input cover. thanks for the heads-up!
Just to clarify: when the screen is not at a perfect 90-degree angle, it’s not possible to wiggle the laptop slightly without the screen folding/opening. Usually this is not an issue, but when trying to use the laptop in improvised spots (e.g. walking while holding it, or lying on my back in bed, or on a wobbly table) it becomes annoying.
Tried exactly that now and seems perfectly fine here.
Early Framework units had bad trackpads. Clicks failed to register, right/left selection based on area was unreliable, etc.
At one point I think the old ones were considered a warranty replacement. I wanted a new keyboard deck anyway, so I installed one on my first-gen Framework (11th gen CPU) and it fixed the issue w/o any software config or driver changes.
The hinges are bad, though. I hear tell that’s also something they tweaked but TBH I’m probably done tuning this unit until my 16” preorder number comes up and I find out just how painful the upgrade cost is going to be. (Odds are I’ll pay it b/c I love what Framework is doing and the expansion modules look particularly Lego-like in the 16.)
Just another data point: My “framework 13 with intel 12” trackpad works great. Same weak-hinge complaint as dpc_pw, but it’s extremely minor. I fell in love with this thing so badly (despite the tiny screen) that I don’t use my M1 anymore outside of work, and gave away my system76 because the framework made it look awful in comparison. The M1 takes up to 60 seconds to wake from sleep while this thing wakes instantly.
Disclaimer: I’m not a gamer beyond some simple steam games and minecraft. I use my laptop primarily for coding and projects.
I can get behind renewing the push for copyleft, and even attributing much of the original promotion to RMS, but c’mon: the artful B/W portrait, the repeated call-outs to his prescience, etc. read like hagiography, brief mention of “toxicity” aside.
The dude is a misogynistic ideologue who abused his platform as a Free Software pioneer to subject other people to his gross views.
So yeah, support copyleft, but don’t sweep aside how the FSF backed RMS and ignored his willingness to blithely dismiss child abuse as “not that big of a deal”.
He’s definitely an ideologue, but for the cause of software freedom, which is a good thing. Labeling him a misogynist is just a politicized insult, based on disliking other political opinions adjacent to gender he has expressed at some point, or just finding him personally awkward and spergy. There’s a hell of a lot of prominent technologists who I’d want to see ostracized for their tech-unrelated stated political views ahead of Stallman.
This can be used to handwave away any level of complaint. After reviewing the GeekFeminism wiki article, with citations, I feel comfortable saying that I’m not throwing my lot in with him.
He has said and done plenty of things directly related to tech or software projects, so this too comes off as handwavy and dismissive.
I think RMS was/is right about a lot and I don’t care about canceling him, or being upset, but I’m not going to bat for him either, and I’d love to have a figure like him, that I could fully respect and endorse.
This is the sort of accusation that nowadays just rolls off my brain like water off a duck’s back.
Oh so he has cooties huh. He’s “gross” and “misogynistic” which basically means the 7/10 mean girls trying to play queen bees of the autists find him unbearable. That’s all. That’s what you sound like: a moralistic christian nun, the kind that people hated as nurses, because you can tell deep down they think you deserve your suffering.
And when you say “abuse” what you actually mean is that he earned his way into his position, but others who are resentful and jealous didn’t. They hate that he doesn’t share their views that every statement and assertion should be padded and bubble wrapped to avoid misinterpretation by people who get off on being offended.
Your team is made of people, who each have individual skills and motivations. As long as you remember and internalise that, you probably won’t go too far wrong. The worst management I’ve seen has always been as a result of failing to get that step right.
IMO, the book Engineering Management for the Rest of Us does a good job at pointing that out, how to deal with conflicts that arise and so on.
Thanks for the recommendation, I’ve not seen that one before. Of all the management books that I’ve read, the one that I’d recommend if you were going to read only one is PeopleWare. I’ve read the first and second editions. I’ve heard good things about the third edition but not seen a copy.
In my spare time, I’m writing a book specifically about managing remote teams, since it’s something I’ve been doing for most of my career and people keep telling me that it’s hard. I hope to finish it over the summer.
Thanks for reminding me of PeopleWare, it’s supposed to be a classic but I haven’t read it (yet).
This is 100% how you get from “Manager RPG” Level 1 to ~middle tier: remember, pay attention to, and support your people. It’s critical, and worthwhile because human beings are more important in basically every case.
Above a certain level, though, it’s insufficient because how your team feels and acts is only one part of their overall success. In larger orgs/projects, visibility, impact, credit, and scope usually depend at least as much on the “political BS” people call out as they do any one group’s technical (or even communications) excellence.
Past “Senior Engineer” or “Senior Manager” in a mid-to-large org you have to be attuned to the larger organizational temperament and trends if you want to advance and have the maximum impact. Having a dedicated advocate on your behalf who is similarly clued-in can be a good stopgap, but if they leave/lose favor/shift focus you should be prepared to take it on yourself.
Great managers start and end with supporting their team, but in between there’s a lot of thought, communication, and upward/lateral management required to make that successful long-term.
Completely agreed. Managing up is a different set of tasks to managing down. I took the question to be about managing down, but it may be that managing up was not part of the question because the author didn’t know how important it was.
Ensuring that your company understands what your team is doing and that you understand how your team’s activity fits into (and contributes to) overall strategy are both very important. Assuming, of course, that your company has an overall strategy.
Thanks for reiterating this - this is definitely how I have been managing the small team already. The one thing I am struggling with is that they are not providing me with a lot of input and are asking me directly what the paths forward look like. A lot of my response is “Well, what do you want to do?” So having a good map or some examples of how I can structure the team for growth is really helpful to foster better conversations.
I had a lot of conversations like that with students. I found that giving them an overview of a few different career paths worked well to start the conversations going. In particular, it was great for finding the things that they realised that they really didn’t want to do. I’d also recommend The Coaching Habit, not least because it has a path for making the person that you’re coaching do most of the work.
Beyond that, people tend to relate better when you’re talking about what you did than when you’re talking about what other people did. If you can, talk about what you did and introduce them to people who can give them other perspectives.
I’d start by asking yourself ‘what motivates this person?’ about each member of your team. There’s a lot written about intrinsic and extrinsic motivations and you really want to know what gets each person excited (building cool things, learning new stuff, micro-optimising things, seeing things deployed at scale, whatever). If you can’t answer that question about someone on your team, spend some time getting to know them. Make sure you have regular informal conversations with everyone, because as soon as you’re on a ‘we’re talking about work’ footing you lose the ability to learn these things. Chatting to your team is not slacking off, it’s part of your job as a manager. Once you understand what motivates people, you can think about the career paths that will increase their opportunities to do these things. And you can help direct them towards doing things that will help them build these skills.
But all of that comes back to ‘remember that your people are people’. They are not resources. They are not fungible. As long as you remember that, you’ll be fine.
Man I SO want to believe this article, because I am a huge fan of the Fediverse. Open protocols, FLOSS all the way down = sign me up!
But reading the article, my spidey sense kept tingling: “This smells like a partisan piece”.
And it is. Knowing the author is a Nostr fan explains a lot.
I’d love for someone without so much skin in the game to do an actual technical breakdown of what is and isn’t real/good/bad about Bluesky.
The author is more than a Nostr fan, he’s the original protocol creator.
So yeah, not a particularly impartial voice in this debate.
From the rant:
I’m really curious about this - why is it not possible in a decentralized system? I know there have been past attempts around web of trust. I’ve always wondered why something couldn’t be built using blockchain to store public keys. Key rotation could be performed by signing a new key with the old key and updating the chain.
Am I missing something obvious that’s not currently possible? Are there good research papers in this area I should read?
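For what it’s worth, the rotation-chain idea in the question above can be sketched as a simple data structure. This is a toy model only, with stand-in strings where real keys and signatures would go, so none of it is actual cryptography:

// Toy model of key rotation: each rotation record carries the new key,
// "signed" by the previous one. Real systems would use real signatures
// and still need some consensus layer to order conflicting chains.
#[derive(Debug)]
struct Rotation {
    new_key: String,
    signed_by_previous: String, // placeholder for a signature over new_key
}

#[derive(Debug)]
struct Identity {
    root_key: String,
    rotations: Vec<Rotation>,
}

impl Identity {
    fn current_key(&self) -> &str {
        self.rotations
            .last()
            .map(|r| r.new_key.as_str())
            .unwrap_or(self.root_key.as_str())
    }

    fn rotate(&mut self, new_key: &str) {
        let prev = self.current_key().to_owned();
        self.rotations.push(Rotation {
            new_key: new_key.to_owned(),
            signed_by_previous: format!("sig({prev} -> {new_key})"),
        });
    }
}

fn main() {
    let mut id = Identity { root_key: "pk1".into(), rotations: vec![] };
    id.rotate("pk2");
    id.rotate("pk3");
    println!("current key: {}", id.current_key()); // pk3
}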
It’s well-trodden folklore; I’d start with Zooko’s triangle. Several forms of the triangle can be formalized, depending on the context.
You don’t need a blockchain to store a bunch of self-signed public keys. PKI has been doing that for ages via certificates.
OTOH, if you want globally-consistent (and globally-readable), unforgeable metadata associated with those keys you need some arbiter that decides who wins in case of conflicts, what followers/“following” graph edges exist, etc.
Nostr actually uses existing PKI (via HTTPS + TLS) to “verify” accounts that claim association with an existing public domain. Everything else is…well, not even “eventually consistent” so much as “can look kinda consistent if you check a lot of relays and don’t sweat the details.”
It’s possible, but you need some kind of distributed consensus, which in practice looks like a blockchain. ENS is one implementation on Ethereum. You also need some mechanism to prevent one person from grabbing every name (if you assume sybil attacks are possible, which they will be on almost any decentralized system). The most common one is to charge money, which is not really ideal for a social network (you want a very small barrier to entry)
An interesting take on this is done by the SimpleX network: it employs no public identifiers. The SimpleX Chat protocol builds on top and continues this trend. You can build a name service on top, but the takeaway I make is that maybe we don’t need name services as often as we think.
The assertion of Zooko’s Triangle is that you can’t have identities that are simultaneously human-meaningful, globally unique and secure/decentralized. You can only pick two properties. DNS isn’t secure (you have to trust registries like ICANN.) Public keys aren’t human-meaningful. Usernames aren’t unique because different people can be “snej” on different systems.
The best compromise is what are known as Petnames, which are locally-assigned meaningful names given to public keys.
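A minimal sketch of the petname idea (placeholder key strings, nothing real): the only globally unique identifier is the key itself, and each user keeps their own locally meaningful names for the keys they care about.

use std::collections::HashMap;

fn main() {
    // My local petname table: human-meaningful only to me.
    let mut my_petnames: HashMap<&str, &str> = HashMap::new();
    my_petnames.insert("snej", "pubkey-3f9a");            // my name for that key
    my_petnames.insert("work-backup-box", "pubkey-77c1"); // my name for another

    // Someone else may call pubkey-3f9a something completely different;
    // nothing global forces the names to agree, only the keys are shared.
    if let Some(key) = my_petnames.get("snej") {
        println!("'snej' locally resolves to {key}");
    }
}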
Zooko’s Triangle describes the properties human-meaningful, secure, and decentralized. DNS is secure, not decentralized.
Oops, thanks for the correction. I was working from memory.
I picked up a $120 Lenovo “education channel” laptop late last year to use as more or less a fancy external hard drive for my digital “go bag” of essential keys/tokens/documents. It’s a shockingly capable little device; compared to a lot of (compact) entry-level business laptops, I’d say the only immediately-noticeable differences are the screen (~11”, and only 768px vertical) and use of MMC instead of NVMe storage.
It also stands alongside the Apple Silicon devices w.r.t battery life. I can get 11-12 hours under normal use. The CPU is under-powered by today’s standards, but the lack of a bazillion cores, big caches, wide vector accelerators, etc. mean there just isn’t that much silicon to keep cool or fed, so: no fan, CPU boost or GPU boost spinning up to take 5x baseline power usage, etc.
I remember there being a lot of noise about making Gitea support instance-to-instance federation - is this still under development?
https://github.com/go-gitea/gitea/issues/18240
See @wpld’s comment above about Gitea, Ltd. and the community response. tl;dr: there’s a bit of a fork in the road right now where Gitea appears to be doing more to support “enterprise-y” use cases, while Forgejo (the Codeberg fork) is more focused on federation and being a fully community-supported and operated project.
I don’t have any reason to think the two won’t be able to rendezvous and share code in the future (even if it looks more like an OpenBSD/NetBSD sort of situation rather than simply different builds of the same shared core) but you might have a better time working with one or the other fork based on which of those major priorities lines up with your needs at the moment.
This is incredible. I think it’s mostly large, established businesses where this is a thing. On the other end of the spectrum you have the overworked startup workers who have to work lots of overtime, and the struggling underpaid freelancers.
I’m not convinced it’s size so much as location of the department within the company and whether that department is on a critical revenue path. I mean it’s hard to imagine this at a tiny to small (<25 headcount) company but such a company won’t really have peripheral departments as such and would somehow need to be simultaneously very dysfunctional and also successful to sustain such a situation.
The original author keeps talking about “working in tech” but the actual jobs listed as examples suggest otherwise: “software developer [at] one of the world’s most prestigious investment banks”, “data engineer for one the world’s largest telecommunications companies”, “data scientist for a large oil company”, “quant for one of the world’s most important investment banks.”
First off, these are not what I’d personally call the “tech industry”.
More importantly, these don’t sound like positions which are on a direct critical path to producing day-to-day revenue. Similarly importantly, they’re also not exactly cost centres within the company, whose productivity is typically watched like a hawk by the bean counters. Instead, they seem more like long-term strategic roles, vaguely tasked with improving revenue or profit at some point in the future. It’s difficult to measure productivity here in any meaningful way, so if leadership has little genuine interest in that aspect, departmental sub-culture can quickly get bogged down in unproductive pretend busywork.
But what do I know, I’ve been contracting for years and am perpetually busy doing actual cerebral work: research, development, debugging, informing decisions on product direction, etc.. There’s plenty of that going on if you make sure to insert yourself at the positions where money is being made, or at least where there’s a product being developed with an anticipation of direct revenue generation.
I’ve seen very similar things at least twice in small companies (less than a hundred people in the tech department). In both cases, Scrum and Agile (which had nothing to do with the original manifesto but this is how it is nowadays) were religion, and you could see this kind of insane inefficiency all the time. But no one but a handful of employees cared about it and they all got into trouble.
From what I’ve seen, managers love this kind of structure because it gives them visibility, control and protection (“everyone does Scrum/Agile so it is the right way; if productivity is low, let’s blame employees and not management or the process”). Most employees (managers included) also have no incentive to be more productive: you do not get more money, and you get more work (and more expectations) every single time. So yes, the majority will vocally announce that a 1-hour task is really hard and will take a week. Because why would they do otherwise?
Last time I was in this situation, I managed to sidestep the problem by joining a new tiny separate team which operated independently and removed all the BS (Scrum, Agile, standups, reviews…) and in general concentrated on getting things done. It worked until a new CTO fired the lead and axed the team for political reasons but this is another story.
I’m guessing maybe it isn’t: a single abnormally productive team potentially makes many people look very bad, and whoever leads the team is therefore dangerous and threatens the position of other people in the company without even trying. I’d find it very plausible that the productivity of your team was the root cause of the political issues that eventually unfolded.
This was 80% of the problem indeed. When I said it was another story, I meant that this kind of political game was unrelated to my previous comments on Scrum/Agile. Politics is everywhere, whether the people involved are productive or not.
It’s not just a question of people not wanting to “look bad,” though.
As a professional manager, about 75% of my job is managing up and out to maintain and improve the legibility of my team’s work to the rest of the org. Not because I need to build a happy little empire, but because that’s how I gain evidence to use when arguing for the next round of appealing project assignments, career development, hiring, and promotions for my team.
That doesn’t mean I need to invent busywork for them, but it does mean that random, well-intentioned but poorly-aimed contributions aren’t going to net any real org-level recognition or benefit for the team, or that teammate individually. So the other 25% of my energy goes to making sure my team members understand where their work fits in that larger framework, how to gain recognition and put their time into engaging and business-critical projects, etc., etc.
…then there’s another 50% of my time that goes to writing: emails to peers and collaborators whose support we need, ticket and incident report updates, job listings, performance evaluations, notes to myself, etc. Add another 50% specifically for actually thinking ahead to where we might be in 9-18 months and laying the groundwork for staff development and/or hiring needed to have the capacity for it, as well as the design, product, and marketing buy-in so we aren’t blocked asking for go-to-market help.
Add up the above and you can totally see why middle managers are useless overhead who contribute nothing, and everyone would be better off working in a pure meritocracy without anyone “telling them what to do.”
omg, I’ve recently worked in a ‘unicorn’ where everyone was preoccupied with how their work would look from the outside and whether it would improve their ‘promo package’. Never before have I worked in a place so full of buzzword-driven projects that barely worked. But hey, you need one more cross-team project with dynamodb to get that staff eng promo! 🙃 < /rant>
Given your work history (from your profile), have you seen an increase in engineers being willfully ignorant about how their pet project does or does not fit into the big picture of their employer?
I ask this from having some reports who, while quite sharp, over half the time cannot be left alone to make progress without getting bogged-down in best-practices and axe-sharpening. Would be interested to hear how you’ve handled that, if you’ve encountered it.
I don’t think there’s any kind of silver bullet, and obviously not everyone is motivated by pay, title, or other forms of institutional recognition.
But over the medium-to-long term, I think the main thing is to show consistently and honestly how paying attention to those drivers gets you more of whatever it is you want from the larger org: autonomy, authority, compensation, exposure in the larger industry, etc.
Folks who are given all the right context, flexibility, and support to find a path that balances their personal goals and interests with the larger team and just persistently don’t are actually performing poorly, no matter their technical abilities.
Of course, not all organizations are actually true to the ethos of “do good by the team and good things will happen for you individually.” Sometimes it’s worth going to battle to improve it; other times you have to accept that a particular boss/biz unit/company is quite happy to keep making decisions based on instinct and soft influence. (What to do about the latter is one of the truly sticky + hard-to-solve problems for me in the entire field of engineering management, and IME the thing that will make me and my team flip the bozo bit hard on our upper management chain.)
Thanks for the reply, that’s quite helpful and matches a lot of what’s been banging around in my head.
Would you be able to elaborate on the last paragraph about making decisions based on instinct and soft influence? Why is it a problem and what do you mean by “soft influence” in particular? Quite interested to understand more.
Both points (instinct + soft influence) refer to the opposite of “data-driven” decision-making. I.e., “I know you and we think alike” so I’m inclined to support your efforts + conclusions. Or conversely, “that thing you’re saying doesn’t fit my mental model,” so even though there are processes and channels in place for us to talk about it and come to some sort of agreement, I can’t be bothered.
It’s also described as “System 1” thinking in the Kahneman model (fast vs. slow). Not inherently wrong, but also very prone to letting bias and comfort drown out actually-critical information when you’re wrestling with hard choices.
Being on the “supplicant” end and trying to use facts to argue against unquestioned biases is demoralizing and often pointless, which is the primary failure mode I was calling out.
This is true and relevant, but it’s also key to point out why instinct-driven decisions are preferred in so many contexts.
By comparison, data-driven decision-making is slower, much more expensive, and often (due to poor statistical rigor) no better.
Twice in my career I have worked with someone whose instincts consistently steered the team in the right direction, and given the option that’s what I’d always prefer. Both of these people were kind and understanding to supplicants like me, and - with persistence - could be persuaded to see new perspectives.
Excellent points! Claiming to be “data driven” while cherry-picking the models and signals you want is really another form of instinctive decision-making…but also, the time + energy needed to do any kind of science in the workplace can easily be more than you (individually or as a group) have to give.
If you have collaborators (particularly in leadership roles) with a) good instincts, b) the willingness to change their mind, and c) an attitude of kindness towards those who challenge their answers, then you have found someone worth working with over the long-term. I personally have followed teammates who showed those traits between companies more than once, and aspire to at least very occasionally be that person for someone else.
That’s helpful, thanks for clarifying.
I think this is part of the natural life-cycle of the software developer - the majority of developers I’ve known have had an extended period where this was true, usually around 7-10 years professional experience.
This is complicated by most of them going into management around the 12-year mark, meaning that only 2-3 years of their careers combine “experienced enough to get it done” with “able to regulate their focus to a narrow target”.
I think those timelines have been compressed these days. For better or worse, many people hold senior or higher engineering roles with significantly fewer than 7-10 years experience.
My experience suggests that what you’ve observed still happens - just with less experience behind the best-practices and axe-sharpening o_O
My team explicitly doesn’t use scrum and all the other teams are asking: “How then would you ever get anything done?”
Well… a lot better.
I don’t understand this at all. It’s titled “the fundamental difference between Terraform and Kubernetes” but then goes on to explain that they’re identical.
I think the fundamental difference they’re getting at is K8s continually evaluates reality vs desired and adjusts it as required. Terraform only does so when you run an apply. (Not sure that’s a fundamental difference, but that’s their terminology. 🤷🏻♂️)
I guess if you ran
while :; do terraform apply --auto-approve; done
you could equate it to being the same as K8s.
This is the most terrifying shell script fragment I’ve seen in recent memory. 😱
Yes, in theory your TF code should be audited and have guard rails around destroying critical resources etc. etc. but…IME it’s usually at least partly the human in the loop that prevents really horrible, data-destroying changes from happening.
Kubernetes, on the other hand, makes it (intentionally) difficult to create services that are “precious” and stateful, so you’re somewhat less likely to fire that particular footgun.
…and lest it seem I’m dumping on TF and talking up K8s: I think Kubernetes gives you all kinds of other ways to complicate and break your own infrastructure. It just usually takes more than a single-line change anywhere in your dependency graph to potentially schedule a stateful node for deletion (and likely re-creation just afterward, but that doesn’t protect you from local state being lost).
They’re also equally good at helping you create a bunch of “objects” for which your cloud provider can charge you a substantial monthly fee, which I think helps to explain their mutual popularity amongst the AWS/GCP/Azure/etc. set.
Just noticed the typo in it, stops someone copy/pasting it I guess.
I’m not convinced sticking TF in a while loop is a good choice, but that’s how my scaled out mental model of the two tools differs.
Beyond that, Terraform also understands dependencies between services while k8s continuously tries to reconcile (so if you create some resource that depends on another, Terraform will wait for the first to be created before creating the second, while Kubernetes will immediately try creating both and the second will just fail until the first is created).
They’re quite different. Here are some things you can do with Terraform and Kubernetes that aren’t symmetric, showing that they are distinct:
Oh, I’m not claiming that they are identical. I just think the OP is: it’s of the form “the fundamental difference between things A and B is that A does X, while B does X”.
I am satisfied by caius’s sibling comment that it’s trying to talk about the bandwidth difference: K8s’s feedback loop is very fast, while TF’s isn’t. But I don’t regard that difference as particularly fundamental, and I don’t think the ones you’ve identified strike me as fundamental either. So I’m asking: what is?
I don’t use either tool regularly, and I don’t have great mental models of them, but just in case it helps clarify, I think what I’m looking for might be ownership. K8s materializes the “resources” it manages inside itself, while TF prods external systems into changing state.
You could conceivably write a Terraform Custom Resource Definition for Kubernetes and make a controller that continuously applies it. 🙃
This was a good dive into container and namespace primitives. It is a bit strange to me that an article about pods, containers, and Docker would omit even a passing mention of podman, which in addition to the obvious naming overlap actually offers “pods” independent of k8s.
So yes, this is a good walkthrough of how to emulate pods with Docker, but it misses the chance to at least close with something like, “and if this approach is interesting to you, check out Podman, which does it well and also gives you a bunch of other nice things that neither bare OCI containers nor Docker does, like systemd integration and full control of the underlying host VM image (on non-Linux systems).”
Rust’s serde is definitely a go-to for me. Part of why writing simple network services in Rust is so nice is that serde + any of the high-level web server crates make it the work of literally minutes to define a simple data structure and accept + emit it in a safe, validated way. I can more or less ignore the wire format until I start hitting really edgy edge cases.
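For example, a minimal sketch of that define-a-struct-and-go workflow (the type and field names are hypothetical, and JSON via serde_json stands in for whatever the web framework would hand you):

use serde::{Deserialize, Serialize};

// Hypothetical request payload; the derive macros are the only place the
// wire format shows up, the rest of the program just sees a typed struct.
#[derive(Debug, Serialize, Deserialize)]
struct SignupRequest {
    email: String,
    display_name: Option<String>, // missing in the input -> None
}

fn main() -> Result<(), serde_json::Error> {
    let req: SignupRequest = serde_json::from_str(r#"{"email":"me@example.com"}"#)?;
    println!("{}", serde_json::to_string_pretty(&req)?);
    Ok(())
}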
But the same could be said of haskell’s aeson library (surely the main inspiration for serde), so it must be something else that is the “killer” argument to choose rust instead.
aeson is only for json; serde is format agnostic.
This is very cool! I made a very, very dumb “Datalog query console” inside my first Clojure+DataScript project; the ability to manipulate queries as structured data out of the box is a good demonstration of why Lisp-flavored Datalog tools are so compelling.
For those interested in the space, here’s another “datalog in notebooks” I’ve played with a bit in the past: https://github.com/ekzhang/percival
It doesn’t have the nice first-class syntax of Datalog-in-Clojure, but it does use WASM and Vega in a pretty interesting way to build a bunch of visualization capabilities on top of the core query engine.
But what’s in it for me? I actually didn’t even know I could do that and now I ask - why would I want to get my packages from there and not via apt? What are the benefits except one more tool and repo?
pkgsrc has quarterly releases.
Not every application, hardware appliance, or even general-purpose server is ideally suited to having a Linux kernel and userland underneath it.
Docker masquerades as a cross-platform packaging and distribution tool, but what it really provides is a cross-distro abstraction for a subset of modern-ish Linux environments. Windows and Mac hosts only get support by way of a VM managed by Docker or containerd.
Even more frustrating to me is the fact that Docker actually strengthens the dominance of 2-3 distros (Alpine, Ubuntu, and CoreOS) because they’re the standard base for most other images.
So: pkgsrc and other “ports” systems are important to keep software running in places other than that very limited set of environments. (I likewise think that Firefox and other non-Chromium browsers are critically important so that “portable applications” don’t just mean, “runs in Electron or a browser tab.”)
Well, one of the things is that it can be bootstrapped onto the majority of OSes.
https://pkgsrc.smartos.org/
The problem is that this is not a use case a great many people have. If you use a Linux distro, you already have a package manager. If you use a BSD, it has one too. macOS has brew (and macports, even nix). Everyone already has all the software they need, so it is very hard to make a case for pkgsrc.
A couple of years ago I got interested and played around with pkgsrc on Debian to learn a thing or two. It’s cool that it exists, but it isn’t really new or useful enough for people to get excited about it.
I believe the reason why there is not much mindshare is that it is perceived as yet another package manager with no immediately obvious value.
pkgsrc is the native package manager on one of the BSDs (used to be 2) and SmartOS. On macOS, it is up there with brew and macports. On Linux, it is useful for distros that do not have their own package managers (Oasis comes to mind). Could be useful even on mainstream distros like Debian when you are stuck on the stable release and you need newer versions. That said, not all packages are up-to-date in pkgsrc but I’m glad it exists and people continue to work on it.
How many users does SmartOS have? It is the niche of the niche of the niche. How many people really care about NetBSD? Very few, it seems.
pkgsrc is not up there with brew and macports. Everybody and their mom uses brew, a few people - including me - use macports, and very few people use nix. I have never met anyone using pkgsrc on a Mac, and I have been working at “every dev has a Mac” companies for the last decade.
I am not saying pkgsrc is bad, I am saying almost nobody cares b/c there is not much to gain for someone not running NetBSD (or SmartOS) where it is the native package manager.
The SmartOS people also provide binaries for macOS via pkgsrc: https://pkgsrc.smartos.org/install-on-macos/
I understand all that. You are not reading what I am saying.
After reading their home page description, I have no idea what this is actually for:
It’s Mozilla’s effort to write a browser rendering engine in Rust. Started out as a research project, produced some decent components that made it into Firefox, and then got shelved a year or two ago when Mozilla came under new management and decided to cut costs.
Doesn’t it also bring certain threading improvements which you normally don’t have in rendering engines? (Apart from the CSS layout system already being threaded itself.)
Yes it does. Servo is far more parallelized than traditional rendering engines (both Gecko and the KHTML-descended ones, i.e. WebKit and Blink) because of both Rust’s safe parallelism fu and the absence of decades of legacy code architecture. This is why Stylo and WebRender (Servo components that were ported into Gecko), for example, are so much faster than their predecessors in Gecko.
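As a tiny, non-Servo illustration of what that “safe parallelism” buys you: each thread gets exclusive, compiler-checked access to its own disjoint slice, and the data races that make this pattern risky elsewhere simply don’t compile (the buffer and chunk sizes below are arbitrary):

```rust
// Not Servo code; just a sketch of fanning work out over disjoint data
// with scoped threads (stable in std since Rust 1.63).
use std::thread;

fn main() {
    let mut pixels = vec![0u8; 1024 * 1024];

    thread::scope(|s| {
        // Each thread "shades" its own quarter of the buffer.
        for (i, chunk) in pixels.chunks_mut(256 * 1024).enumerate() {
            s.spawn(move || {
                for px in chunk.iter_mut() {
                    *px = (i as u8).wrapping_mul(37);
                }
            });
        }
    }); // all threads are joined here, before `pixels` is touched again

    println!(
        "first byte of each quarter: {:?}",
        (0..4).map(|i| pixels[i * 256 * 1024]).collect::<Vec<_>>()
    );
}
```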
This is an engine to render web content. Firefox has gecko, and all the chrome clones use blink. It converts html + css into pixels for your screens.
I know Tauri has theorized using this as the embeddable web engine (instead of WebKit) with the goal of improving cross-OS consistency. Not sure if this funding news is in any way related to that, though.
I dunno about this. One of the reasons Tauri outputs can be so small is that it leverages the “web view” on the existing platform (as opposed to shipping the entire Chromium engine like Electron). It could help with the consistency, but I would assume this would fall under a flag, where some will prefer the lighter footprint over that consistency.
Agreed — I don’t anticipate it becoming a default. I don’t know what the binary size of Servo is, but hopefully it can be a compelling option against Electron/Chromium since it isn’t a fully fledged browser.
Anything to compete against Electron/Chromium/Blink/V8 would be awesome. I wonder if Ladybird is doing well.
At some point, the size of functionally-equivalent C++ and Rust binaries should more or less converge, at least assuming similar compiler backends + codegen (i.e., LLVM/Clang).
Choice of coding style, size of external dependencies, asset storage, etc. are all independent of implementation language, but have a sizable impact on binary size.
I’m personally more interested in Servo for a) choice and competition against the “New IE”, and b) an engine and community more open to experimentation and extension than Chromium/V8.
For sure — though is Servo closer to being functionally-equivalent to Chromium, or Blink? I guess my hope is that Tauri + Servo is smaller than Electron + Chromium. From a quick skim it appears Chromium is substantially bigger than Blink itself, so Tauri only requiring the web engine aspect might be a size gain alone.
I’m also on board with the added competition, as long as it’s maintained going forward. If a new engine gains usage but then ceases development it would serve to slow adoption of new standards.
This is a great demo of Cloudflare’s tech stack, but the lock-in factor is strong. You shouldn’t run this unless you’re planning on being a Cloudflare customer forever, because it will never run anywhere else.
That’s definitely true today, but I also think this could be an interesting starting point for a Deno-based ActivityPub server:
D1 -> vanilla SQLite (plus Litestream or similar)
Workers K/V -> Redis
Page Functions -> Actual, plain JS functions hosted by Deno (local workers)
Images -> pict-rs (or similar)
Other implementations tend to be huge (Mastodon, Misskey) or tailored to a single kind of activity (Pixelfed, Lemmy), which makes them awkward to use as a base for further experimentation. I for one would be very excited to bang on a lightweight-but-somewhat-featureful alternative like this, once suitably modified to run on an open platform.
Generally speaking, I’m very resistant to choosing APIs that are locked to one vendor. Services that are generally available but perhaps simpler to deploy on one vendor’s cloud (see: AWS RDS/Aurora) seem like a pragmatic option for those who already have a relationship with that provider. Workers at least seem headed for some level of standardization, for much the same reason that K8s took off so quickly: everyone wants to get ahead of Amazon owning this space like they own object storage and general compute.
If you want small, there’s GotoSocial (single Go binary, SQLite) or Honk (single Go binary, SQLite, no API though). Even more minimally, there’s Epicyon, which doesn’t even have a DB; it uses flat files. There’s also another Python one which is strictly single-user, but I can’t remember the name of it right now.
There is one—honk(3), albeit a bit unusual.
Fair - I should have been explicit with the “no mastodon-compatible API” since that’s what apps and frontends expect.
It should be noted that GoToSocial is still very much alpha software, and while it does implement most of the mastodon api it isn’t 100% compatible yet.
This is not to be a downer on it, I really like GTS, I’ve contributed to it (and want to contribute more), and follow along their dev chat. It’s still very usable, just don’t expect it to be able to do everything yet.
The only real issues I’ve had with GtS are 1) OOB OAuth doesn’t give you the code on a page, you have to copy it from the URL (ultra minor and most people will never encounter this); 2) friend requests from Honk don’t work because of the “partial actor key” weirdness GtS has BUT this is exacerbated by a weirdness in Honk where it stores the assumed owner of a key instead of the stated owner. Need to dig into this some more before presenting it as a patch / issue tho; and 3) the weird version number breaks a bunch of libraries which use the version number for feature support (looking at you, Mastodon.py!)
Oh and Brutaldon doesn’t work but I can’t get that to work with Akkoma either and I’d be inclined to blame Brutaldon for that rather than GtS.
At least as of version 0.6.0 OOB auth displays it on a page for you, because I was playing around with it and got a nice page that looks like the rest of the UI with the token displayed for me.
The version number thing is known, I think: there’s some client that can’t do any searching because it sees a low version number and assumes it has to use the old Mastodon search endpoint. That said, I think I agree with their stance that they don’t want to turn the version field into a user-agent-string situation. My own thought is that everybody implementing the Mastodon API should probably just add a new “api_version” field or something which reports the API compatibility, because asking clients to implement logic to determine capabilities based on different server backends feels backwards when your goal is explicitly compatibility with the API.
I’m on v0.6.0 git-36aa685 (Jan 9) and I get a 404 page with the code in the URL for OOB. Just updated to the latest git (v0.6.0 git-132c738) and it’s still a 404 page with the code in the URL. But like I said, this is an ultra-minor issue most people will never experience.
Yep, totally agree, but it’s going to be a good long while before we get an api_version field (have you seen how long it takes the Mastodon devs to agree on anything?) and we have to deal with these clients now (e.g. I have to hack my Home Assistant’s local copy of Mastodon.py to get notifications out to my GtS instance.)
Maybe I’m missing something, but API versioning feels like something you’d want from day 0?
Well, there’s /api/v1/XYZ and /api/v2/XYZ, but that doesn’t really encompass what these libraries are trying to do by checking the version number, because they’re trying to avoid calling endpoints that didn’t exist in older Mastodon versions.
E.g., Mastodon.py wraps all of its endpoint methods in a version check keyed to constants like _DICT_VERSION_CARD. _DICT_VERSION_CARD is 3.2.0, which means that you’ll get an exception if you try to call whatever.status_card(blah) on a server that advertises a version lower than 3.2.0. GotoSocial is currently at v0.6.0 (which breaks the parsing anyway and stops anything from working, because v6 isn’t an integer), which would mean that you couldn’t call any endpoints, supported or not.
What they should have done is have a capabilities endpoint which lists what your server can do, and let the libraries use that as their guide rather than guesstimating from version numbers. Much like SMTP or IMAP banners. Or use the client-server protocol, which is server-agnostic, but they’ve been waffling about that since 2019 with no progress…
https://github.com/mastodon/mastodon/issues/10520
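A rough sketch of what that capabilities idea could look like on the wire and on the client side; the endpoint path, field names, and capability names here are all invented for illustration (nothing like this exists in the Mastodon API today), using serde + serde_json:

```rust
// Hypothetical sketch of a capabilities document and a client-side check,
// gating on named features instead of parsing version strings.
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct Capabilities {
    software: String,          // e.g. "gotosocial", "mastodon", "akkoma"
    version: String,           // still useful for humans, not for gating
    capabilities: Vec<String>, // what the server actually supports
}

fn supports(caps: &Capabilities, feature: &str) -> bool {
    caps.capabilities.iter().any(|c| c.as_str() == feature)
}

fn main() {
    // What a hypothetical GET /api/capabilities response might contain.
    let body = r#"{
        "software": "gotosocial",
        "version": "0.6.0 git-f9e5ec9",
        "capabilities": ["statuses", "notifications", "search", "cards"]
    }"#;

    let caps: Capabilities = serde_json::from_str(body).unwrap();

    // A client library gates on the named capability, not the version number.
    if supports(&caps, "cards") {
        println!("safe to call the card endpoint");
    }
}
```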
I don’t think they’ll ever really focus on the C2S AP protocol, it doesn’t have the same features and hacking them on sounds like a ton of work. I do agree that some sort of capabilities endpoint would be really nice.
Also yeah, agreed that trying to get mastodon itself to add api support for it feels like a huge battle, but I feel like if enough of the alternative servers like akkoma and gts agreed on an implementation, and then implemented something like it, it could force mastodon to conform? But that also has its own problems.
At the end of the day I understand why everyone conforms to the mastodon api, but it would be nice if there was some group outside of mastodon and Eugen defining the api spec, with input from the community and developers outside of mastodon, but that’s pie in the sky wistful thinking, and requires a lot of organization from someone, and buy-in from all the other server devs.
Huh, not sure what makes my GTS instance different, but I’m on the release commit, GoToSocial 0.6.0 git-f9e5ec9, and going through the OAuth OOB flow gave me this after doing my user login (https://imgur.com/sfm19cx). The URL I used to initiate the flow was https://<my domain>/oauth/authorize?client_id=<app’s client id>&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&scope=read
I shall have to do a clean install and see if there’s some step I’ve missed during the various updates (which it definitely sounds like I have!)
What’s the tl;dr on this thing? It had nice drawings, and it looked like the person put in effort, so I wanted to support content like this by reading it and discussing it, but my eyes just glazed over (I probably have adult ADHD). It began to sound like a rant against cloud only software (which I also agree with: I like to work on the train during my commute) but then it got so long and windy. What was the solution?
From my understanding (after reading a few other posts on the author’s site), it’s about two things:
The author lives on a sailboat, with electricity provided by small solar panels, where every resource (whether that be electricity, water, food…) needs to be used conservatively. When they first started on their journey in that boat, they were using modern stuff - MacBooks and ThinkPads, running contemporary OSes, but quickly discovered that with the ever-growing reliance of modern tech on a high-speed internet connection (and the large battery drain involved with using any of that tech), they needed something better.
The article talks about a few of the things they drew inspiration from, and then at the end talks about the thing they created - uxn/Varvara - a relatively simple sorta-Forth-ish bytecode VM, which has been ported to many different OSes, and running on many different “embedded” platforms.
Some context: this is from a uxn developer. uxn is a small assembly language for making lightweight applications.
I’d say this presentation touches on two main points:
The developers have been living on an off-grid sailboat, so they don’t have access to traditional software development tools. Most people wouldn’t consider the barriers someone living in the Solomon Islands would face trying to write an Android or iPhone app. Consider even the amount of data needed to write a Node.js app or to use fancy development environments on NixOS. Latency makes remote development infeasible.
And the rest of the presentation is about software preservation. If I build an Android app, will you be able to run it in 10 years? How can we make something more portable? Virtual machines can be pretty useful, but they have to present the right level of abstraction.
Thanks @iris and @puffnfresh
The subscription model of software is kind of disturbing, but I can’t tell if that’s because I’ve become an old fuddy duddy. I do very few classes of things on a computer nowadays.
I worry about the longevity of my data formats, not the software I used to create them.
I assume that the hardware and software platforms will move. It is inexorable. I detest subscription models - I want to own something: software, hardware whatever. I don’t care that it gets obsolete, I just care that it works like it used to. I don’t want it to suddenly stop working because of money or connectivity.
However when I buy new hardware, I accept that it may not run old software. Hence my concern with the durability of data formats.
I do get super annoyed when Disney tries to force me to discard my nice 8 year old iPad just to use its updated App. Fortunately enough people feel like me, and the App store now allows me to use older versions of the App.
In my experience, very likely yes. I have dug up a ten year old Android apk file before and found it still working perfectly on modern Android. It pops up a warning now to the effect that “this is really old and might not still work” but it does just fine.
Android has a very different culture to both Apple and the main body of Google, and actively cares about backwards compatibility.
My thoughts:
Starlink has issues but mostly works really well. If we have systems like Starlink, but more accessible (price and regions) then do we actually need to worry about a 10GB download of Xcode? Today, a rich software developer could put Starlink on their sailboat and not think about data, right?
Lithium batteries can be expensive today, but with electric cars and home batteries, etc., the recycled battery market will make it very cheap to have a lot of storage on a sailboat. I know batteries are heavy and space is at a premium so solar is limited, but do you still have to think about energy if you have enough storage?
Starlink isn’t going to cover the whole world, and to your point mostly benefits those in the developed world who can afford $125/mo. or more for access.
A large percentage of the world’s population simply cannot pay that, and even if they could, Starlink isn’t going to build ground stations anywhere near them.
This “Elon will save us!” culture has to end. He might well create little bubbles of gilded comfort for the rich and powerful (satellite Internet, pay-to-play social media, and fast, clean, $100k cars) but his interest ends where the profit margins approach zero.
Yes, I agree that price is a huge issue but I think Starlink is opening up to regions without ground stations - I guess the latency might not be the 50ms standard, but I imagine it will still be a good thing for places like the Solomon Islands.
I would love for Starlink to be worker owned or see an alternative. I just care about the digital divide and Starlink has done things for places like rural Australia that our Government is not able to. I know the example is 10GB Xcode for iPhone development, but what about watching a YouTube video or participating in a Zoom call for school?
And on price, a 4Mbps fibre connection in the Solomon Islands is $170USD per month at the moment. Yes, Starlink’s price is ridiculous for these regions but you have to understand that it’s sadly an improvement.
At some point, you’re going to have to stop. The problem (and underlying motivation behind projects like uxn) is that we just don’t know where to stop. The abstraction towers are getting higher each year, and that 10GB isn’t going to be 10GB in a few years with newer architectures, patterns for programming, and support for legacy devices being shipped under the moniker of a development environment.
Whatever standards you target, they won’t be sufficient once they’re widely used and ultimately pushed beyond their limit. Growth is the problem, not the constants of the current year.
In some ways we’re slowly improving, e.g. home energy efficiency - but yeah, good point, constant growth does exist in software!
I “read” like maybe 20%. Maybe I missed the point. It seems like the author is drawn to what I’d call “programming in the dirt”, where, if we got stranded on some desert island, the programs they could bring with them (on paper, in carry-on luggage) would be Gilligan’s PL, if you will. I take it the author has been developing their own language, most similar to 6502 assembly.
Other PLs that come to mind would be: Don Knuth’s MMIX, BASIC, maybe even Standard ML or Featherweight Java.