Probably installing a blower on my forge. If the motor ain’t blown, I might use an electric control for managing the speed of the motor instead of controlling air intake by means of blocking out the pipe with a sheet of metal, adjusted for desired flow. Probably cutting a sheet of metal into a square shape, putting holes in it, in order to finish putting the forge to usable shape. Maybe installing the chimney on the forge so that the smoke emitted by the burning coal doesn’t end up in my lungs. That maybe is more of a “maybe this week, maybe next week” kinda deal, I really don’t wanna breathe that shit. Maybe spending a bit of time using my grinder with a metal brush attachment to remove the rust coat from the forge, although I might keep that in, unless it’s actively detrimental.
One more week of vacation, where I will definitely get books on Iceland in order to plan the honeymoon trip this summer. I might, maybe, work on getting a forge up and running, or buy an old one from someone within a reasonable distance from here. If I am very very very lucky, I might even hit heated steel with a hammer.
If you can, I loved the freedom of renting a small camper van when visiting Iceland. We had a rough plan of the sites we wanted to see and with the van you can spend as much or as little time at each place.
We rented a gorgeous little house by the sea; we'll be there for 8 days. We want to do a few museums, go for a thermal bath, do the thing where you go down the volcano. There's also this fishing trip thing we saw that I think would be cool to do, but as I list those things it already sounds kinda tight 😂 I mean, gotta keep some time for hiking within Reykjavík, and hiking outside of Reykjavík too. Gonna go by too fast.
Networking is the place where I notice how tall modern stacks are getting the most.
Debugging networking issues inside of Kubernetes feels like searching for a needle in a haystack. There are so, so many layers of proxies, sidecars, ingresses, hostnames, internal DNS resolvers, TLS re/encryption points, and protocols that tracking down issues can feel almost impossible.
Even figuring out issues with local WiFi can be incredibly difficult. There are so many failure modes and many of them are opaque or very difficult to diagnose. The author here resorted to Wireshark to figure out that 50% of their packets were retransmissions.
I wonder how many of these things are just inherent complexity that comes with different computers talking to each other and how many are just side effects of the way that networking/the internet developed over time.
Considering most “container” orchestrators (at least the ones I’ve used) operate on virtual overlay networks and require a whole bunch of fuckery to get them to talk to each other, on top of whatever your observability-platform-of-the-week is, the complexity is both necessary and not. Container orchestration is a really bad way of handling dynamic service scaling IMO. For every small piece of the stack you need yet-another-container™️ which is both super inefficient (entire OS sans-kernel) and overcomplicated.
I’m not wed to containers, but they often seem like the least bad thing (depending on the workload and requirements). The obvious alternative is managing your own hosts, but that has its own pretty significant tradeoffs.
Containers themselves are fine for a lot of cases. The networking layer (and also the storage/IO layer) are a large source of complexity that, IMO, is not great. It’s really unfortunate we’re to the point where we’re cramming everything on top of Linux userspace.
There’s a bunch of different options that have varying degrees of pain to their respective usage, and different systemic properties between each of them.
For me the killer feature is really supervision and restarting containers that appear to be dead, with the supervision done by a distributed system that can migrate the containers.
As a warning I’d like to point out that using awk and jq splitting on just “,” will subtly break if you have strings in your CSV files that contain that character as part of the string. Good stuff though!
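To make the failure mode concrete, here's a quick Python sketch (stdlib `csv` only) of where the naive comma split goes wrong:

```python
import csv
import io

# A row where one field contains a comma inside quotes.
line = 'id,name,city\n1,"Doe, Jane",Berlin\n'

# Naive splitting on "," breaks the quoted field apart:
naive = line.splitlines()[1].split(",")
# naive == ['1', '"Doe', ' Jane"', 'Berlin']  -> 4 fields instead of 3

# A real CSV parser handles the quoting:
rows = list(csv.reader(io.StringIO(line)))
# rows[1] == ['1', 'Doe, Jane', 'Berlin']
```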
That spreadsheet programs decided to call what they do “CSV” has caused no end of confusion since the expansion of the acronym literally contradicts the format { unless you expand it “Commas (don’t actually) Separate (all possible) Values” ;) }. I would not be shocked to hear that internally they wished they could’ve used “EQCSV” but 1970s era DOS 8.3 filename limits motivated “CSV” instead.
The best approach here is to have a “translator program” convert escape-quoted-CSV into values “actually separated” by delimiter bytes: https://github.com/c-blake/nio/blob/main/utils/c2tsv.nim or its C port are one example. As part of a pipeline you likely get parallel speed up. Arbitrary data fields can be done by https://en.wikipedia.org/wiki/Tab-separated_values#Conventions_for_lossless_conversion_to_TSV Anyway, then you can use ideas from the article without as much worry.
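This isn't c2tsv itself, just a minimal Python sketch of the same translation idea (the function names are mine), using the escaping conventions from that Wikipedia page:

```python
import csv
import io

def field_escape(s: str) -> str:
    # Lossless TSV convention: escape backslash first, then tab and newline.
    return s.replace("\\", "\\\\").replace("\t", "\\t").replace("\n", "\\n")

def csv_to_tsv(csv_text: str) -> str:
    # Parse escape-quoted CSV, emit records whose delimiter bytes
    # genuinely never appear inside field data.
    out = []
    for row in csv.reader(io.StringIO(csv_text)):
        out.append("\t".join(field_escape(f) for f in row))
    return "\n".join(out) + "\n"
```

After this pass, downstream tools can split on tab and newline with no quoting logic at all.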
Once confident ASCII newline only terminates records, if you have big files & enough space (temporarily anyway) for a copy you can also then segment by “nearest ASCII newline to 1/N” and parallelize processing on it.
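A rough sketch of that segmentation in Python (`chunk_offsets` is my name for it; details are illustrative): seek to roughly i/N of the file, then advance to the next newline so every chunk boundary falls on a record boundary.

```python
import os
import tempfile

def chunk_offsets(path, n):
    """Split a file into up to n byte ranges, each ending on a newline."""
    size = os.path.getsize(path)
    cuts = [0]
    with open(path, "rb") as f:
        for i in range(1, n):
            f.seek(i * size // n)   # jump to roughly i/n of the file
            f.readline()            # advance to the next newline
            cuts.append(min(f.tell(), size))
    cuts.append(size)
    cuts = sorted(set(cuts))        # dedupe in case chunks collapsed
    return list(zip(cuts[:-1], cuts[1:]))

# Demo on a throwaway file; each range can go to its own worker process.
with tempfile.NamedTemporaryFile("wb", delete=False) as tf:
    tf.write(b"".join(b"record %d\n" % i for i in range(1000)))
    path = tf.name
ranges = chunk_offsets(path, 4)
```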
I have taken to using ASCII characters 28-31 to build any “CSV” files I create. It eliminates any delimiter collision and, being ASCII, is nicely handled by most tools.
It is historically odd that people don’t use those characters more. It’s not like they’re in the 128-255 range and can’t be used or something. They’re right there, universally available, and designed specifically for this purpose. The only problem with them is they can’t encode nested records/arbitrary data, but no one wants that anyway.
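Using them takes no machinery at all; a small Python sketch:

```python
US = "\x1f"  # unit separator (ASCII 31): between fields
RS = "\x1e"  # record separator (ASCII 30): between records

rows = [["id", "note"], ["1", 'contains, commas and "quotes"']]

# No quoting or escaping needed, as long as the data itself never
# contains bytes 28-31 (which is almost always true for text).
encoded = RS.join(US.join(fields) for fields in rows)
decoded = [record.split(US) for record in encoded.split(RS)]
```

The round trip is exact, which is precisely what comma-delimited CSV can't promise without a quoting layer.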
Another downside is if you’re producing a CSV for someone else, you’ll have to explain to them such delimiters exist and how to make use of them if they’re opening the file in Excel.
I guess the answer to the immediate historical question is that Excel supports CSV, so CSV is popular. But the deeper question remains: Why did Excel use CSV instead of the actual delimiters that were designed for exactly this purpose? Then again Excel was clearly, with all due respect, designed by amateurs, who for example don’t know how leap years work, so it wouldn’t be out of character for them to just not be aware of the ASCII separator fields either.
Why did Excel use CSV instead of the actual delimiters that were designed for exactly this purpose?
Because those delimiters aren’t on keyboards. Spreadsheet programs before Excel used CSV too.
Then again Excel was clearly, with all due respect, designed by amateurs, who for example don’t know how leap years work, so it wouldn’t be out of character for them to just not be aware of the ASCII separator fields either.
Excel’s leap year bug is intentional because it’s supposed to be 100% backwards compatible with Lotus, and that includes having the exact same logic bugs.
Because those delimiters aren’t on keyboards. Spreadsheet programs before Excel used CSV too.
This is precisely the problem. If I output a CSV using these symbols in one of my projects, then I also have to provide handy copy/paste symbols in the README and hope someone comes back to find that instead of giving up as soon as it looks funky when they open it. These are great for my personal use, but I don’t consider them appropriate for general consumption if I want my code to be used by many.
I wish that the fediverse had a PGP-like trust system, where I can specify the trust of a given party to my fediverse instance, and then validate the trust of randoms relative to the explicit trust of parties I do trust. Preferably have the possibility to apply some ranking based on hop distance between me and the rando, and trust level (maybe decrementing as you further remove yourself from the rando). This would/could have a nice effect of reinforcing networks where you’re likely to have “actual contact” with the other parties, which is really what I want in a social network: stuff from people I know, or that the people I know can vouch for.
You can have some prioritization of content based on “stuff I like”, a “recommendation engine” sort of thing, if you will, however you decide to implement that, but to me it would be more practical and desirable to have the social prioritization first.
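A toy sketch of the hop-decay scoring I have in mind (the names, graph, and decay factor are all made up): start from my explicit trust assignments and let each extra hop shrink the inherited trust.

```python
# My direct trust assignments (0..1), and who each account vouches for.
my_trust = {"alice": 1.0, "bob": 0.6}
vouches = {
    "alice": ["carol"],
    "bob": ["carol", "dave"],
    "carol": ["dave"],
    "dave": [],
}

DECAY = 0.5  # each extra hop halves the inherited trust

def trust_score(target, max_hops=3):
    """Best trust reachable over any vouching path of at most max_hops."""
    best = dict(my_trust)
    frontier = dict(my_trust)
    for _ in range(max_hops):
        nxt = {}
        for node, score in frontier.items():
            for peer in vouches.get(node, []):
                s = score * DECAY
                if s > best.get(peer, 0.0):  # keep the strongest path
                    best[peer] = s
                    nxt[peer] = s
        frontier = nxt
    return best.get(target, 0.0)
```

Here a rando two hops out via a weakly-trusted friend scores lower than a stranger vouched for directly by someone I fully trust, which is the "people I know can vouch for" property.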
I think Urbit’s ID model goes a long way toward laying the groundwork for this type of thing.
I think the federated approach will never work for reasons others describe here (even email and the web broadly are failures that primarily lead to centralized systems).
To really solve this requires fixing problems earlier in the stack: https://zalberico.com/essay/2022/09/28/tlon-urbit-computing-freedom.html
Unfortunately, Urbit has Moldbug’s neofeudalism at its core, baked into the design of the protocol and language. And artificial choices like 2**32 systems (or whatever its nomenclature is), the choice of a language that obfuscates ideas, and the leadership - all of these show me the original designer’s “ideals” are inherent in that design.
I’ll pass on that.
It’s worth a deeper look imo.
I don’t align with the politics of the founder, but the reasons for the system design are independent of that (and I think correct).
Smart people tend to prematurely undervalue things when they dismiss them for unrelated reasons - I think that’s largely the case here.
I was careful in how I worded my response.
If it was just because the founder was present, it’s one thing. He’s no longer there. However, the ideals of neo-feudalism apply at all levels of Urbit, specifically around “land ownership”, “disowning users on your land”, and the like.
The system forces a hierarchy where one shouldn’t necessarily exist, and imposes it on everyone, the way feudalism did in history. That inherent design choice is what I wholly reject.
And the language inherent to Urbit also serves to cover up and distract from these core choices. Along with distracting, it does a good job of making sure that ideas in that system are effectively land-locked in its way of seeing things, without a good translation out.
As a corollary, Lobsters also has a feudalist-like invitation system. However, one above you cannot “disown” you or otherwise control you (unlike urbit), destroying your account. And I’d think that @jcs and other sysops here would also frown pretty greatly if I started selling invites here.
(Edit: as an aside, Mastodon and the fediverse is different. Sure, we’re running on someone else’s server. And they can boot us. However, I can move elsewhere, no longer under the influence of admins I don’t like. Or I can make my own. There’s no way to make your own “urbit” - it doesn’t federate, and it’s owned by someone who can deplatform you for no reason.)
The land metaphor doesn’t matter - it’s the IDs that enable moderation to actually be possible at the user level, and it’s the mild scarcity of these IDs (4 billion initially) and their cheap but non-zero cost that prevent the spam problems that cause things to recentralize.
Federated systems are worse about this - a handful of servers end up being actually feudalistic and capriciously enforce rules (see: https://twitter.com/LefterisJP/status/1593934653114785793?s=20&t=Pp1ZI6q-UstZEOwCksReyA). It will always be a handful of servers because these systems don’t solve the root problems that cause recentralization (spam, linux sysadmin complexity, true p2p). You end up in a worst of both worlds situation: a crappier experience than good centralized systems, but with even worse security. It doesn’t solve any of the problems it sets out to at scale due to incentives that lead to recentralization.
On Urbit there’s no distinction between user and ‘server’ so this doesn’t happen. The hierarchy only serves to route traffic updates to prevent version mismatch problems that plague federation (they’re more like ISP routers) as well as the ability to do public key lookups for setting up p2p connections between users. You could also just run urbits outside the hierarchy entirely if you wanted to for some reason and there’s a large number of traffic routing nodes, so there will be a lot of options along with the ability for users to push back (akin to web users pushing back on ISP routing).
The language/OS design is about solving the complexity problems that lead to recentralization (which are hard to solve). That’s why separating the kernel from the OS it’s running on is important (and having it be a functional event log is important); everything stems from that core idea.
I’d like to turn that on its head: email is a resounding proof that federation works. Same goes for the web. True, email and the web at large have largely coalesced into a handful of ginormous players. That being said, you still can send email to those even if you’re outside that oligopoly under very specific conditions. Within that oligopoly, it mostly works. I think for email and the web the problem is more the ease of access (or lack thereof) for the layperson. It has not been a commercial focus to make it easy for Everyone To Host Their Own Crap because I don’t think there is a whole lot of money to be made in it (relative to the costs of supporting Everybody).
It “technically” works, but it failed to achieve its goals (of the 90s cypherpunks anyway). My argument is that fixing the underlying system design could fix the incentives that lead everything to centralization, but it won’t happen via federation and it (likely) won’t happen with the existing tools.
email is a resounding proof that federation works.
I can’t even apply for an hCaptcha accessibility cookie using Yandex because I need to “use a real email address”. Handling the unstoppable deluge of spam email addresses (both servers and compromised accounts) is an entire industry. Gmail drops inbound and outbound mail effectively randomly. Email is an abject failure, which is why in developing countries most communication is done over centralised social media, be it WhatsApp, Facebook or their local thing. We only use email because it was good enough as the only option, and reliance on it ballooned.
I had briefly fallen down a similar rabbit hole a while back with one of those Planck ortholinear keyboards, but I returned to a regular keyboard layout for the following reasons:
But if it works for the OP, I’m all for it. Ergonomics are super important in this line of work.
I have the Planck EZ and more or less the same problems, but about 4 months in:
- I have made custom layouts for games
- I sometimes work away from my home office, and at those times I do carry the Planck EZ along with all the wiring
- In extremis I can always fall back to the computer’s built-in keyboard and it’s not all that jarring (for me, anyway).
Also, 4 months in, I have observed that I am using the keyboard wrong and/or the columnar layout is not helping me much; my fingers travel a lot anyway.
Yeah, it’s not insurmountable, but I think I underplayed how much I play video games, and it’s not feasible to program a new layer every time you find a new game with slightly different keybindings, I find.
As much as I like the whole QMK open firmware project (and its related projects), it’s not exactly a rapid process to change things around.
True. I don’t play a whole lot on the computer, I’m more of a console person (and will often prefer a controller even on computer). When I do play on the computer, games tend to have similar bindings by “genre”, more or less. If I wanted to play more using my Planck, I probably would have layers by genres.
i have been alternating between a staggered-qwerty (laptop keyboard) and ortho-colemak (the ferricy), and i am comfortable with both now! i have been able to consistently hit ~90 WPM on both layouts, it takes me a few minutes of typing to “switch” my brain over from one to the other.
Nice overview. I’ve been rocking 42 keys for nearly a decade now and I’d never go back, but I really only have one layer I use regularly.
One thing I’m curious about that the article didn’t mention: how long did it take you to get proficiency in this layout? (For me it took about 3 weeks to get fast on the Ergodox, and once I had that proficiency, bringing it down to 42 keys on the Atreus only took 2 weeks, but from what I hear about other people switching to the Atreus, 3-4 weeks is common.)
glad you liked it. big fan of the unibody-split design of the atreus.
the descent to 34 was gradual. i started out with a Lotus58, plucked out a few keys until i got to 48, 36 and finally 34. All in all, it took me around 3 months to go from a 60% to a 35%. That being said, I am not as fast as I was on staggered-qwerty yet. I am currently hovering at about 90 WPM on the ferricy, whereas I could hit upwards of 130 on qwerty. going from 36 to 34 was particularly tricky, every key is load-bearing at that point.
I’ve been using non-standard layouts for 15+ years, and a mix of ortholinear and normal staggered keyboards for 5+ years. I can switch layouts mid-sentence and go between staggered and ortho layouts in a breeze as well (the only awkward part is to have two keebs on the same table), the entire typing should be in your muscle memory and not in your head. It can be done, without issue.
And keyboards like these have nearly nothing to do with ergonomics. Keyboards are awkward and stupid to use for humans. :)
And keyboards like these have nearly nothing to do with ergonomics. Keyboards are awkward and stupid to use for humans. :)
Do you think any writing/typing implement is ergonomic?
The closest are probably Maltron, Kinesis Advantage, Dactyl and friends. And while I love using dvorak I don’t have any delusions that my layout of choice would improve ergonomics in any way (beyond placebo, which is powerful in itself).
I switched to ortholinear about 4-ish months ago. I swap back to a standard TKL board for gaming, though that’s partially because I have a tented split keyboard. At first I had a little trouble switching back and forth between ortholinear and staggered, but after the first maybe 2 weeks I don’t have much trouble switching back and forth.
Ergonomics are super important in this line of work.
Agreed. And it’s great that there are so many keyboard options because it seems everyone needs something different. I love the Planck, despite its flaws. After trying a few different styles I settled on the Planck because I have small hands and the less distance my fingers have to travel the better.
Most companies are not using cloud as a replacement for colo. RDS, SQS, S3, managed elasticsearch, etc are really really valuable and difficult to replicate on your own. Of course the cloud vendors want to lock you in to these services and then overcharge you for the basics, just like some grocery stores lure you in with cheap specialty foods and then overcharge for bread and milk. It doesn’t mean it’s a bad deal though.
RDS and S3 are standouts in part because the lock-in is operational, not architectural.
You can develop against vanilla PostgreSQL, deploy on RDS, then change your mind – or at least threaten AWS with a change at contract renewal time – and switch to Fly.io’s managed Postgres. (Or any of the other excellent hosted offerings.) Or go “on-prem”, “edge”, etc. (I.e., run your own servers.)
S3 was a moat but the API + domain model are now available from your choice of vendors, including Minio if you want to roll your own.
I’m far more suspicious of applications that make heavy use of SQS, DynamoDB, etc. without having a really strong proof they need that scale and all the distsys pain it brings. You can get a long way on Celery (or your choice of “worker” tools) running batch jobs from your monolith against a “queue” table in Postgres. IME most projects/companies fail long before they outgrow the “COSP” threshold.
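For illustration, here's the shape of that "queue table" pattern, sketched against sqlite3 as a stand-in (the schema and names are mine; a real Postgres version would claim rows with `SELECT … FOR UPDATE SKIP LOCKED` so concurrent workers never grab the same job):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE jobs (
    id INTEGER PRIMARY KEY,
    payload TEXT NOT NULL,
    status TEXT NOT NULL DEFAULT 'pending'
)""")
db.execute("INSERT INTO jobs (payload) VALUES ('send-email'), ('resize-image')")

def claim_job(conn):
    """Claim the oldest pending job, or return None if the queue is empty.

    Postgres would use FOR UPDATE SKIP LOCKED inside this transaction;
    sqlite serializes writers, so a plain transaction suffices for the demo.
    """
    with conn:  # one transaction: pick the row, then mark it running
        row = conn.execute(
            "SELECT id, payload FROM jobs WHERE status = 'pending' "
            "ORDER BY id LIMIT 1"
        ).fetchone()
        if row is None:
            return None
        conn.execute("UPDATE jobs SET status = 'running' WHERE id = ?", (row[0],))
    return row

job = claim_job(db)  # -> (1, 'send-email')
```

A cron loop or a handful of worker processes polling this table covers a surprising share of "we need SQS" use cases.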
For cost management, disaster recovery and business continuity, and the ability to work + test your systems offline, I think minimal cloud provider API surface in your application is a Good Thing. That + “don’t create a ton of microservices” (also good advice in most cases) usually implies: monolith + one big backend database + very select service extractions for e.g. PII that shouldn’t sit in the main DB.
You can develop against vanilla PostgreSQL, deploy on RDS, then change your mind – or at least threaten AWS with a change at contract renewal time – and switch to Fly.io’s managed Postgres.
How does this work with security, though? Fly.io’s managed Postgres is going to be open to the internet, presumably, whereas in AWS I can control (and log) network access as I see fit.
I think Fly has a pretty good story here, actually: https://fly.io/docs/reference/private-networking/
But really, any managed DB vendor is going to have better network controls than “just use pg_hba.conf”. Most even offer AWS VPC bridging.
Thanks for the link. I was maybe thinking of Supabase when I wrote the comment. Like if the business is providing managed databases but no compute then doesn’t the database basically have to be open to the internet so the backend servers can reach it? Eg talking to Supabase from Vercel or Netlify? Or can something clever be done with eg Wireguard to secure it all?
There are a few approaches that services like this take. Sometimes they provide access over a VPN (e.g. through Wireguard, this is what Fly.io managed Postgres does if you connect from a non-Fly.io service and how you connect to private RDS databases from outside AWS), and sometimes they do just have a database listening on an Internet IP/port (maybe secured by some IP whitelisting, usually secured by TLS, and definitely secured by username/password authentication; this is what DigitalOcean managed databases, Supabase direct connections, and public RDS databases do)
I guess it goes without saying that if you

- need 99.99+% uptime and want to sue/blame somebody big otherwise
- need a distributed database for a ton of accesses that “Just works”
- want a “familiar” stack where you can just slap some specific product of the three letter company as a requirement in the job description

… then go to the big cloud providers and pay your premium (be aware of the network and database per-operation fees); you already made up your mind. But I’d bet those are maybe 1% of the customers.
need 99.99+% uptime and want to sue/blame somebody big otherwise
I haven’t checked in a while, but I’ve never seen a cloud service actually meet this 99.99+% uptime. I don’t think any of them are very transparent about their historical outages anymore as they realized they weren’t having good uptime performance.
I checked a few years ago for $WORK, when some boss type wanted to move to the cloud. I compared all the data I could gather from the various cloud providers, and we handily beat them in uptime and total cost across time. I think I went back 5-ish years at the time, though I can’t seem to find that spreadsheet at the moment.
I agree there are valid reasons to move, but I would never blindly recommend switching dedicated stable compute to the cloud. Bursty compute however is a perfect fit for the cloud, and easy to recommend.
I’m always worried about comparisons in uptime to someone’s single company to big clouds. AWS will have both more issues and more varied ones, but they’ll be often limited in scope. It’s hard to compare it to a smaller local setup without a list of specific risks and expected time to recovery. At an extreme, the box under my desk at home had 100% uptime in the last few years, but I wouldn’t make decisions based on that.
I agree a single company’s uptime comparison vs cloud providers isn’t very useful to outsiders, but it can be useful in that single company’s decision-making. That’s why we did the comparison.
need 99.99+% uptime and want to sue/blame somebody big otherwise
More importantly, don’t want to pay for in-house expertise to manage the systems when it is not part of their core competency. For smaller companies, they often need 10% of a very qualified sysadmin. They can either hire a full-time one for 10x the price of what they actually need, or outsource to a cloud provider and, even if the markup is 100%, be paying 80% less.
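Back-of-the-envelope, with normalized (made-up) numbers, the 10%-of-a-sysadmin math works out like this:

```python
fte_cost = 1.0   # normalized cost of one full-time sysadmin
needed = 0.1     # the company only needs 10% of one
markup = 2.0     # assume the cloud provider charges a 100% markup

in_house = fte_cost                # must hire a whole person anyway
cloud = needed * fte_cost * markup # pay the markup, but only on what you use

saving = 1 - cloud / in_house      # fraction saved vs the full-time hire
```

Even with the provider doubling the underlying cost, the small company pays a fifth of what a full hire would cost.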
need a distributed database for a ton of access that “Just works”
The ‘Just works’ bit is far more important here than the ‘distributed’ or ‘ton of accesses’ part, because it translates to not having to pay an administrator.
want a “familiar” stack where you can just slap some specific product of the three letter company as a requirement in the job description
Again, this is a cost-saving thing. It’s much easier to hire in-house talent or to outsource a particular project to a (small or large) company if the infrastructure that they’re building on is generic and not something weird and bespoke that the developers would need to learn about.
In a huge number of cases, the cost of the infrastructure (cloud or on-prem) is tiny in comparison to the cost of the people to manage it. Using the cloud lets the provider amortise the cost of this over millions of customers and pass on a big chunk of that saving to you.
Buying a big server has a few drawbacks. If any hardware component fails, then you need to RMA that part, which means you need either an expensive support contract or you need someone on staff who is competent to identify the faulty component and send it back. If a cloud server fails, then your VM is restarted on another machine. If you are using PaaS offerings then someone else is responsible for building a platform that handles hardware failure and you don’t even notice.
If you want a separate test and production version, then you need at least two of those big servers, whereas with even IaaS offerings it’s trivial to spin up a clone of the production server for a test deployment on a different vnet and if you’re using PaaS then it’s even easier, and the number of test instances can easily scale with the number of developers in both cases.
TL;DR: If you think the cost of the hardware is important then either you’re thinking about massive deployments or you completely misunderstand the economics of this kind of thing.
In my experience the companies I have worked for tend to end up at least doubling their spend when moving from dedicated to cloud, for little added benefit and almost exactly the same maintenance burden. In one case, a company I worked for went from £3,200/year for a managed 24-core/112GB RAM dedicated box, with a 1-hour SLA on having a tech at the datacenter make changes/do maintenance/etc., to ~£1,400/month for far less resource; on top of that, they now had to handle the server changes/maintenance in house as well as manage the cloud infra, which actually required hiring someone new.
For my own company we rent two dedicated boxes (16-core/64GB RAM each) at a total cost of £108/mo which provides more than enough capacity, and in the past six years has had five nines uptime while costing a fraction of what it would have to go with cloud.
now had to handle the server changes/maintenance in house
I’m not sure I understand. What server maintenance are you doing for a cloud based servers that’s comparable to the dedicated one?
with 1 hour SLA on having a tech at the datacenter make changes/do maintenance/etc
That’s 1h SLA to having someone look at the issue, not for a working replacement, correct?
I’m not sure I understand. What server maintenance are you doing for a cloud based servers that’s comparable to the dedicated one?
It was more running updates, kernel patches and such. With the managed setup the hosting provider acted as ops and took responsibility for ensuring updates didn’t break production; they were our sysops. There were a few cases when we were being bottlenecked by various hardware and requested it be replaced. Every so often I got a call from the datacenter’s ops team to confirm server access was legitimate, or to inform me the server had some unusual activity on it and they had investigated overnight.
That’s 1h SLA to having someone look at the issue, not for a working replacement, correct?
Typically it was an instant phone call to someone in the datacenter who would either remote into the box, or walk over to it and deal with it in the rack; the SLA was on getting hold of someone on the floor who could remote in and diagnose what was wrong live. No call centre, no account handler, no middle men, a direct line to an experienced sysops engineer; that’s pretty rare nowadays.
A couple of nits, directly:
More importantly, don’t want to pay for in-house expertise to manage the systems when it is not part of their core competency.
I would argue that managing systems is a core part of developer competency, and I’m tired of people acting like it’s not–especially when those people seem to be frequently employed by companies whose business models depend on the meme of systems administration being some black art that can only be successfully trusted to the morlocks lurking in big data centers.
Using the cloud lets the provider amortise the cost of this over millions of customers and pass on a big chunk of that saving to you.
This is manifestly not what’s happening, though, as we’re seeing. The savings are being passed on to the shareholders–and if they aren’t, we should all be shorting MSFT and AMZN!
If you want a separate test and production version, then you need at least two of those big servers
Or, you know, you host both things on the same box under different VMs, or under different vhosts. This has been a problem with a well-known solution since the late 90s (though sadly not reliably applied).
you completely misunderstand the economics of this kind of thing.
Well…
I submit that perhaps we aren’t the only ones who misunderstand the economics. :)
~
To be clear, there are some things like S3 that I just cannot be arsed to host. Hosted Postgres is nice when you don’t want to bother setting up metrics and automatic backups–but then again, I’m pretty sure that if somebody wrote a good script for provisioning that or a runbook then the problem would go away. It’s also totally fine to keep a beefy machine for most things and then spin off certain loads/aspects to cloud hosting if that’s your kink.
Remember, there was a time when the most sensible thing was to send your punchcards and batch jobs down to the IBM service bureau, because it was more economical. These things go in cycles.
Addendum, reading back over this:
The more I think about this, the bigger issue is probably that if you run your own infra there’s the requirement that there be some continuity of ownership and knowledge–and that is difficult in an industry right now where average tenure is something like less than two years for startups.
Most of my career so far has been, essentially, cleaning up somebody else’s historical mistakes by paving over them with my soon-to-be historical mistakes. An endemic part of the problem is always that very specific and arcane parts of the system are forgotten, or stop being understood, as the flow of brains does its thing. I used to be in camp “rewrite”, a decade ago. I’m now firmly in the camp “nooooooooo, fix it, please don’t do this to me, please please please fix it”
I’m honestly dumbstruck by how obvious this is once it’s pointed out explicitly.
Even when I started out 15+ years back, I had the distinct impression that traditional “ops” roles tended to have far higher average tenures than developer roles.
I would argue that managing systems is a core part of developer competency
I am not talking about developers, I am talking about companies. Most big cloud customers are not software companies, they are companies that have some in-house infrastructure that is a cost centre for their business: it is a necessary cost for them to make money, but it is not the thing that they make money from. They may employ some developers, but managing infrastructure and writing code are different (though somewhat overlapping) skill sets. Importantly, developers are not always the best administrators and, even when they are, time that they spend managing infrastructure is time that they are not spending adding features or fixing bugs in their code.
For a lot of these companies, they outsource the development as well, so the folks that wrote the code are contractors who are there for a few months and are then gone. An FTE sysadmin is a much higher cost.
This is manifestly not what’s happening, though, as we’re seeing. The savings are being passed on to the shareholders, and if they aren’t, we should all be shorting MSFT and AMZN!
That doesn’t follow. If it costs 100 times as much to manage 1000 machines as it does to manage one, then a company that passes on half of the saving to their customers will still be raking in cash. The amount that it costs to maintain a datacenter of a few tens of thousands of machines, with a homogeneous set of services running in large deployments across them, is vastly less than the cost of each customer maintaining their own share of that infrastructure.
We’ve seen figures in this very thread of at least a 2x price increase using cloud providers.
The numbers I’ve seen there are comparing hardware cost to hardware cost, which ignores the bit that’s actually expensive. They’re also talking about IaaS, which does not get most of the savings. And they’re talking about companies with steady-state loads, which is where IaaS does the worst. Renting a 64-core server is probably more expensive than buying one (a cloud vendor will pay less for it by buying in bulk, but that’s not a huge difference, and they want to make a profit). The benefit that you should get from IaaS is that you can move between a 2-core server and a 64-core server with a single click (or script) so that you can scale up for bursts. If you are a shop with a trickle of sales across the year and 100 times as many on Cyber Monday, for example, then you might need a 64-core system for 2 days a year and be happy with a 2-core machine the rest of the time. Comparing buying and renting a 64-core machine for the entire year is missing the point.
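A back-of-the-envelope sketch of that burst argument. The hourly rates below are made-up illustrative numbers, not any provider’s real pricing; the point is only the shape of the comparison:

```python
# Hypothetical prices to illustrate the burst-scaling argument;
# real cloud and hardware prices vary widely.
HOURLY_2_CORE = 0.10   # assumed on-demand rate for a 2-core instance
HOURLY_64_CORE = 3.20  # assumed on-demand rate for a 64-core instance
HOURS_PER_YEAR = 8760
BURST_HOURS = 48       # e.g. two days around Cyber Monday

# Renting the big machine all year, "just in case":
always_big = HOURLY_64_CORE * HOURS_PER_YEAR

# Running small and scaling up only for the burst:
burst_scaling = (HOURLY_2_CORE * (HOURS_PER_YEAR - BURST_HOURS)
                 + HOURLY_64_CORE * BURST_HOURS)

print(f"64-core all year:      ${always_big:,.0f}")
print(f"2-core + 48h of burst: ${burst_scaling:,.0f}")
```

With these made-up rates the always-on big box costs over twenty times as much as scaling up for the burst, which is the gap a steady-state comparison hides.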
Not just small companies. Larger companies often have terrible tech ops. Moving to ops as a service can be a way to fix that, though there is the danger that your existing ops people and processes will infect what you do in the cloud and either prevent you from getting the advantages or even make it worse than what you had.
Interesting, it didn’t occur to me that only 1% of customers would want good uptime they’re not responsible for, a reliable database, and an easy-to-match watchword for hiring.
I’ve got a 99.99% SLA on some tiny box at some irrelevant hoster in Germany, with a downtime of 1 hour in 10 years when the whole box died (it was up again within an hour on another system). So you could say I’ve got my 99.9% without any failover.
If that’s possible for a normal company with only some KVM + guaranteed CPU, RAM and bandwidth, you may not need the big(tm) cloud for that same hardware.
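The arithmetic behind those nines is worth sketching, since "1 hour in 10 years" sounds casual but is actually a very strong number:

```python
# Allowed downtime per year for a given availability target.
HOURS_PER_YEAR = 24 * 365

def allowed_downtime_hours(availability_pct: float) -> float:
    """Hours per year a service may be down and still meet the SLA."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime allows {allowed_downtime_hours(sla):.2f} h/year of downtime")
```

99.9% allows about 8.76 hours of downtime per year, and 99.99% about 53 minutes; one hour down across ten years works out to roughly 99.999% uptime, so the commenter’s single box clears both bars.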
I have seen far more (and longer) outages caused by messing up with cloud systems than by hardware failure.
Some examples I have personally seen:
Also, I didn’t mention it, but I’ve got a 24/7 hotline in case my system is down. I won’t pay anything as long as it’s not my fault (otherwise I’m billed for every 15 minutes), and I did use it one Sunday when the network latency spiked.
need 99.99+% uptime and want to sue/blame somebody big otherwise
Many companies, and even just clubs and the like, had that kind of uptime long before cloud providers were a thing, and if you look at the guarantees from cloud providers, you will generally not find more than what most companies provide. While cloud providers have more staff, they also have way more complexity than smaller companies, bringing their own kinds of outages. Every now and then you hit limitations of managed services, or need to upgrade because they decided to change something, which can be harder to plan for than in your own company. And good luck if you hit some bug based on the particulars of how you use the service and have to go through layers of support lines, unless you are really big - big enough to easily do most stuff in-house.
I set up Elasticsearch ten years ago on physical machines and it was fairly trivial. I think early on that was one of their main selling points. We even helped a very big bank set it up on their infrastructure; when we came over to discuss any remaining topics, they were already done and had built their own orchestration around it. Fun fact: they had basically built their own Nomad/Kubernetes, and I think it was largely shell scripts (not completely sure though!). I don’t know how it is these days though.
S3 is pretty easy to replace, and low-maintenance, with things like MinIO and SeaweedFS.
And also, if you ever run any serious setup where you (think you) need the cloud, you will certainly end up troubleshooting issues on the managed services, but only after scraping together enough evidence that it’s their issue. Even more fun when you have to go through their partners first. So you need people who are both experts in the field and experts with the particular cloud providers. In any capacity where you think you might actually need cloud providers, you certainly need people who could easily set things up on their own. And that is why you can make a ton of money in DevOps jobs, if you like doing that. There’s always need.
But even if you happen to never run into any of these problems: you usually need experts for the technologies you use, way before your standard server setup comes close to limiting you somehow. And it’s usually not clear-cut how much they need to know, so they will certainly know how to run these technologies. Again, that’s if you don’t run into any issues with your cloud provider’s setup, which at some point will happen, even with Amazon and Google. After all, they also run physical hardware, have tons of management infrastructure that can also have bugs, and have situations that their monitoring doesn’t detect.
The biggest thing is that you can blame them, but then you need to be able to prove it, which can be really hard at times, especially if you don’t know their setup.
I think there are a lot of “right-sounding” things said about cloud computing that typically aren’t inherently wrong either, but at best only apply to the practical reality to a certain degree, and cloud providers would be stupid not to make statements based on that - as do people wanting to get DevOps jobs, do consulting, or sell books. I think it’s rarely intentional though. It’s just easy to make a generic true-ish statement to justify what we do. But that goes into psychology.
That’s the thing. There are a small number of companies whose domain/problem space is such that they can 100% avoid lock-in by treating cloud instances strictly as VMs and running all their own services, but as your needs grow that can be SUPER hard to maintain without a sizable investment in engineering which not every company is willing to make.
Maybe they should? But they aren’t.
And that’s why the abuses of techno-optimism from the ruling classes are creating a new wave of luddism inside and outside the tech industry.
The argument from OP is not new: the conflict of humans vs machines has been a major trope of 20th century philosophy, literature and art, especially after the brutality of nazi-fascism in Europe. Actually, it’s the whole premise of entire fields of study, political institutions and organizations.
Obviously, this stuff is not taught to engineers, who are trained to uncritically implement anything that is requested of them. Just sprinkle some ethics-washing on top and they will believe they are the good guys.
It’s always fun (not really) when techbros discover they are perceived as the “bad guys” outside their bubble. They get mad at people writing “no programmers” or “no cryptobros” on dating apps or “if you work in tech, everybody hates you. Just leave” on the walls of a gentrified neighborhood.
Obviously, this stuff is not taught to engineers
Depends on the school; in Québec (maybe in the rest of Canada, I don’t remember) we are required to take a course on ethics in engineering. I also had a course on sociology (also geared towards technology and engineering), but I don’t know if it’s required outside of Polytechnique of Montréal.
These kinds of courses are taught throughout the world, for what I know, but they are very, very shallow compared to the responsibility and power that a software engineer has. Also, they tend to reinforce an idea of ethics that supports the status quo and usually draws the line at “95% of what is being done with technology is totally ok; the remaining 5% must be eradicated, and please don’t put AI in your weapons”. I don’t know the one you took, but all the syllabi of the courses I’ve seen are wildly insufficient.
Canada uses the word “engineer” very differently from the USA. Here it is a regulated term with requirements to be one (including ethics training). In the USA it can describe almost any practical job, but in this context often means “someone paid to write code”.
Hi, I hail from Quebec too, and I’ve been practising software development for the past decade and a half. Can’t legally call myself an engineer, only went through college. Most of the people I have worked with over a decade and a half are not legally allowed to call themselves engineers. So “this stuff is not taught to engineers” is not true, from a very technical standpoint, but the reality on the ground is that indeed, the practitioners are not taught that stuff.
That’s a pretty good point; almost all my colleagues went to the same engineering school, so I tend to forget that not all software developers went to engineering school.
They get mad at people writing “no programmers” or “no cryptobros” on dating apps or “if you work in tech, everybody hates you. Just leave” on the walls of a gentrified neighborhood.
They sure do love the engineers’ salaries when it comes to supporting a family or paying taxes for their community programs, though. Damnedest thing.
Finishing up work where I’m currently employed, preparing for my new place of employment. That’s gonna be most of it, really. Not-work-wise, snow’s been melting a lot, so probably a good deal of cleaning up on the yard.
There’s a decent snow storm supposedly headed our way tonight, lasting through much of tomorrow, so we’ll hunker down. I will be planting seeds for the garden, as well as building the temporary indoor greenhouse that’ll hold the plants until spring. We have huge maple trees on the property, and my wife’s father might come by on Sunday to see if we can pull out some sap to make syrup this spring. Oh, and using the snowblower a lot too.
I might put together a very simple note system that fits with my immediate flow, having seen a few people post about that this week. More or less, I only type notes from my computer, so it should be easy to write and consume from there, where I also do my work. Also, infrequently, I want to refer to my notes using my phone, so I will have a very minimal web interface on top. Probably slap a text search on top. No tags, no categories; I will lean on the search functionality for that. Initially I thought of using codesearch for that, and maybe ranking the search results by frequency of search terms or something. Idk. I want it to be really simple to use, and have zero frills.
Otherwise I’m feeling a little down and overwhelmed and professionally bored and I’m probably going to use Final Fantasy XIV as a temporary cure for that.
Running around in circles, mostly. I wanted to port plan9’s wikifs to Go so I could run it on one of my Linux boxes, and then I realized I could try to just port it to plan9port and get the added benefit of using the acme tools for wikifs. Then I struggled a good deal because I don’t know C well enough to do that easily, so I went back to my Go idea, and then it occurred to me that I could look at other packages that have been ported to plan9port, compare the different bits, and use that, which brings us up to now. And now, I decided to take a step back and really examine my wiki needs: since it’s personal notes, I don’t really care about versioning, and I don’t care about browser usage except for reading, and that means I can throw away most of my make-believe needs.
Also I’m exhausted for a variety of reasons so I got myself a month of Final Fantasy XIV in hopes that it would relax me a bit. So far so good.
I would assume that since it’s possible to do that, there’s a non-zero population of people doing that.
Browsers use sqlite for storage. https://www.acquireforensics.com/services/tech/mozilla-firefox.html
See comment above: https://lobste.rs/s/j5x2pm/apple_s_custom_nvme_drives_are_amazingly#c_m3wfvq
SQLite is ubiquitous on Apple devices. Almost anything that needs to store structured data uses it, either directly (Mail.app, or Cocoa’s URL cache) or as the backing store of Core Data.
Yes, but if you look into it (even the comments here in this post), SQLite doesn’t suffer from the problem, and they do a full sync.
And on consumer devices, I’d assume the speed problem isn’t as critical as if you were running, idk, a prod SaaS DB with tens of thousands of concurrent users touching things.
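The “full sync” mentioned above corresponds to SQLite’s `synchronous` pragma, which trades write speed for durability. A minimal sketch of inspecting and setting it from Python (the in-memory database here is just for illustration; the pragma matters for on-disk files):

```python
# SQLite's durability/speed trade-off is controlled by PRAGMA synchronous.
# FULL syncs to disk at the critical moments; OFF leaves durability
# entirely to the OS, which is faster but risks corruption on power loss.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA synchronous = FULL")
mode = conn.execute("PRAGMA synchronous").fetchone()[0]
print(mode)  # 2 == FULL, 1 == NORMAL, 0 == OFF
conn.close()
```

This is part of why SQLite tolerates drives that handle flushes honestly: with `FULL`, it explicitly waits for the sync before considering a transaction committed.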
This aesthetic hits me right in the childhood. I love it, and I wish I could bring back that look on my 2021 MacBook.
In the meantime, there’s Poolsuite / FM and Macintosh.js.
unmedicated schizophrenics to find each other and thereby elevate their delusions into national movements
This line really irritates me. The tone of it was unnecessary.
Schizophrenics are victims of their own minds (and often other people).
It’s not their fault that they’re delusional, they can’t help it.
Also, from my own experience, it’s usually stupid people making national movements.
The actual crazy ones end up drinking or drugging themselves to death.
I’m speaking from experience as a schizophrenic, who also has friends and family that are schizophrenic.
I know that this is off-topic, but that line really threw me off and I can’t stop fixating on it.
The whole article kinda got ignored when I read the line.
Agreed. That statement is in poor taste and makes it sound like mental illness is just a malicious choice people can make.
While I do understand your reasoning, he was explicitly speaking of unmedicated schizophrenics. Many possibly don’t know of their illness and don’t seek professional help (or even outright refuse it), which is also the case with depression, leading to many preventable suicides. I understood this section not only as a singular criticism of those schizophrenics, but of the general online culture, where it’s not common for content consumers to reach out and ask those they’re following to seek professional help when they exhibit pathological behaviour; instead, they often even cheer it on, leading to a vicious cycle.
One good example for this is Nikocado Avocado who literally eats himself to death. There may be some in the community asking him to stop and seek help, but the majority seem to close their eyes for this and only see the entertainment.
Schizophrenic traits seem to be beneficial for a career in social media, and there’s this clear dichotomy where very successful social media personas possibly know of their illness but knowingly don’t seek professional help out of fear of changing and becoming uninteresting.
To support this point, there’s even an extreme where streamers like ‘Sweet Anita’ market themselves with their mental illness. Even though I neither know her medical history nor am a medical expert, there’s a residual doubt that she might possibly choose a weaker medication, or no medication at all, for the aforementioned reasons.
The author was a bit careless in his formulation and I understood him to mean ‘willingly unmedicated schizophrenics’.
Tbh to me it read a lot like usage of a mental illness as a slur targeting the “conspiracies” crowds.
I wasn’t scare-quoting, merely regular-quoting, as a way to refer to what the author seems to be implying. Maybe wrongly so, too; it’s what I interpreted.
I was attempting humor, although my experience in the industry thus far suggests that there’s going to be no shortage of any kind of madness any time soon.
I don’t think Betteridge’s Law applies to blogs like this especially since this is obviously rhetorical, while Betteridge’s “Law” (which it’s really not) applies to possibly actionable (or at least evaluable) headlines about factual conditions (rather than rhetorical questions about possible future).
This year, we moved. We’ve got 5 kids, and the house was getting kinda tight (of course that’s debatable and subjective), so we bought a family-appropriately-sized centennial house about an hour further away from my employer’s office, taking shameless advantage of the pandemic to solidify a perma-remote position. I got lucky and sold my house just as the prices were stopping to pandemic-surge, and the surge hadn’t quite reached the place where I bought yet. It’s been a lot of work so far; moving 7 people ain’t trivial. I also learned to make mead. I like mead.
Aside from that, work-wise, it’s been pretty quiet. Kind of a weird death-march project where we kept hitting hurdles and pushing back the deadline. A tip: pushing back a deadline several times makes you stress about the deadline several times. Avoid if possible.
For 2022, we’re hopefully gonna take it a bit easier, I’m gonna finally learn Elixir and Phoenix in earnest because reasons, and I might look for a new gig when my stock options vest in November unless they top me up with a good deal of RSUs. Maybe a smaller company. I’d like a small-medium (50-200 employees) non-hypergrowth-oriented company with decent insurance. Dreaming does not cost much.
Hahahahaha yeah I can imagine why. My eldest is 10, my youngest is soon to be 2, and there’s no twins in between
I would like to research technology that does not require attention. I would also research technology that reuses old hardware instead of throwing it away. I would also research technology that makes running software in your own home (as opposed to in a SaaS) feasible again, possibly piggy-backing on the “reusing old hardware” research.
I would love for my mom to be able to (relatively) securely run a piece of software at home without having to understand the intricacies of setting it up. Something like how my smart tv runs, at most. I know that phones home, but like. You get the idea? Kinda? Idk.
More local shit that doesn’t require your attention, more software that you own on hardware that you own, that will still run when my company goes under. Peaceful, community-building things, that help you be in the real world, not suck you into a virtual hellscape.