I totally subscribe to the author's points about making poor trade-off decisions on build vs buy.
Logging and metrics are a great example of these. However, in my small organisation, the cost of the 20 servers needed just for metrics was greater than the leasing costs of the application servers themselves! Madness from microservices!
Logging also has the unfortunate habit of leaking passwords or credentials when developers forget that logging is actually, well, logged somewhere, or when debug/dev logging sneaks into production. Privacy, and in some cases a tamper-evident audit trail, is critically important.
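A cheap mitigation is scrubbing obvious credential patterns before they reach the log sink. A minimal sketch using Python's stdlib logging — the regex and the key names it matches are illustrative, not exhaustive:

```python
import logging
import re

# Illustrative pattern: matches "password=...", "token: ...", etc.
# Real secret formats vary; treat this as a sketch, not a complete list.
SECRET_RE = re.compile(
    r"(password|passwd|secret|token|api_key)\s*[=:]\s*\S+",
    re.IGNORECASE,
)

class RedactingFilter(logging.Filter):
    def filter(self, record):
        # Scrub the message in place; never drop the record itself.
        record.msg = SECRET_RE.sub(r"\1=<redacted>", str(record.msg))
        return True

logger = logging.getLogger("app")
handler = logging.StreamHandler()
handler.addFilter(RedactingFilter())
logger.addHandler(handler)

logger.warning("login failed for bob, password=hunter2")
# emits: login failed for bob, password=<redacted>
```

It won't catch everything, but it turns "we logged a password" from a certainty into an accident.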
Once you’ve got metrics being stored, you immediately find that the real effort – and value – is in creating and layering business-specific graphs, reports, and alerts on top of that raw data. No hosted solution can ever help you with this, as it's always bespoke.
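This is the kind of thing I mean — sketched here as a Prometheus alerting rule, where the metric name (`orders_total`) and the thresholds are invented for illustration; your business's equivalents will be different, which is exactly the point:

```yaml
groups:
  - name: business
    rules:
      - alert: OrderRateDropped
        # Fire if the order rate falls below half of the same window yesterday.
        expr: rate(orders_total[10m]) < 0.5 * rate(orders_total[10m] offset 1d)
        for: 15m
        labels:
          severity: page
        annotations:
          summary: "Order rate is less than half of the same time yesterday"
```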
In both cases, setting up our own logging & monitoring tools took trivial effort, on our own kit. Once you've done that, the expertise is portable to almost any other business or customer, and you can knuckle down to the real work of delivering business value through data insights.
collectd, Graphite, Riemann, Graylog, InfluxDB, Prometheus, etc. all deliver a hell of a punch on their own if you want to go the open-source route. Once you've developed a metrics/monitoring/logging stack, you're good to re-use it every time. The long-term payoff is huge.
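These tools are also trivial to integrate with. Graphite, for instance, accepts metrics over a plaintext TCP protocol: one `<path> <value> <timestamp>` line per sample, by default on port 2003. A minimal sketch — the metric path and host here are assumptions about your deployment:

```python
import socket
import time

def graphite_line(path, value, timestamp=None):
    # Graphite's plaintext protocol: "<metric.path> <value> <unix_ts>\n"
    ts = int(timestamp if timestamp is not None else time.time())
    return f"{path} {value} {ts}\n"

def send_metric(path, value, host="localhost", port=2003):
    # Carbon (Graphite's ingest daemon) listens for plaintext metrics
    # on TCP 2003 by default; host/port are deployment assumptions.
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(graphite_line(path, value).encode("ascii"))

# e.g. send_metric("app.web01.requests", 42)
```

When "instrument a new service" is a five-line function, the build-vs-buy maths changes a lot.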
That is a crazy bug. So I wasn’t wrong to be suspicious of M:N threading? I understand why Go went that route, but the complexity is crazy.
Here is another example of a years-long bug related to M:N threading (which I saved years ago):
The only runtime that gets this right is Erlang/OTP's BEAM, with pre-emptive concurrency.
I'd argue that the bug is having direct calls between code produced by two different compilers that disagree about what the ABI is, rather than M:N threading itself being the culprit. You could hypothetically want really tiny stacks for some reason other than M:N threading and hit the same bug.
Jess is super smart and doing amazing things with containers; that said, the claims she makes about what jails & zones can't do are not correct.
I would also argue that while you can't take apart the jails Death Star, in practice you have all the flexibility you need:
I'm not sure how much of this could be changed at runtime, but most of the usual needs (introspection, sharing) have straightforward solutions.
I haven't succeeded in getting DTrace working from the host system into a jailed process, but maybe somebody will point out how to do that. Then it's pretty much a full house as far as I'm concerned.
I’ll stay in my FreeBSD prison tyvm.
As a FreeBSD user, I can't disagree with the argument here, but I think that "boring" is entirely the wrong word. Perhaps the author is trying to ride the bandwagon of recent "boring" posts. It sounds to me more like he just wants his infra to work, and Linux isn't cutting it.
Speaking of how much time one spends debugging infrastructure, I’d say Apache Foundation projects represent a significant fraction of my debugging time.
Personally, I do find this stuff exciting. ZFS makes me want to do upgrades every day, and while sending snapshots across the network I'm positively jumping for joy. The simplicity of CARP and network load balancing with haproxy gives me goosebumps. But most of my colleagues definitely want boring infrastructure: stuff that doesn't wake them up at night, stuff that helps them ship apps, not the layers underneath.
Previously, Erlang & Webmachine; now Elixir + Cowboy + Plug, or Phoenix.