Threads for tonyarkles

  1. 23

    Is a language good because it has many features? My current thesis is that adding features to languages can open up new ways to encode entire classes of bugs, but adding features cannot remove buggy possibilities.

    1. 23

      If you have a foot-gun in your arsenal and you add a new safe-gun, sure, technically that’s just one more way you can shoot yourself in the foot, but that’s missing the point of having a safe-gun.

      Many features can be used as less bug-prone alternatives to old constructs, e.g. a match expression instead of a switch statement, where you could forget an assignment in a branch or forget a break and get unintentional fall-through. In the same way, features like unique_ptr in C++ can help reduce bugs compared to using bare pointers.
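
      A minimal sketch of that contrast in PHP (the $status variable and its values are hypothetical):

      ```php
      <?php
      // switch: forgetting a break silently falls through, and the
      // assignment has to be repeated (and can be forgotten) per branch.
      switch ($status) {
          case 'draft':
              $label = 'Draft';
              // missing break: control falls through to 'published'!
          case 'published':
              $label = 'Published';
              break;
      }

      // match (PHP 8+): an expression with no fallthrough, strict (===)
      // comparison, and an UnhandledMatchError for unhandled values
      // instead of silently doing nothing.
      $label = match ($status) {
          'draft'     => 'Draft',
          'published' => 'Published',
      };
      ```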

      1. 12

        Another thing worth mentioning is that PHP has also grown some good linters that keep you away from the unsafe footguns. I believe it’s gotten really good over the years.

        1. 7

          Just to fill this out:

          Psalm

          PHPStan

          EA Inspections Extended

          Sonar

          I actually run all of these. Obviously no linter is perfect and you can still have bugs, but if you’re passing all of these with strict types enabled, you’re not writing the bad amateur code that got PHP its reputation from the “bad old days”. PHP’s not perfect but it’s no more ridiculous than, say, JavaScript, which curiously doesn’t suffer from the same street cred problems.
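
          As a tiny sketch of what “strict types enabled” buys you (addDays is a made-up helper):

          ```php
          <?php
          declare(strict_types=1); // per-file opt-in to strict scalar type checks

          function addDays(DateTimeImmutable $date, int $days): DateTimeImmutable
          {
              return $date->modify("+{$days} days");
          }

          // Under strict_types this throws a TypeError at runtime, and tools
          // like Psalm or PHPStan flag it statically before it ever runs:
          addDays(new DateTimeImmutable(), "7");
          ```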

          1. 6

            …JavaScript, which curiously doesn’t suffer from the same street cred problems.

            I see what you’re saying, but JS actually does kinda have serious street cred problems. I mean, there are a ton of people who basically view JS programmers as second-class or less “talented”. And JS as a language is constantly mocked. I think the difference is that JS just happens to be the built-in language for the most widely deployed application delivery mechanism of all time: the web browser.

        2. 1

          It’s not as if match replaced switch; and why did switch have default fallthrough to begin with, whilst match doesn’t?

          1. 2

            It’s probably just taken verbatim from C. It’s funny because PHP seems to have taken some things from Perl, which curiously does not have this flaw (it does allow a fallthrough with the next keyword, so you get the best of both worlds).

            1. 1

              Switch has been in PHP since at least version 3.0, which is from the 1990s. Match doesn’t replace switch in the language, but it can replace switch in your own code, making your code better.

          2. 15

            I disagree. People saying this usually have C++ in mind, but I’d say C++ is an unusual exception in a class of its own. Every other language I’ve seen evolve has got substantially better over time: Java, C#, PHP, JS, Rust. Apart from Rust, these are old languages that kept adding features for decades and still haven’t jumped the shark.

            PHP has actually completely removed many of its worst footguns, like magic quotes or include-over-HTTP, and established patterns/frameworks that keep people away from the bad parts. They haven’t removed issues like the inconsistent naming of functions because, frankly, that’s a cosmetic issue that doesn’t get in the way of writing software; it mostly bothers people who don’t use PHP. PHP users have higher-priority, higher-impact wishes for the language, and PHP keeps addressing these.

            1. 2

              removed many of its worst footguns

              or the infamous mysql API (that was replaced by mysqli)

              edit: Also I like that the OOP vs functional interfaces keep existing. My old code just runs and I get the choice between OOP and functional stuff (and I can switch as I like)

              1. 1

                I liked the original mysql API. It was the easiest to use, with proper documentation, back then. A footgun is a good analogy. A gun can be used in a perfectly safe manner. Of course, if you eyeball the barrel or have no regard for basic safety rules about it being loaded or where it is pointed at any time, then yeah, things are going to go south sooner or later.

                Likewise, the old functional mysql API was perfectly usable and I never felt any worry about being hacked through SQL injection. If you are going to pass numbers as string parameters or rely on things like auto-escape, then, just like in the gun example, things are not going to end well. But let’s all be honest: at that point, getting hacked is to be expected.
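
                For contrast, a hedged sketch of the two styles (table, columns, and credentials are invented); the prepared statement keeps the query and the data separate, so the “basic safety rules” are enforced for you:

                ```php
                <?php
                // The old, injection-prone habit (mysql_* was removed in PHP 7,
                // shown here only for contrast):
                // $res = mysql_query("SELECT name FROM users WHERE id = " . $_GET['id']);

                // The mysqli replacement: a prepared statement.
                $db   = new mysqli('localhost', 'user', 'pass', 'app');
                $stmt = $db->prepare('SELECT name FROM users WHERE id = ?');
                $stmt->bind_param('i', $_GET['id']);   // data bound, never interpolated
                $stmt->execute();
                $row = $stmt->get_result()->fetch_assoc();
                ```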

                1. 1

                  I haven’t been around the PHP community in any serious capacity for probably 17 years now, but “with proper documentation” was a double-edged sword. The main php.net website was a fantastic documentation reference, except for the part where lots of people posted really terrible solutions to problems on the same page as the official documentation. As I grew as a developer, I learned where a lot of the footguns were, but starting out, the easy path was to just grab the solution in the comments on the page and use it, with all of the accompanying downsides.

                  1. 1

                    Already back in the day, it baffled me that the site even had comments, let alone people relying on them. I would never blindly trust anything in the comments.

            2. 8

              There is only one way of modifying a language that works in practice: adding new features. As one of my colleagues likes to say, you can’t take piss out of a swimming pool. Once a feature is in a language, you can’t remove it without breaking things. You can, however, follow this sequence:

              1. Add new feature.
              2. Recommend against using old feature.
              3. Refactor your codebase to avoid the old feature.
              4. Add static analysis checks to CI that you aren’t using the old feature.
              5. Provide compiler options to make use of the old features a hard error.

              At this point, the old feature technically exists in the language, but not in your codebase and not in new code. I’ve seen this sequence (1-4, at least) used a lot in C++, where unsafe things from C++98 were gradually refactored into modern C++ (C++11 and later), things like the C++ Core Guidelines were written to recommend against the older idioms and then integrated into static analysers and used in CI, so the old usages gradually fade.

              If you manage to get to step 5, then you can completely ignore the fact that the language still has the old warts.
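
              PHP can approximate the same sequence in userland; a hedged sketch (the function names are invented), with step 5 emulated by promoting deprecation notices to exceptions in CI:

              ```php
              <?php
              /** @deprecated Use Money::parse() instead. */ // step 2: analysers can report callers
              function parse_money(string $raw): int
              {
                  trigger_error('parse_money() is deprecated', E_USER_DEPRECATED);
                  return (int) round((float) $raw * 100);    // old implementation
              }

              // Step 5, approximated: in the CI bootstrap, make deprecations fatal.
              set_error_handler(function (int $no, string $msg): bool {
                  throw new ErrorException($msg, 0, $no);
              }, E_USER_DEPRECATED | E_DEPRECATED);
              ```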

              1. 6

                I thought I was going crazy. Needed validation as no one would state the obvious.

                None of these features is a game changer for PHP. And even less so is all the composer and laravel craze that pretty much boils down to a silly explosion of javaesque boilerplate code.

                Heck, even the introduction of a new object model back in PHP 5 had marginal impact on the language at best.

                PHP’s killer features were:

                • Place script in location to deploy and map a URL to it
                • Out-of-the-box MySQL support. Easy-to-use alternatives were paid back then, and connecting to MySQL or PostgreSQL was a PITA in most languages.
                • A robust template engine. It is still among the best and most intuitive to use out there, although alternatives now exist for every language.
                • Affordable availability on shared hosting with proper performance. This blew the options out of the water, with alternatives costing up to three orders of magnitude more for a minimum setup.

                These things are not killer features anymore. Writing a simple webapp with a Sinatra-like framework is easier than setting up PHP. The whole drop-a-file-to-deploy model only made sense in the days of expensive shared servers. It is counterproductive in the $3 VPS era.

                I would prefer if the language would:

                1. Ship a robust, production-grade HTTP server to use with the language, instead of the whole mess currently required to use it via third-party web servers

                2. Even better: drop the whole HTTP request and response as the default input/output. It makes no sense nowadays; it is just a cute relic from past decades, and more a source of trouble than a nicety.

                1. 1

                  Place script in location to deploy and map a URL to it

                  Which was possible for years before PHP via CGI and is no longer possible for PHP in many setups. PHP != mod_php

                  1. 6

                    Which was possible for years before PHP via CGI

                    mod_php did this better than CGI did at the time.

                    1. From what I remember from trying out this stuff at the time, the .htaccess boilerplate for mod_cgi was more hassle and harder to understand.
                    2. CGI got a rep for being slow. fork/exec on every request costs a little; starting a new Perl interpreter or whatever on every request costs a lot. (And CGI in C was a productivity disaster.)
                    3. PHP had features like parsing query strings and form bodies for you right out of the box. No need to even write import cgi.

                    Overall the barrier to entry to start getting something interactive happening in PHP was much lower.

                    From what I remember the documentation you could find online was much more tutorial shaped for PHP than what you could find online for CGI.

                    PHP != mod_php

                    Sure now, but pm is discussing the past. PHP == mod_php was de facto true during the period of time in which PHP’s ubiquity was skyrocketing. Where pm above describes what PHP’s killer features “were”, this is the time period they are describing.

                    1. 4

                      mod_php did this better than CGI did at the time.

                      It also did it much worse. With CGI, the web server would fork, setuid to the owner of the public_html directory, and then execve the script. This had some overhead. In contrast, mod_php would run the PHP interpreter in-process. This meant that it had read access to all of the files that the web server had access to. If you had database passwords in your PHP scripts, then you’d better make sure that you trust all of the other users on the system, because they can write a PHP script that reads files from your ~/public_html and sends them to the requesting client. A lot of PHP scripts had vulnerabilities that let them dump the contents of any file that the PHP interpreter could read, and this became any file the web server could read when deployed with mod_php. I recall one system I was using being compromised because the web server could read the shadow password file, someone was able to dump it, and then they were able to do an offline attack (back then, passwords were hashed with MD5, and an MD5 rainbow table for a particular salt was something that was plausible to generate) and find the root password. They then had root access on the system.

                      This is part of where the PHP hate came from: ‘PHP is fast’ was the claim, and the small print was ‘as long as you don’t want any security’.

                      1. 1

                        This is completely irrelevant to the onboarding experience.

                        Either way, empirically, people didn’t actually care all that much about the fact that their php webhosts were getting broken into.

                        1. 1

                          This is completely irrelevant to the onboarding experience.

                          It mattered for the people who had their database credentials stolen because mod_php gave everyone else on their shared host read access to the file containing them. You’re right that it didn’t seem to harm PHP adoption though.

                    2. 2

                      Not to the same extent at all. CGI would spawn an operating-system process per request. It was practically impossible to keep safe. PHP outsourced the request lifecycle out of the developer’s concern, and did so with a huge performance gain compared to CGI. While in theory you could do “the same” with CGI, in practice it was just not viable. When PHP4 arrived, CGI was already in a downward spiral, with most hosting providers disabling access to it. Meanwhile, Microsoft and Sun Microsystems followed PHP’s philosophy by offering ASP and JSP, which had their own share of popularity.

                      PHP is, by and large, mod_php and nowadays FPM. The manual’s introductory tutorial even assumes such usage. Had they packaged it early on as a regular programming language, with its primary default interpreter hooked up to standard streams, it might have been forgotten today. Although personally, I think they should have made that switch long ago.

                  2. 4

                    “Programming languages should be designed not by piling feature on top of feature, but by removing the weaknesses and restrictions that make additional features appear necessary.”

                    https://schemers.org/Documents/Standards/R5RS/HTML/

                    1. 1

                      I think it’s the same principle as with source code: you want as little as possible while keeping things readable and correct

                    1. 44

                      Name popular OSS software, written in Haskell, not used for Haskell management (e.g. Cabal).

                      AFAICT, there are only two: pandoc and XMonad.

                      This does not strike me as being an unreasonably effective language. There are tons of tools written in Rust you can name, and Rust is a significantly younger language.

                      People say there is a ton of good Haskell locked up in fintech, and that may be true, but a) fintech is weird because it has infinite money and b) there are plenty of other languages used in fintech which are also popular outside of it, eg Python, so it doesn’t strike me as being a good counterexample, even if we grant that it is true.

                      1. 28

                        Here’s a Github search: https://github.com/search?l=&o=desc&q=stars%3A%3E500+language%3AHaskell&s=stars&type=Repositories

                        I missed a couple of good ones:

                        • Shellcheck
                        • Hasura
                        • Postgrest (which I think is a dumb idea, lol, but hey, it’s popular)
                        • Elm
                        • Idris, although I think this arguably goes against the not used for Haskell management rule, sort of

                        Still, compare this to any similarly old and popular language, and it’s no contest.

                        1. 15

                          Also Dhall

                          1. 9

                            I think postgrest is a great idea, but it can be applied to very wrong situations. Unless you’re familiar with Postgres, you might be surprised by how much application logic can be modelled purely in the database without turning it into spaghetti. At that point, you can make the strategic choice of modelling a part of your domain purely in the DB and letting the clients work directly with it.

                            To put it differently, postgrest is an architectural tool: it can be useful for giving front-end teams a fast path to maintaining their own CRUD stores and endpoints. You can still have other parts of the database behind your API.

                            1. 6

                              I don’t understand Postgrest. IMO, the entire point of an API is to provide an interface to the database and explicitly decouple the internals of the database from the rest of the world. If you change the schema, all of your Postgrest users break. An API is an abstraction layer serving exactly what the application needs and nothing more, and it provides a way to maintain backwards compatibility if you need it. You might as well just send a SQL query to a POST endpoint and eliminate the need for Postgrest - not condoning it, but saying how silly the idea of postgrest is.

                              1. 11

                                Sometimes you just don’t want to make any backend application, only to have a web frontend talk to a database. There are whole “as-a-Service” products like Firebase that offer this as part of their functionality. Postgrest is self-hosted that. It’s far more convenient than sending bare SQL directly.
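
                                To make that concrete, a hedged sketch of what the frontend-to-database conversation looks like (the articles table and its columns are hypothetical); column selection, filtering, ordering, and paging are all query parameters, with no backend code:

                                ```php
                                <?php
                                // Roughly: SELECT title, published_at FROM articles
                                //          WHERE status = 'live'
                                //          ORDER BY published_at DESC LIMIT 10;
                                $url = 'http://localhost:3000/articles'
                                     . '?select=title,published_at'
                                     . '&status=eq.live'
                                     . '&order=published_at.desc'
                                     . '&limit=10';

                                $rows = json_decode(file_get_contents($url), true);
                                ```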

                                1. 6

                                  With views, one can largely get around the “change the schema, break the API” problem. Even so, as long as the consumers of the API are internal, you control both ends, so it’s pretty easy to just schedule your cutovers.
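
                                  A hedged sketch of that view trick (all names invented): the table gets refactored, while a view preserves the old contract the API consumers see:

                                  ```php
                                  <?php
                                  $pdo = new PDO('pgsql:host=localhost;dbname=app', 'app', 'secret');

                                  $sql = <<<'SQL'
                                  -- "users" was split into "accounts" + "profiles", but the view
                                  -- keeps the old shape, so PostgREST clients don't notice.
                                  CREATE VIEW api.users AS
                                  SELECT a.id,
                                         p.full_name AS name,   -- old column name preserved
                                         a.created_at
                                    FROM accounts a
                                    JOIN profiles p ON p.account_id = a.id;
                                  SQL;
                                  $pdo->exec($sql);
                                  ```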

                                  But I think the best use-case for Postgrest is old stable databases that aren’t really changing stuff much anymore but need to add a fancy web UI.

                                  The database people spend 10 minutes turning up Postgrest and leave the UI people to do their thing and otherwise ignore them.

                                  1. 1

                                    Hah, I don’t get views either. My philosophy is that the database is there to store the data. It is the last thing that scales. Don’t put logic and abstraction layers in the database. There is plenty of compute available outside of it, and APIs can do the precise data abstraction needed for the apps. Materialized views, maybe, but it still feels wrong. SQL is a pain to write tests for.

                                    1. 11

                                      Your perspective is certainly a reasonable one, but not one I or many people necessarily agree with.

                                      The more data you have to mess with, the closer you want the messing-with to the data, i.e. in the same process if possible :) Hence PL/pgSQL and all the other languages that can get embedded into SQL databases.

                                      We use views mostly for 2 reasons:

                                      • Reporting
                                      • Access control.
                                      1. 2

                                        Have you checked row-level security? I think it creates a good default, and then you can use security definer views for when you need to override that default.

                                        1. 5

                                          Yes, that’s exactly how we use access control views! I’m a huge fan of RLS, so much so that all of our users get their own role in PG, and our app(s) auth directly to PG. We happily encourage direct SQL access to our users, since all of our apps use RLS for their security.

                                          Our biggest complaint with RLS: none(?) of the reporting front ends out there have any concept of RLS, or really DB security in general; they AT BEST offer some minimal app-level security that’s usually pretty annoying. I’ve never been upset enough to write one…yet, but I hope someone someday does.
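
                                          For readers who haven’t seen it, a minimal sketch of the per-role RLS setup being described (all table, column, and role names are hypothetical):

                                          ```php
                                          <?php
                                          // One-time setup, run by the schema owner (not by end users):
                                          $admin = new PDO('pgsql:host=localhost;dbname=app', 'owner', 'secret');
                                          $admin->exec(<<<'SQL'
                                          ALTER TABLE orders ENABLE ROW LEVEL SECURITY;

                                          -- Each employee sees only their own department's rows.
                                          CREATE POLICY dept_only ON orders
                                              USING (department = (SELECT department
                                                                     FROM employees
                                                                    WHERE pg_role = current_user));
                                          SQL);

                                          // Each user then connects as their own PG role; a plain SELECT is
                                          // already filtered, no matter which tool or ad-hoc client issues it.
                                          $alice = new PDO('pgsql:host=localhost;dbname=app', 'alice', 'secret');
                                          $rows  = $alice->query('SELECT * FROM orders')->fetchAll();
                                          ```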

                                          1. 2

                                            That’s exactly how we use access control views! I’m a huge fan of RLS, so much so that all of our users get their own role in PG

                                            When each user has their own role, that usually means ‘role explosion’ [1]. But perhaps you have other methods/systems that let you avoid that.

                                            How do you do, for example, “user ‘X’, when operating at location ‘Poland’, is not allowed to access report data ‘ABC’ before 8am or after 4pm UTC-2” in Postgres?

                                            [1] https://blog.plainid.com/role-explosion-unintended-consequence-rbac

                                            1. 3

                                              Well in PG a role IS a user, there is no difference, but I agree that RBAC is not ideal when your user count gets high as management can be complicated. Luckily our database includes all the HR data, so we know this person is employed with this job on these dates, etc. We utilize that information in our, mostly automated, user controls and accounts. When one is a supervisor, they have the permission(s) given to them, and they can hand them out like candy to their employees, all within our UI.

                                              We try to model the UI around “capabilities”, although it’s implemented through RBAC obviously, and is not a capability-based system.

                                              So each supervisor is responsible for their employees permissions, and we largely try to stay out of it. They can’t define the “capabilities”, that’s on us.

                                              How do you do, for example, “user ‘X’, when operating at location ‘Poland’, is not allowed to access report data ‘ABC’ before 8am or after 4pm UTC-2” in Postgres?

                                              Unfortunately PG’s RBAC doesn’t really allow us to do that easily, and we luckily haven’t yet had a need to do something that detailed. It is possible, albeit non-trivial. We try to limit our access rules to more basic stuff: supervisor(s) can see/update data within their sphere but not outside of it, etc.

                                              We do limit users based on their work location, but not their logged in location. We do log all activity in an audit log, which is just another DB table, and it’s in the UI for everyone with the right permissions(so a supervisor can see all their employee’s activity, whenever they want).

                                              Certainly different authorization system(s) exist, and they all have their pros and cons, but we’ve so far been pretty happy with PG’s system. If you can write a query to generate the data needed to make a decision, then you can make the system authorize with it.

                                      2. 4

                                        My philosophy is “don’t write half-baked abstractions again and again”. PostgREST & friends (like Postgraphile) provide selecting specific columns, joins, sorting, filtering, pagination, and more. I’m tired of writing that again and again for each endpoint, except that each endpoint is slightly different, as it supports sorting on different fields or different styles of filtering. PostgREST does all of that once and for all.

                                        Also, there are ways to test SQL, and databases supporting transaction isolation actually simplify running your tests. Just wrap your test in a BEGIN; ROLLBACK; block.
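
                                        A minimal sketch of that pattern with PDO (the test table is invented); the test sees its own writes, but nothing persists:

                                        ```php
                                        <?php
                                        $pdo = new PDO('pgsql:host=localhost;dbname=app_test', 'app', 'secret');

                                        $pdo->beginTransaction();   // BEGIN
                                        try {
                                            $pdo->exec("INSERT INTO accounts (name) VALUES ('fixture')");

                                            $n = $pdo->query("SELECT count(*) FROM accounts WHERE name = 'fixture'")
                                                     ->fetchColumn();
                                            assert((int) $n === 1); // visible inside the transaction...
                                        } finally {
                                            $pdo->rollBack();       // ...but the database is untouched after
                                        }
                                        ```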

                                        1. 2

                                          Idk, I’ve been bitten by this. It’s probably OK in a small project, but this is a dangerous tight coupling of the entire system. Next time a new requirement comes in that requires changing the schema, RIP: you wouldn’t even know which services would break and how many things would go wrong. Write fully-baked, well-tested, requirements-contested, exceptionally vetted, and excellently thought-out abstractions.

                                          1. 6

                                            Or just use views to maintain backwards compatibility and generate typings from the introspection endpoint to typecheck clients.

                                    2. 1

                                      I’m a fan of tools that support incremental refactoring and decomposition of a program’s architecture w/o major API breakage. PostgREST feels to me like a useful tool in that toolbox, especially when coupled with procedural logic in the database. Plus there’s the added bonus of exposing the existing domain model “natively” as JSON over HTTP, which is one of the rare integration models better supported than even the native PG wire protocol.

                                      With embedded subresources and full SQL view support you can quickly get to something that’s as straightforward for a FE project to talk to as a bespoke REST or GraphQL backend. Keeping the schema definitions in one place (i.e., the database itself) means less mirroring of the same structures and serialization approaches in multiple tiers of my application.

                                      I’m building a project right now where PostgREST fills the same architectural slot that a Django or Laravel application might, but without having to build and maintain that service at all. Will I eventually need to split the API so I can add logic that doesn’t map to tuples and functions on them? Sure, maybe, if the app gets traction at all. Does it help me keep my tiers separate for now while I’m working solo on a project that might naturally decompose into a handful of backend services and an integration layer? Yep, also working out thus far.

                                      There are some things that strike me as awkward and/or likely to cause problems down the road, like pushing JWT handling down into the DB itself. I also think it’s a weird oversight to not expose LISTEN/NOTIFY over websockets or SSE, given that PostgREST already uses notification channels to handle its schema cache refresh trigger.

                                      Again, though, being able to wire a hybrid SPA/SSG framework like SvelteKit into a “native” database backend without having to deploy a custom API layer has been a nice option for rapid prototyping and even “real” CRUD applications. As a bonus, my backend code can just talk to Postgres directly, which means I can use my preferred stack there (Rust + SQLx + Warp) without doing yet another intermediate JSON (un)wrap step. Eventually – again, modulo actually needing the app to work for more than a few months – more and more will migrate into that service, but in the meantime I can keep using fetch in my frontend and move on.

                                  2. 2

                                    I would add shake

                                    https://shakebuild.com

                                    not exactly a tool but a great DSL.

                                  3. 21

                                    I think it’s true that, historically, Haskell hasn’t been used as much for open source work as you might expect given the quality of the language. I think there are a few factors in play here, but the dominant one is simply that the open source projects that take off tend to be ones that a lot of people are interested in and/or contribute to. Haskell has, historically, struggled with a steep on-ramp, and that means that the people who persevered and learned the language well enough to build things with it were self-selected to be the sorts of people who were highly motivated to work on Haskell and its ecosystem, but it was less appealing if your goal was to do something else and get it done quickly. It’s rare for Haskell to be the only language that someone knows, so even among Haskell developers I think it’s been common to pick a different language if the goal is to get a lot of community involvement in a project.

                                    All that said, I think things are shifting. The Haskell community is starting to think earnestly about broadening adoption and making the language more appealing to a wider variety of developers. There are a lot of problems where Haskell makes a lot of sense, and we just need to see the friction for picking it reduced in order for the adoption to pick up. In that sense, the fact that many other languages are starting to add some things that are heavily inspired by Haskell makes Haskell itself more appealing, because more of the language is going to look familiar and that’s going to make it more accessible to people.

                                    1. 15

                                      There are tons of tools written in Rust you can name

                                      I can’t think of anything off the dome except ripgrep. I’m sure I could do some research and find a few, but I’m sure that’s also the case for Haskell.

                                      1. 1

                                        You’ve probably heard of Firefox and maybe also Deno. When you look through the GitHub Rust repos by stars, there are a bunch of ls clones weirdly, lol.

                                      2. 9

                                        Agree … and finance and functional languages seem to have a connection empirically:

                                        • OCaml and Jane St (they strongly advocate it, mostly rejecting polyglot approaches, doing almost everything within OCaml)
                                        • the South American bank that bought the company behind Clojure

                                    I think it’s obviously the domain … there is simply a lot of “purely functional” logic in finance.

                                        Implementing languages and particularly compilers is another place where that’s true, which the blog post mentions. But I’d say that isn’t true for most domains.

                                        BTW git annex appears to be written in Haskell. However my experience with it is mixed. It feels like git itself is more reliable and it’s written in C/Perl/Shell. I think the dominating factor is just the number and skill of developers, not the language.

                                        1. 5

                                          OCaml also has a range of more or less (or once) popular non-fintech, non-compiler tools written in it. LiquidSoap, MLDonkey, Unison file synchronizer, 0install, the original PGP key server…

                                          1. 3

                                            Xen hypervisor

                                            1. 4

                                              The MirageOS project always seemed super cool. Unikernels are very interesting.

                                              1. 3

                                                Well, the tools for it, rather than the hypervisor itself. But yeah, I forgot about that one.

                                            2. 4

                                              I think the connection with finance is that making mistakes in automated finance is actually very costly on expectation, whereas making mistakes in a social network or something is typically not very expensive.

                                            3. 8

                                              Git-annex

                                              1. 5

                                                Not being popular is not the same as being “ineffective”. Likewise, something can be “effective”, but not popular.

                                                Is JavaScript a super effective language? Is C?

                                                Without going too far down the language holy war rabbit hole, my overall feeling after so many years is that programming language popularity, in general, fits a “worse is better” characterization where the languages that I, personally, feel are the most bug-prone, poorly designed, etc, are the most popular. Nobody has to agree with me, but for the sake of transparency, I’m thinking of PHP, C, JavaScript, Python, and Java when I write that. Languages that are probably pretty good/powerful/good-at-preventing-bugs are things like Haskell, Rust, Clojure, Elixir.

                                                1. 4

                                            In the past, a lot of the reason I’ve seen people turned away from using Haskell-based tools has been the perceived pain of installing GHC, which admittedly is quite large, and it can sometimes be a pain to figure out which version you need. ghcup has improved that situation quite a lot by making the process of installing and managing old compilers significantly easier. There’s still an argument that GHC is massive, which it is, but storage is pretty cheap these days. For some reason I’ve never seen people make similar complaints about needing to install multiple versions of Python (though this is less of an issue these days).

                                            The other place where large Haskell codebases are locked up is Facebook - Sigma processes every single post, comment, and message for spam, at 2,000,000 req/sec, and is all written in Haskell. Luckily the underlying tech, Haxl, is open source - though few people seem to have found a particularly good use for it; you really need to be working at quite a large scale to benefit from it.

                                                  1. 2

                                                    hledger is one I use regularly.

                                                    1. 2

                                                      Cardano is a great example.

                                                      Or Standard Chartered, which is a very prominent British bank, and runs all their backend on Haskell. They even have their own strict dialect.

                                                      1. 2

                                                        GHC.

                                                        1. 1

                                                          https://pandoc.org/

                                                          I used pandoc for a long time before even realizing it was Haskell. Ended up learning just enough to make a change I needed.

                                                        1. 5

                                                      Vectors go in, pairs come out. None of that other stuff matters. You could call this “premature de-optimization”, because the other answers were more wordy, but the other answers were making the same mistake the imperative guys were: mixing up analysis and code without realizing it.

                                                      I tend to like the term “functional decomposition” for this, but it’s very overloaded. I wish there were a term for taking the problem and rotating it so that as much of its complexity as possible points into the “generic tooling dimension”. In other words, we find that instead of decomposing the problem along the business axis, we can rotate it into the tooling axis, and then pull out complexity through a pure, colorless tool like zip, leaving the business domain less complex “for free”.

                                                          I notice this a lot in our domain code at work. (D is a surprisingly functional language.) When I’m trying to operate on some data, often the first thing I do is define some one-liner helpers so I have the generic verbs to even talk about the transformation I want to do. Preferably, these should just describe changes in the shape of the dataset, without commenting on the actual meaning of the data. So you can understand them on their own, maybe with some example integers for a unittest, before you go on to apply them to the actual problem.
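
                                                      The original here is D, but the idea translates; a rough PHP sketch of one such “generic verb” (the domain arrays at the end are hypothetical):

                                                      ```php
                                                      <?php
                                                      // Shape-only helper: says nothing about the meaning of the data.
                                                      function zip(array $a, array $b): array
                                                      {
                                                          return array_map(null, $a, $b); // [[a0, b0], [a1, b1], ...]
                                                      }

                                                      // Understandable on its own with example integers...
                                                      assert(zip([1, 2], [10, 20]) == [[1, 10], [2, 20]]);

                                                      // ...then the domain code reads as a change of shape, not a loop:
                                                      $pairs = zip($timestamps, $readings);
                                                      ```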

                                                          1. 2

                                                            Well, it’s Saturday morning and I’m feeling particularly philosophical. To riff on your idea about the axes, you got me thinking about Principal Component Analysis and a sort of qualitative version of it: how well do the “pieces” of a codebase align with either the business axis or the tooling axis? I’ve worked on codebases where the first two unit vectors of “PCA” of the “pieces” of a codebase would align very well with the business & tooling axes, and other codebases where they wouldn’t. The ones with good alignment were definitely way better to work with!

                                                            1. 2

                                                              I tend to like the term “functional decomposition” for this

                                                              This is usually how it’s taught: as a way to “clean up” the code. The problem is that we want to hold the code at arm’s length, apply some mechanical cleanup processes to it. You end up solving the immediate problem at the expense of the larger system.

                                                            1. 1

                                                              While we were busy fixing the linker to save 1MB, iOS 15 launched and quietly gave us 35MB more.

                                                              My original reaction was that Apple should have stood their ground and not caved into any pressure they might have been getting to relax the constraint. After all, bloat is bad, right? But then, I know from the problems with the original 128K Macintosh, as described in Insanely Great, that sometimes an arbitrarily chosen constraint really can be too severe.

                                                              1. 6

                                                                That last line there made me laugh pretty hard and reflect on Google App Engine circa 2008. At the time it only supported Python, it had a 2 second request limit, and a cold start of your app had to fit into the 2 second limit.

                                                                We jumped through all kinds of hoops to satisfy that, including reverse engineering parts of the system so that we could execute backend requests in parallel even though the public API didn’t support that. And then the runtime limit was bumped from 2 seconds to 30 seconds.

                                                                And… a year or two later the CTO left and joined the GAE team!

                                                              1. 2

                                                                 At first glance, this seems impossible. But with global warming and the environment in mind, I do wonder if it wouldn’t solve many of our problems. It wouldn’t be economically sane to offer infinite software updates for a one-time purchase, but for a recurring fee? That seems a lot more viable. Just sell the updates instead of giving them away for free. It is not even unthinkable, considering the massive movement towards subscription services that has been going on in the software world.

                                                                1. 2

                                                                  I don’t think this kind of law would solve any environmental problems. The environmental impact of a piece of consumer electronics like a PSP is that it took some amount of energy to build it in a factory (and build all of its inputs, transitively, until you get to the energy cost of mining them out of the ground), and took some amount of energy to ship it from the factory to the consumer. Exactly like every other physical good that people use - there’s nothing special about consumer electronics.

                                                                  The chain of reasoning that someone might perform to conclude that a law requiring perpetual software support of consumer hardware devices would solve the environmental problem of global warming is something like: a law would force manufacturers to provide software support for a longer period of time -> consumers would use their devices for longer than they currently do in the absence of such a law -> consumers would be less prone to buying new devices -> this would reduce demand at electronics factories to build and sell new consumer electronic devices -> they would use less energy as a production input -> less energy implies less burning of fossil fuels for energy -> less CO2 emitted into the atmosphere.

                                                                  I can see multiple problems with this chain of reasoning; to pick one in particular, I don’t think people actually do buy new electronic devices solely because their old ones no longer receive software updates. I think people buy new electronic devices because the state of the art of electronics devices is actually advancing, and people want to be able to do new things that new devices let them do and old devices can’t. A PSP that still received software updates doesn’t actually replace something like a Nintendo Switch - I can’t use a PSP to play Breath of the Wild or Metroid Prime 4, and if economic conditions compelled video game manufacturers to design modern games like these for the PSP’s hardware, because it was prohibitively expensive to sell new hardware, they would likely be inferior games.

                                                                   Something that people who are concerned about the environmental impact of human economic activity don’t think about clearly is that the environmental benefit of avoiding energy/resource use in a particular economic production process only happens if it is prohibitively expensive to use that particular energy or resources. A world where everyone was still playing video games on a PSP because after 2008 it became prohibitively expensive for companies to build a new video game system would be one where everyone’s material standard of living was lower, because all sorts of electronics made with similar manufacturing techniques would also be prohibitively expensive. If the cheapest smartphones cost $50,000, there would be way fewer smartphones manufactured because way fewer people could afford them at that price, and a billion people in the third world would just not be accessing the internet.

                                                                  1. 2

                                                                    I don’t think people actually do buy new electronic devices solely because their old ones no longer receive software updates. I think people buy new electronic devices because the state of the art of electronics devices is actually advancing

                                                                    Well, the size of the set of people who upgrade because a device improved is not empty, but is the size of the set of people who upgrade due to needless obsolescence fully empty? Two things can both be upgrade reasons to different people. There’s an environmental win if anybody stops upgrading, not necessarily everybody.

                                                                    I’m typing this from a 2013 era desktop that I could easily afford to replace. I don’t because it works, it’s updated, replacing it seems like effort, and I’m just too lazy. So I don’t think it’s true that I’d buy new electronics unless it’s prohibitively expensive - I just need moving to a new thing to be less convenient than using the old thing, which tends to favor the old thing unless the broader ecosystem actively rejects its ongoing use.

                                                                    1. 4

                                                                      The other part of this if were specifically talking about environmental impact is spare parts. Not just in the right-to-repair sense, but how long should Sony be required to keep the manufacturing line for PSP replacement batteries running? LiPos have a bad shelf life in general; they can’t really just produce 15 years worth of spares and stuff them in a warehouse somewhere because 15 years later they won’t be any good.

                                                                      1. 2

                                                                         But your new desktop allows you to use all the things modern desktops do (which is also why Android 2.3 has endured such a long time). Imagine a world where most modern software were not available for your machine. Sure, you get software updates for your 2013-era Firefox 18, but nothing new is coming out, since all machines have moved to, say, RISC-V and support for x86 was dropped entirely. This is the situation with the PSP.

                                                                        Yes sure, I have an old MacPro 2010 that upgraded with a halfway decent graphics card makes it a power-hungry but feasible Steam machine for the kind of gaming I do, but this is because post-2010 software still runs on it. If I had been limited to software from that era it would be way less useful and its main use would be to be carted around to vintage computer festivals to show off how well it runs Hypercard or so.

                                                                        1. 1

                                                                          The fact that this is feasible for you reflects the fact that in many ways desktop PC technology has actually stopped getting better, at least meaningfully better, and the time where it topped out was very roughly around 2010. If the computer you had was of 2008-vintage rather than 2013 (just five years older), there’s a much better chance that you’d be interested in upgrading - you’d see that many things people can do with modern desktop computers, such as visiting modern websites, don’t work very well on that hardware, you’d be much less likely to have a SSD, which really did represent a noticeable performance improvement (and which uses resources and energy to make, just like a PSP or any other piece of electronics).

                                                                           PCs don’t generally have software updates in and of themselves anyway; rather, it’s the operating systems on them that do (and various specialized components like the CPU or SSD might have their own separate and less routine firmware update process).

                                                                           If you ran modern Windows on your PC, you would still get updates for as long as Microsoft supported it - so if you ran Windows XP on that machine you’d be out of luck, but if it was Windows 7 you might still be OK. Of course eventually Microsoft will stop supporting Windows 7, but will support some new version of Windows that you can buy (will such a law make it illegal for Microsoft to stop supporting a version of Windows? Force them to start supporting Windows XP again? Windows 95?). Perhaps Microsoft might want to stop supporting hardware configurations that don’t have any USB 3 ports, perhaps by only making it possible to actually install the OS over USB 3. USB 3.1 was released as a standard in 2013, so it’s unlikely that your computer has it (although you could still add it via a PCI card - another electronic component that uses energy and resources to manufacture! - if necessary). Maybe the law would need a provision forbidding OS manufacturers in the mid-2020s from assuming that USB 3 exists…

                                                                          Of course if you ran Linux on that machine, it would be supported for longer. But even the Linux kernel dropped support for i386 machines as of kernel version 3.8 - with Linus Torvalds’ full blessing. Dropping software support for 25 year old hardware doesn’t sound like a forever software update to me - maybe the Linux foundation would need to be sued or criminally charged under this law?

                                                                          1. 1

                                                                            If the computer you had was of 2008-vintage rather than 2013 (just five years older), there’s a much better chance that you’d be interested in upgrading

                                                                            As luck would have it, I read your comment from (and am replying from) a 2007 MacBook.

                                                                            you’d see that many things people can do with modern desktop computers, such as visiting modern websites,

                                                                            That’s true, but not because of hardware. This 2007 MacBook is a great example of the lack of software updates forcing unnecessary obsolescence - the newest OS X it supports is 10.7, meaning it can’t run a modern browser and can’t browse a lot of sites. The hardware is fine. It had a user serviceable disk, RAM, and battery, so it’s had an SSD for a long time and its third battery holds a charge fairly well. From an environmental point of view, upgrading to an SSD seems less damaging than replacing the entire device.

                                                                            But to be clear though, I was just taking issue with your comment that the only way to avoid rapid hardware upgrades is to dramatically raise cost. That logic sort of implies that the instant consumers get money we go buy electronics, so if they cost more it’d take longer to get that money and we’d buy less. Tech incomes though mean most of us on this site can afford to buy new hardware tomorrow, so there must be a reason we don’t that’s not about money.

                                                                            Updates “forever” seems unrealistic and I never meant to imply that it should happen. It’s still a valid question though whether it’s appropriate to end software updates when a majority of devices manufactured of a particular model are still in active use. When that occurs, it strongly suggests software updates are being used to drive hardware sales when users are otherwise happy with the hardware.

                                                                        2. 2

                                                                          Something that people who are concerned about the environmental impact of human economic activity don’t think about clearly is that the environmental benefit of avoiding energy/resource use in a particular economic production process only happens if it is prohibitively expensive to use that particular energy or resources.

                                                                          This is a bit of a misrepresentation. All of the climate activism I have been to recently has emphasised the importance of economic and climate justice taking place at the same time. The point is to redistribute resources and to globally reduce emissions. For some this will mean getting by with less. For many more this would mean greater access to resources.

                                                                          More accurate carbon pricing on products doesn’t have to mean that people who are currently poor have to do without, but for that to happen there will have to be massive redistribution through aid and increased wages for poorer people all over the world (which will mean increased product prices on many items that are currently cheap only because human time is valued so differently in the global south versus the north, though automation and efficiencies will also become more competitive as labour becomes reasonably priced and that can help here).

                                                                          1. 1

                                                                            This is a bit of a misrepresentation. All of the climate activism I have been to recently has emphasised the importance of economic and climate justice taking place at the same time. The point is to redistribute resources and to globally reduce emissions. For some this will mean getting by with less. For many more this would mean greater access to resources.

                                                                            Unless a specific anti-carbon-emission scheme entails literally every human being in the world getting by with less resources whose production entails emitting CO2, it won’t actually work for the purpose of reducing CO2 emissions. What activists claim in the course of performing activism bears little relationship to what would actually happen in a world where a given policy actually exists and people make economic decisions in response to it.

                                                                            More accurate carbon pricing on products doesn’t have to mean that people who are currently poor have to do without, but for that to happen there will have to be massive redistribution through aid and increased wages for poorer people all over the world (which will mean increased product prices on many items that are currently cheap only because human time is valued so differently in the global south versus the north, though automation and efficiencies will also become more competitive as labour becomes reasonably priced and that can help here).

                                                                             It does mean that people who are currently poor (along with everyone else) have to do without, because “doing without” is the actual mechanism by which CO2 is prevented from being emitted into the atmosphere.

                                                                            1. 1

                                                                              Less CO2 can be emitted at the same time as the carbon budget for most people increasing because of the massive inequality in global carbon emissions. See e.g. https://ourworldindata.org/grapher/consumption-co2-per-capita

                                                                              1. 1

                                                                                This might be true; there’s a lot of economic processes producing goods and services that involve emitting CO2 besides consumer electronics manufacturing. Maybe flying planes turns out to dominate CO2 emission linked to economic activity and making it illegal to run an airline would reduce aggregate CO2 emissions more than everyone on earth being able to afford a cell phone. The point remains that if the specific thing you’re trying to reduce is resource consumption associated with inputs to making consumer electronics, the only way to do this is to make consumer electronics more expensive, so fewer people can afford to buy one, so fewer get physically made (making something illegal is one way of making it expensive - you could imagine a law saying that a person could only legally own one of a cell phone or a game console, and that would engender a black market in unregulated cell phones/game consoles, which would cost more money because of the illegality).

                                                                      1. 5

                                                                        Out of ADHD meds (there’s a shortage) and I don’t feel like I can focus on anything productive. Funny how quickly you can get used to normalcy and then it abruptly stops!

                                                                        1. 3

                                                                           If you don’t smoke, and it’s not a crime to do so in your jurisdiction (like it is here), you might find nicotine-containing vapes help.

                                                                          1. 4

                                                                             I found out about this some months ago when, again, there was a shortage. (hello fellow cigar enjoyer!)

                                                                            If anyone else is interested: Some study with n=17 on nicotine patches as a stimulant. Indeed nicotine patches seem to help me focus. Of course no one should pick up smoking; if someone wants to try this, try patches or at most vaping after you inform yourself on the danger/harm involved. I don’t recommend uninformed medical decisions, and I don’t recommend this at all if you believe you’re susceptible to addictions since nicotine is very addictive.

                                                                            1. 2

                                                                              Going through sort of the reverse thing right now. Recently diagnosed as probable ADHD. Also recently decided to try quitting smoking using pharmaceutical assistance. With bupropion and methylphenidate, my nicotine and caffeine consumption have basically cratered spontaneously. Previously I would be drinking about 8 cups of coffee/day and smoking about a pack a day. I’m still working through it all, but it seems like I’d been using excessive nicotine and caffeine for a long long time for my undiagnosed ADHD.

                                                                              1. 1

                                                                                Bupropion is a nicotine receptor antagonist, but you probably already know this.

                                                                                How’s your concentration and general well-being going with this regimen?

                                                                                1. 2

                                                                                  Yup, the bupropion was specifically for smoking cessation, not for the ADHD. The timeline was kind of interesting… I had an annual check-up and mentioned to my GP that I was interested in quitting smoking and in being assessed for ADHD. Got a referral to a pharmacist for the smoking part, and to a psychiatrist for the ADHD part. I had the appointment with the pharmacist first, who thought, based on previous smoking cessation attempts, that Zyban (bupropion) would be a good choice. The psych consult came afterwards, and he figured there shouldn’t be too much of a negative interaction between the bupropion and the methylphenidate, but said to temporarily stop the methylphenidate if I didn’t feel good. The bupropion should only be needed for another month or two (quit date is Friday!)

                                                                                  I’ve only been on the methylphenidate for about a week now, but it’s been really interesting concentration and focus-wise. It hasn’t been as acutely world-changing as some reports I’ve seen, but everything just feels… quieter. The smoking, coffee, and ADHD have some confounding effects but… I can sit and work on something for more than 45 minutes now. The confounding thing is that I would previously have to get up and either go for a smoke or pee out some of the 8 cups of coffee. Since all three are kind of being addressed at the same time, I’m not entirely sure what’s what, but I like how things are going!

                                                                                  Time will tell I suppose, but it’s pretty interesting to me that I’m 37 and only really discovered this now. Before COVID and the lockdown stuff around here, I figured I was just super scattered all the time because I was too busy doing too many things. COVID slowed everything down, and the feeling of being completely scattered didn’t go away.

                                                                              2. 2

                                                                                It’s basically another caffeine. Addictive (eventually, for most people), quite useful as a stimulant, and super-poisonous in surprisingly small doses, though doses that are nevertheless nearly impossible to give yourself by accident. But nicotine is also a pretty good bug-killer, where caffeine is not, IIRC.

                                                                          1. 2

                                                                            This looks like a fantastic addition to the PCEngines lineup! I’ve got an APU2 that I love and that has been rock solid. Also, not sure if they still do this, but when you ordered from them they used to include some local chocolate in the box :)

                                                                            1. 1

                                                                              Ditto, my APU2 is doing really well. The next time I’m involved in a small/medium business network setup, I’m going to recommend PCEngines + OpenWRT; they give me so much less headache than everything else, and because they’re x86 they should have a really long software update lifetime. Either that or a SFF computer with multiple network cards and OpenWRT.

                                                                              (Fun semi-relevant story: recently got to the bottom of a VoIP and long-lived-TCP-connection issue at a few clients’ sites. A traditional no-one-believes-the-bug-is-on-their-side problem that none of the existing companies knew how to investigate. Turns out it was a NAT implementation bug on the ADSL routers. The bug was fixed in a firmware update released a few months after the equipment was installed some ten years ago :P)

                                                                              1. 1

                                                                                Did it involve silently dropping entries from the NAT table without sending an RST to the affected endpoints? Because… I have run into that way too many times, and yet each time it’s completely baffling until I realize “ahhhhh dang it’s this nonsense again”

                                                                                1. 2

                                                                                  IIRC: The NAT tables in the router still looked OK. The endpoints still thought their connections were alive, and packets would still flow LAN->WAN, but no longer the other way around. I knew this because SSH sessions would randomly “hang”, but if you manually reconnected then all of your typing into tmux during the hang period would still be there.
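
                                                                                  (If anyone is stuck behind a similar box and can’t fix the firmware, a stopgap sketch: client-side SSH keepalives, which keep the NAT entry warm and turn silent hangs into prompt disconnects. These are standard OpenSSH client options; the specific numbers are just ones I’d pick.)

                                                                                      # ~/.ssh/config
                                                                                      Host *
                                                                                          ServerAliveInterval 60   # send an application-level keepalive every 60s
                                                                                          ServerAliveCountMax 3    # disconnect after 3 missed replies instead of hanging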

                                                                                  1. 1

                                                                                    Yikes! That’s even worse than I’d thought :(

                                                                                    1. 2

                                                                                      All the good networking problems are small and sinister :)

                                                                                      NAT is one of the things I never expected to break, so it took me a long time to get to the bottom of the problem. It felt very good to finally get rid of the strange, unexplained & arbitrary networking problems it caused (e.g. logging/reporting appliances mysteriously going offline, phone outages at other sites using different phone systems but the same ADSL router, web pages occasionally not loading properly).

                                                                                      There is nothing worse than the magical combination of “strange networking issues” and “parts of this network are controlled by other parties”. I lucked out, and the issue was in something I was able to access & fix.

                                                                            1. 1

                                                                              This is a goldmine of good advice for C++.

                                                                              I’m wondering if it ever makes sense to have a method that accepts const X* p, though (as opposed to const X* const p). What would we change that pointer to?

                                                                              1. 1

                                                                                I’m a little rusty on the details here but I think this is right: in the const X* p case, you get a copy of the pointer and could conceivably still do mutating pointer arithmetic on it. It won’t affect the caller’s copy of the pointer, but could readily result in you operating on memory you didn’t intend to. By adding the extra const, you’re putting up another guard rail that says “I’m only accessing the single object pointed at and don’t want to access other instances nearby in memory”

                                                                                I think…
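
                                                                                A minimal sketch of the difference (the names here are made up for illustration):

                                                                                    struct X { int value; };

                                                                                    // const X* p: the pointee is const, but the (local copy of the)
                                                                                    // pointer itself is not - it can be reassigned or incremented.
                                                                                    void f(const X* p) {
                                                                                        // p->value = 42;   // error: pointee is const
                                                                                        ++p;                // compiles: now points at "the next X" in memory
                                                                                    }

                                                                                    // const X* const p: neither the pointee nor the pointer can change.
                                                                                    void g(const X* const p) {
                                                                                        // ++p;             // error: p itself is const
                                                                                    }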

                                                                                1. 1

                                                                                  Exactly that :) I’m trying to think of a sane implementation of any method where I would want to modify the pointer to a const object.

                                                                                  1. 1

                                                                                    I’m super late to the game, but if you were passed an array of objects as a pointer, you could iterate through them by incrementing. I guess.

                                                                                1. 5

                                                                                  Definitely laughing. They don’t pull any punches:

                                                                                  Understanding what these systems did right and how to improve them is more important than re-hashing existing ideas in new domains compared against only the poorest of prior work.

                                                                                  1. 3

                                                                                    I remember when this paper first came out and I was definitely laughing then! I was just wrapping up my MSc thesis and several members of my research group (distributed systems, natch) were not super impressed when people started asking them questions about how their research would fit in with this framework :D

                                                                                  2. 2

                                                                                    What is the significance of the image you linked?

                                                                                    1. 6

                                                                                      It’s a reference to the movie Joker (2019). Robert De Niro (pictured) is speaking to The Joker. The original line is “Two policemen are in critical condition and you’re laughing, you’re laughing!”

                                                                                  1. 1

                                                                                    Well, that’s pretty horrifying. I’ve spent the week doing vibration testing on a UAV payload trying to, ahem, shake out any leftover bugs. This doesn’t give me any degree of confidence that I’ve tested the hardware sufficiently :)

                                                                                    1. 2

                                                                                      If it’s any consolation I’ve never seen an engineer or programmer read this and not have at least one “ohhhhhhh shit” moment with respect to their own work :)

                                                                                    1. 2

                                                                                      If you’re interested in a survey of ultra-low-cost parts that mostly have dev boards available, Jay Carlson did a fantastic job of compiling all that: https://jaycarlson.net/microcontrollers/

                                                                                      1. 3

                                                                                        I don’t have a whole lot to comment on with respect to the whole article, but the “Aligned Autonomy” section really jumped out at me as a great example/counter-example of things I’ve seen all the time with my consulting clients. Teams that make autonomous decisions (great!) without sufficient context (not great!). The author has a different article that goes into it in more detail: https://blog.thepete.net/blog/2019/02/08/mission-command-enabling-autonomous-software-teams/

                                                                                        At this point in my career, I feel like I’ve been burned enough by not asking those questions that I’ve gotten reasonably good at digging into the context of my work. I’d definitely recommend, especially to less experienced folks, really asking yourself whether you understand how your work fits into the bigger picture. If not, ask! And for people handing off work, preemptively share that context!

                                                                                        In Extreme Ownership, the author talks a fair bit about how, to him, a subordinate failing to accomplish a task (or successfully accomplishing the wrong task) is a management failure. Did the sub not understand the task? Did they understand what was asked but not the reasons why? Or did they not get how it fit into the bigger picture? etc. This feels like pretty much the same situation the author is talking about.

                                                                                        Anyway, end rant on a small part of the article!

                                                                                        1. 56

                                                                                          IMHO it’s hard to get much out of reading a codebase without necessity. Without a reason why, you won’t do it, or you won’t get much out of it without knowing what to look for.

                                                                                          1. 5

                                                                                            Yeah, this seems a bit like asking “What’s your favorite math problem?”

                                                                                            I dunno. Always liked 7+7=14 since I was a kid.

                                                                                            Codebases exist to do things. You read a codebase because you want to modify what that is, or fix it because it’s not doing the thing it’s supposed to. Ideally, my favorite codebase is the one I get value out of constantly but never have to look at. CPU microcode, maybe?

                                                                                            1. 4

                                                                                              I often find myself reading codebases when looking for examples for using a library I am working with, or to understand how you are supposed to interact with some protocol. Open source codebases can help a lot there. It’s not so much 7 + 7 = 14, but rather 7 + x + y = 23, and I don’t know how to do x or y to get 23, but there are a few common components between the math problems. Maybe one solution can help me understand another?

                                                                                              1. 2

                                                                                                I completely agree. I do the same thing.

                                                                                                When I am solving a similar problem or I’m interested in a class of problems, sometimes I find reviewing a codebase very informative. In my mind, what I’m doing is walking through the various things I might want to do and then reviewing the code structure to see how they’re doing it. It’s also bidirectional: a lot of times I see things in the structure and then wonder what sorts of behavior I might be missing.

                                                                                                I’m not saying don’t review any codebases at all. I’m simply pointing out that without context, there’s no qualifiers for one way of coding to be viewed as better or worse than any other. You take the context to your codebase review, whether explicitly or completely inside your mind.

                                                                                                There’s a place for context-free codebase reviews, of course. It’s usually in an academic setting. Everybody should walk through the GoF patterns and functional data structures. You should have experience, in a generic fashion, working through a message loop or queuing system and writing a compiler. I did and still do, but in the same way I read up on what’s going on in mRNA vaccination: for familiarity. There exist these sorts of things that might help when I need them. I do not necessarily have to learn or remember them, but I have to be able to get to them when I want. I know these coding details at a much lower level than I do biology; after all, I’m the guy who’s going to use and code them if I need them. But the real work is matching the problem context up (gradually, of course) with the various implementation systems you might want to use.

                                                                                                There are folks who are great problem-solvers that can’t code. That sucks. There are other folks who can code like the wind but are always putting some obscure yet clever chunk of stuff out and plugging it in somewhere. That also sucks. Good coders should be able to work on both sides of that technical line and move back and forth freely. I review codebases to review how that problem-solving line changed over the years of development, thinking to myself “Where did these guys do too much coding? Too little? Why are these classes or modules set up the way they are (in relation to the problem and maintaining code)?”

                                                                                                That’s the huge value you bring from reviewing codebases: more information on the story of developing inside of that domain. The rest of the coding stuff should be rote: I have a queue, I have a stack, etc. If I want to dive down to that level, start reviewing object interface strategy, perhaps, I’m still doing it inside of some context: I’m solving this problem and decided I need X, here’s a great example of X. Now, start reading and go back to reviewing what they’ve done against the problem you’re solving. Don’t be the guy who brings 4,000 lines of code to a 1 line problem. They might be great lines of code, but you’re working backwards.

                                                                                                1. 1

                                                                                                  Yeah, I end up doing this a lot for, e.g., obscure system-specific APIs. Look at projects that’d use it/GH code search, chase the ifdefs.

                                                                                                2. 2

                                                                                                  Great Picard’s Theorem, obvs. I always imagined approaching an essential singularity and seeing all infinity unfold, like a fractal flower, endlessly repeated in every step.

                                                                                                  1. 1

                                                                                                    I’d disagree. Sure, one could argue you just feed a computer what to do, but you could make a similar statement about, for example, architecture, where (very simplified) you draw what the workers should do and they do it.

                                                                                                    Does that mean that architects don’t learn from the work of other architects? I really don’t think so.

                                                                                                    But I also don’t think that “just reading” code or copying some “pattern” or “style” from others is where the value lies. It’s more that if you write code only on your own, or with a somewhat static, like-minded team, your mental constructs don’t really change, while different code bases can challenge your mental model or give you insights into a different mental/architectural model that someone else came up with.

                                                                                                    For me that’s not so different from learning different programming languages - like really learning them, not just being able to figure out what it means or doing the same thing you did before with different syntax.

                                                                                                    I am sure it’s not the same for everyone, and it surely depends on different learning styles, but I assume that most people commenting here don’t read code like they read a calculation, and I’d never recommend that people just “read some code”. It doesn’t work, just like you won’t be a programmer after just reading a book on programming.

                                                                                                    It can be a helpful way of reflecting on own programming, but very differently from most code-reviews (real ones, not some theoretical optimal code review).

                                                                                                    Another, more psychological thing is that I think everyone has seen bad code, even if it’s just some of their own code from a few years ago. Sometimes it helps motivation to come across the opposite: reading a nice code base lets you visualize a goal. The closer it is to practice the better, in my opinion. I am not so much a fan of examples or example apps, because they might not work in real-world code bases, but that’s another topic.

                                                                                                    I hope, though, that nobody feels like they need to read code when they don’t feel like it and it gives them nothing. Minds work differently, and forcing yourself to do something often seems to counteract how much is actually learned.

                                                                                                  2. 4

                                                                                                    “Mathematics is not a spectator sport” - I think the same applies to coding.

                                                                                                    1. 4

                                                                                                      Well, it varies. Many contributions end up being a grep away and only make you look at a tiny bit of the codebase. Small codebases can be easier to grasp, as can those with implementation overviews (e.g. ARCHITECTURE.md)

                                                                                                      1. 1

                                                                                                        I have to agree with this; I’ve found the most improvement comes from contribution, and having my code critiqued by others. Maybe we can s/codebases to study/codebases to contribute to/?

                                                                                                        1. 2

                                                                                                          Even if you don’t have to modify something, reading something out of a necessity to understand it makes it stick better (and more interesting) than just reading it for the sake of reading. That’s how I know more about PHP than most people want to know.

                                                                                                          1. 1

                                                                                                            Years ago, for my MSc thesis, I was building a web app profiler. “How can I get the PHP interpreter to tell me every time it enters or exits a function in user code” led to a similar level of “I know more about the internals of PHP than I would like” :D

                                                                                                      1. 1

                                                                                                        Depending on what you’re up to, I’ve been working with both the Jetson Nano ($99) and the Xavier (uhh $700?) for a project I’m working on. They’re both quite solid units, although the Nano is pretty limited (4GB RAM, pokey CPU). The Xavier is a pretty capable unit, although compile times are a bit long; there’s an 8-core ARM CPU, but it’s not screaming fast. If you’re doing ML stuff though, the Xavier has a pretty capable GPU.

                                                                                                        1. 1

                                                                                                          Have done a fair bit of experimenting with AprilTags over the last 6 months and happy to answer any questions if I can!

                                                                                                          1. 5

                                                                                                            I started a job as a PhD student, so I’m currently reading up on the scientific literature. It’s super interesting, although it’s maddening that the notation is not really standardized so far, which makes it harder to tell what’s actually being done in a paper.

                                                                                                            1. 2

                                                                                                              I’m laughing with you. I’ve been doing a fair bit of digging into existing academic work on drone flight control and the notation and variables are different all over the place. It’s wild!

                                                                                                            1. 28

                                                                                                              There’s an implied assumption in distros that old versions are stable and good. Unfortunately all packages are forced to conform to that, and it causes pain for packages that don’t fit this assumption.

                                                                                                              Not every project can afford to maintain multiple old release branches and make it possible to backport fixes.

                                                                                                              It’s super annoying for authors when users complain about bugs that have already been fixed years ago, and there’s no solution other than to tell users to stop using their distro.

                                                                                                              1. 16

                                                                                                                I wonder how much of this is the distro model being designed around… actual physical distribution. Debian in 1998 was for many an entire set of CDs, and all the little packages in it were assumed to be part of the operating system you were just slicing like a ham. It was both a freezing of the world at that point in time, and a pretense that it was all one big mass.

                                                                                                                Likewise, how much did internet distribution change the assumptions that made the distros in the first place? Are they still valid ones? I’m thinking a lot about this and what my answer would be.

                                                                                                                1. 4

                                                                                                                  I don’t think stable release cycles are tied to physical distribution. It is assumed that users of a stable distro with a release cycle of 2 years just expect things to be stable for 2 years. If they need 5 years, they find a distribution with a 5-year release cycle. Distributions are often seen as responsible for distributing old software, but for most software, users just expect stability. People not interested in stability can use Arch or Debian Unstable.

                                                                                                                  The main problem is that for some piece of software, some users may want a more recent one. Debian answers this with backports (by Debian), Ubuntu with PPA (by random people). For desktop-type applications, there are distribution-agnostic methods, like Flatpak.

                                                                                                                  Releasing more often would be a pain, as Debian (for example) would need to maintain several distributions in parallel. Currently, the maximum is two (during one year). With a release cycle of 6 months, this would mean 4 distributions simultaneously (old stable, stable, n-1 and n), like Ubuntu. We just don’t have the manpower for that (and there’s no real demand either).

                                                                                                                  1. 2

                                                                                                                    Related to this, the package versions in a stable distro are known to work together. In the OpenTTD case this probably isn’t as big of a deal, but software packages in general are known to have problems when future versions of libraries are released. When you use, e.g. Debian stable, you’re assured that everything that worked yesterday will continue to work today.

                                                                                                                  2. 1

                                                                                                                    I don’t think so. I expect things on my LTS release to stay stable for some years, and I know that they won’t be the latest and greatest. And games with multiplayer & co may just be unsuited for this. But they aren’t as relevant for stability as my file manager, login, or display manager.

                                                                                                                  3. 5

                                                                                                                    This might be slightly off topic and might sound a bit like fanboyism, but I don’t mean it that way, because I hope that others will pick it up, so it isn’t a somewhat unique feature anymore.

                                                                                                                    The BSDs for historical reasons split base and ports/packages. But it has developed into a feature, and great care is taken in all of them about what goes into and out of the base system. The base is, of course, supposed to be stable.

                                                                                                                    But then there are the ports, which are not just “everything in here is rolling release”, but more fine-grained. For projects where it makes sense, there are different versions, for example different PostgreSQL versions. So one can freely choose.

                                                                                                                    But it goes further. OpenBSD and FreeBSD also have flavors, so you also get to pick and choose for (just because it’s famous) Python.

                                                                                                                    And if you are self-compiling you get to choose different variations: say you want to build an old Postgres with LibreSSL, but with a new (supported) PostGIS - you can do so.

                                                                                                                    And on top of that, for FreeBSD and NetBSD you get to choose whether you want the stable quarterly branches of the ports tree or the latest one, with the latest versions, which are (I think largely because they are usually not modified) very stable and fit for server usage. All because you still have that stable base.
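
                                                                                                                    To make that concrete, a sketch of what choosing a version looks like on FreeBSD (the package names are illustrative and vary by release):

                                                                                                                        # several versions of the same software coexist in ports/packages
                                                                                                                        pkg search postgresql          # lists e.g. postgresql15-server, postgresql16-server, ...
                                                                                                                        pkg install postgresql15-server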

                                                                                                                    I think if I hadn’t used it for so long in very different production environments it would sound kind of messy to me, but it’s not like one constantly has to make a decision. It works out very nicely, and one doesn’t usually stumble across issues (certainly less frequently than in Debian, where I think the main issue stems from packages being modified (heavily patched), split, etc.).

                                                                                                                    It would be great to see something similar being undertaken in the Linux world. There have been quite a lot of situations where I only went with FreeBSD because of the above.

                                                                                                                    There is no technical reason for this not existing, so it very much surprises me that it doesn’t exist. There have been people using pkgsrc on Linux, but the point is not so much to bring pkgsrc itself to Linux as to bring those concepts to Linux. I think bringing in pkgsrc can be hard, because a lot of it is naturally optimized for its main platforms, and the pkgsrc distros at large were simply tiny one-person shows that never reached enough mass.

                                                                                                                    So I am wondering if I’m the only one who’d really like something like this to exist. I think something like Gentoo (or pretty much anything else, really) could still be used as a base for such an approach. Does such a project exist?

                                                                                                                    1. 1

                                                                                                                      I think it’s a bit more complicated WRT the BSDs, because FreeBSD is unifying ports/packages UX-wise but keeping the same release policy/separation. They also have the luxury of developing a stable base, whereas Linux components are disparate and separately developed. I think it’s a good thing (and a proven strategy) in the case of, say, FreeBSD, where they keep binary compatibility going, because it provides a stable base to build off of. Windows and macOS go further and put more components you’d need to rely on, like the GUI or audio, into a stable, ABI-compatible base.

                                                                                                                      1. 1

                                                                                                                        As a long-time Debian user and developer, I’d love us to move to a base/ports-like model, and have a leaner Base than what “main” is today.

                                                                                                                      2. 3

                                                                                                                        This is a fairly serious educational problem, I agree. Issues in a distro version should never be reported upstream, but to the distro.

                                                                                                                        1. 3

                                                                                                                          As an end user, what do I get out of reporting it to the distro and not the upstream, if something breaks that doesn’t seem like a downstream issue? The triage can be useful, but not enough, I’d think, to report there first.

                                                                                                                          Commercially, I do know what it’s like - I support a PHP distribution, but I think it has more merit than for say, the typical Linux distribution, because the proprietary platform we support it on isn’t well known by most PHP developers, there are necessary distribution differences, additional patches to make it work, etc. that means they get support from us - they usually pay for it though.

                                                                                                                          1. 3

                                                                                                                            You get the benefit of reporting against the version you actually run, and maybe getting the version you actually run fixed. Reporting to upstream in the best case causes the fix to go into a version you are not using, possibly for a long time or ever; in the worst (and common) case it just annoys upstream.

                                                                                                                            1. 1

                                                                                                                              True, but how likely would I be able to get a fix in that case? If a bug is fixed in 0.9.3 and Debian ships 0.9.1, they don’t usually backport fixes like that unless it’s security, because it would break the entire point of stable.

                                                                                                                              1. 1

                                                                                                                                I suppose it depends on whether the maintainer agrees it is a bug. The point of stable is to work and not break, so if something is already broken, a fix shouldn’t “break the point” - but of course this will vary by maintainer.

                                                                                                                          2. 1

                                                                                                                            It’s not an educational problem, IMO. That’s just shunting the problem to the user. It’s a UI problem; if there were some sort of standard bug-reporting platform that auto-included relevant info like distros, I don’t see why upstream devs couldn’t set an automatic rule like “bugs with Debian stable are automatically forwarded to the Debian packagers and the user is automatically sent a reply saying “hey your distro is old as eff and we recommend using something newer”.

                                                                                                                            1. 2

                                                                                                                              I mean, there is a standard bug-reporting UI for Debian (reportbug); it can be run from either the shell or as a GUI. But I agree it needs to be more prominently featured in default desktop installs.

                                                                                                                          3. 1

                                                                                                                            and there’s no solution other than to tell users to stop using their distro.

                                                                                                                            Or distribute the game/software as a static binary and tell users to update manually, or bundle an auto-updater.

                                                                                                                            1. 4

                                                                                                                              Static binaries help until you need NSS (for auth/NS) or, more realistically for a game, to get at libGL.

                                                                                                                          1. 15

                                                                                                                            I think I agree with the first person who wrote him a letter. There is a difference between finding more novel and varied examples and picking examples designed to goad your readers.

                                                                                                                            Please in the future, remember that we, the book buyers, are looking for information about using PL/SQL. I am as tired of the emp and dept tables as you are, but less distracting examples would have been more appropriate.

                                                                                                                            Everyone has a political view and sometimes that arises legitimately in technology but I think it’s just basic self-control to express your political view only where it really might help something.

                                                                                                                            1. 21

                                                                                                                              The dude’s point is that we all have a political perspective, and we’re expressing it, either explicitly or implicitly. He chose to express his explicitly through the examples in his textbook.

                                                                                                                              If you write a database text and fill it with department / employee type examples, shopping examples, and so forth, then you are implicitly promoting a capitalist world view, the same world view that does not bat an eye when using degrading terms like “human resources”. At least here in the US, this sort of thing goes unquestioned, because of the dominant ideology.

                                                                                                                              1. 5

                                                                                                                                Yes, it’s implicit, it’s unquestioned and nobody bats an eye - and that’s why it makes for better examples.

                                                                                                                                Examples require the use of social territory. That territory can be either unquestioned good or questioned territory. When choosing examples in questioned territory, you engage in active cultural participation; when choosing examples in unquestioned territory, you engage in passive cultural participation. Examples should engage in passive participation, because that way they are relatable to the greatest number of readers.

                                                                                                                                (You can also use unquestioned bad territory, such as defining a database schema to count Jews in the Holocaust for the Nazis, but then nobody will buy your book.)

                                                                                                                                1. 9

                                                                                                                                  I don’t see why “nobody bats an eye” is a desirable quality for examples or why “active cultural participation” is a bad thing.

                                                                                                                                  It’s not at all clear to me that the examples given are not relatable or that “relatable to the greatest number of readers” should even be a core value. Perhaps provocative examples engage readers more and cause them to think about the examples more.

                                                                                                                                  1. 5

                                                                                                                                    I would be curious how you’d feel if it were something like sorting countries by IQ.

                                                                                                                                    Would you be happy to be engaged, or be distracted by thinking about testing methodology and things like that?

                                                                                                                                    1. 3

                                                                                                                                      I’d have to see it in context to find out how I’d react. IQ is strongly related to class and similarity to the people who devised the test, and such a table might be part of a demonstration of that.

                                                                                                                                      Certainly if an example just seemed pointlessly offensive I would think less of the author and maybe choose a different textbook.

                                                                                                                                      But I think equating a hypothetical very racist example with some examples that are a bit left of centre in the USA is unfair.

                                                                                                                                      1. 3

                                                                                                                                        A substantial amount of political dispute in the English speaking world is precisely about what speech counts as racist and therefore legitimately stigmatizable. Using data that implies that cognitive capacity is meaningfully different between different countries of the world in a programming example constitutes a political assertion that this idea is not stigmatizable; in the same way that the article’s example about a war criminal database constitutes a political assertion about how people should see Henry Kissinger.

                                                                                                                                  2. 6

                                                                                                                                    But now you’ve thought about it, so it has become active participation. From now on you are obliged to make sure your examples are completely apolitical.

                                                                                                                                    Consider that engineers have a code of ethics: https://en.wikipedia.org/wiki/Order_of_the_Engineer

                                                                                                                                    If your work includes producing examples, they should “serve humanity”. I cannot in good conscience make examples that promote capitalism, but giving examples that might make people think about world affairs would be okay.

                                                                                                                                    1. 4

                                                                                                                                      Yes, it’s implicit, it’s unquestioned and nobody bats an eye - and that’s why it makes for better examples.

                                                                                                                                      That assumes a lot from the readership. For a mundane, apolitical example, I submit children to this discussion. For most of my childhood, due to various reasons, I only had access to a Pentium. It didn’t have a network connection, and I eventually installed Linux on it. Because Linux made it so easy to code, I would try to check out books from the library and learn how to write code, but all the examples were completely unrelatable to me as a pre-teen. Employee this, business that, I realized even at the time that the examples were meant to be highly relatable to practitioners, but I honestly found math much more interesting than these soulless books because I was unable to relate to them in any way. That was one of the big reasons I started out coding by trying to write games; game programming books felt much more relatable to me as a kid who read a lot of books and played video games than these soulless books about employee hierarchies and recipes.

                                                                                                                                      Also, it’s important to keep in mind that the conditions that make something unpolitical are pretty restricted in context. Someone growing up in a developing country or a country with a very different economic ideology will probably find these staid business examples just as unrelatable as children. International editions of textbooks frequently do change examples for exactly this reason.

                                                                                                                                      1. 1

                                                                                                                                        I would try to check out books from the library and learn how to write code, but all the examples were completely unrelatable to me as a pre-teen. Employee this, business that, I realized even at the time that the examples were meant to be highly relatable to practitioners, but I honestly found math much more interesting than these soulless books because I was unable to relate to them in any way.

                                                                                                                                        I also started learning at a young age, and I find this attitude so alien.

                                                                                                                                  3. 5

                                                                                                                                    Everyone has a political view and sometimes that arises legitimately in technology but I think it’s just basic self-control to express your political view only where it really might help something.

                                                                                                                                    I am totally with you on this, and do my best to keep my political perspectives away from the technology work I do as much as I can. I have worked on projects with ethical/political considerations (whether someone might consider a few of these projects ethical depends on their personal political leanings.) Definitely a touchy subject.

                                                                                                                                    That being said, I have a really hard time empathizing with the readers who wrote in to complain that the examples are too distracting. I believe a database book ought to have concrete examples while teaching the abstract concepts (e.g. it’s a book about writing databases in general, not “how to keep track of war criminals”). My own personal reaction to the examples discussed is “ok, whether I agree with the premise or not, these examples illustrate interesting abstract concepts.” There are lots of systems in this world whose existence I fundamentally disagree with, but where I’d also love to pop the hood and figure out how they work!

                                                                                                                                    In fact, as I sat here thinking about this, I started wondering if, for me, this style of examples might actually help cement specific concepts with easy mental look-up keys; I can imagine coming to a database design problem and thinking “oh, this is like the Kissinger problem.”

                                                                                                                                  1. 5

                                                                                                                                    This is a fantastic talk! The idea that robust systems are inherently distributed systems is such a simple and obvious idea in hindsight. Distributed systems are difficult, and I have had upper managers claim that we need “more robust” software and less downtime, yet refuse to invest in projects which involve distributed algorithms or systems (have to keep that MVP!). I think Armstrong was right that in order to really build a robust system we need to design for millions of users, even if we only expect thousands (to start), otherwise the design is going to be wrong. Of course this is counter-intuitive to modern Scrum and MVPs.

                                                                                                                                    Additionally, there is so much about Erlang/OTP/BEAM that seems so cutting-edge, yet the technology has been around for a while. It will always be a wonder to me that Kubernetes has caught on (with the absolutely crazy technology stack surrounding it) while Erlang has withered (despite having more features), although Elixir has definitely been gaining steam recently. Having used Kubernetes at the past two companies I’ve been at, I’ve found it nothing but complicated and error-prone, but I guess that is just much of modern development.

                                                                                                                                    I have also been learning TLA+ on the side (partially to just have a leg to stand on when arguing that a quick and sloppy design is going to have faults when we scale up, and we can’t just patch them out), and I think there are so many ideas that Lamport has in the writing of the TLA+ Book that mirror Armstrong’s thoughts. It is really unfortunate that software has figured out all of these things already but for some reason nobody is using any of this knowledge really. It is rare to find systems that are actually designed rather than just thrown together, and that will never lead to robust systems.

                                                                                                                                    Finally, I think this is where one of Rust’s main features is an under-appreciated super-power. Distributed systems are hard, because consistency is hard. Rust being able to have compile-time checks for data-races is huge in this respect because it allows us to develop small-scale distributed systems with ease. I think some of the projects bringing OTP ideas to Rust (Bastion and Ludicrous are two that come to mind) have the potential to build completely bullet-proof solutions, with the error-robustness of Erlang and the individual-component robustness of Rust.

                                                                                                                                    1. 4

                                                                                                                                      No. Rust prevents data races, not race conditions. It is very important to note that Rust will not protect you from the general race condition case. In distributed systems, you’ll be battling race conditions, which are incredibly hard to identify and debug. It is an open question whether the complexity of Rust will get in the way of debugging a race condition (Erlang and Elixir are fantastic for debugging race conditions because they are simple, and there is very little to get in your way of understanding and debugging them).

                                                                                                                                      1. 2

                                                                                                                                        The parent post says rust has compile time checks for data races and makes no claim about race conditions. Did I miss something?

                                                                                                                                        1. 2

                                                                                                                                          When you are working with distributed systems, it’s race conditions you worry about, not data races. Misunderstanding the distinction is common.

                                                                                                                                          > Distributed systems are hard because consistency is hard. Rust’s compile-time checks for data races are huge in this respect, because they let us develop small-scale distributed systems with ease.

                                                                                                                                        2. 1

                                                                                                                                          Yes, Rust prevents data races, which is (as mentioned by another poster) what I wrote. However, Rust’s type system and ownership model do make race conditions rarer in my experience, since they require data passed between threads to be explicitly wrapped in an Arc and potentially a Mutex. It is also generally easier to use a library such as Rayon or Crossbeam to handle simple multithreaded cases, or to just use message passing.

                                                                                                                                          Additionally, most race conditions are caused by data races, so yes, Rust prevents a certain subset of race conditions but not all of them. It is no less a superpower.
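                                                                                                                                          To illustrate the distinction, here is a small sketch (my example, nobody’s production code) of a race condition that is not a data race: every access is synchronized, so Rust compiles it happily, but the check and the act happen under two separate lock acquisitions.

                                                                                                                                          ```rust
                                                                                                                                          use std::sync::{Arc, Mutex};
                                                                                                                                          use std::thread;

                                                                                                                                          // No data race: every access to `balance` goes through the Mutex.
                                                                                                                                          // Still a race condition: another thread can run between the two locks.
                                                                                                                                          fn withdraw(balance: &Mutex<i64>, amount: i64) {
                                                                                                                                              let current = *balance.lock().unwrap(); // check (guard dropped here)
                                                                                                                                              if current >= amount {
                                                                                                                                                  *balance.lock().unwrap() -= amount; // act (a second, separate lock)
                                                                                                                                              }
                                                                                                                                          }

                                                                                                                                          fn main() {
                                                                                                                                              let balance = Arc::new(Mutex::new(100));
                                                                                                                                              let handles: Vec<_> = (0..2)
                                                                                                                                                  .map(|_| {
                                                                                                                                                      let balance = Arc::clone(&balance);
                                                                                                                                                      thread::spawn(move || withdraw(&balance, 100))
                                                                                                                                                  })
                                                                                                                                                  .collect();
                                                                                                                                              for handle in handles {
                                                                                                                                                  handle.join().unwrap();
                                                                                                                                              }
                                                                                                                                              // Can print -100 if the threads interleave between check and act.
                                                                                                                                              println!("balance = {}", *balance.lock().unwrap());
                                                                                                                                          }
                                                                                                                                          ```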

                                                                                                                                          > It is an open question whether the complexity of Rust will get in the way of debugging a race condition (Erlang and Elixir are fantastic for debugging race conditions because they are simple, and there is very little to get in the way of understanding and debugging them).

                                                                                                                                          I don’t understand this point. Rust can behave just like Erlang and Elixir (in the single-server use case, which is what I was talking about) via message-passing primitives. Do you have any sources for Rust’s complexity being an open question in this case? I am unaware of any argument that Rust’s affine type system is cause for concern in this situation; in fact, it is usually the opposite.
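                                                                                                                                          For reference, this is the style I mean: a sketch (names and message types are mine, purely illustrative) of an Erlang-flavoured worker in plain std Rust, where one thread owns its state and the outside world can only send it messages.

                                                                                                                                          ```rust
                                                                                                                                          use std::sync::mpsc;
                                                                                                                                          use std::thread;

                                                                                                                                          // Messages the worker understands, loosely in the OTP style.
                                                                                                                                          enum Msg {
                                                                                                                                              Add(i64),
                                                                                                                                              Get(mpsc::Sender<i64>), // reply channel, like the `from` in a gen_server call
                                                                                                                                              Stop,
                                                                                                                                          }

                                                                                                                                          fn main() {
                                                                                                                                              let (tx, rx) = mpsc::channel();

                                                                                                                                              // The worker owns its state; no shared memory, only messages.
                                                                                                                                              let worker = thread::spawn(move || {
                                                                                                                                                  let mut total = 0;
                                                                                                                                                  while let Ok(msg) = rx.recv() {
                                                                                                                                                      match msg {
                                                                                                                                                          Msg::Add(n) => total += n,
                                                                                                                                                          Msg::Get(reply) => {
                                                                                                                                                              let _ = reply.send(total);
                                                                                                                                                          }
                                                                                                                                                          Msg::Stop => break,
                                                                                                                                                      }
                                                                                                                                                  }
                                                                                                                                              });

                                                                                                                                              tx.send(Msg::Add(2)).unwrap();
                                                                                                                                              tx.send(Msg::Add(40)).unwrap();
                                                                                                                                              let (reply_tx, reply_rx) = mpsc::channel();
                                                                                                                                              tx.send(Msg::Get(reply_tx)).unwrap();
                                                                                                                                              println!("total = {}", reply_rx.recv().unwrap());
                                                                                                                                              tx.send(Msg::Stop).unwrap();
                                                                                                                                              worker.join().unwrap();
                                                                                                                                          }
                                                                                                                                          ```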

                                                                                                                                          1. 2

                                                                                                                                            “most race conditions are caused by data races”

                                                                                                                                            What definition of “most” are you using here?

                                                                                                                                            Many people writing distributed systems are using copy or copy-on-write semantics and will never encounter a data race.

                                                                                                                                            Do I have any sources? Yes: I debug distributed systems, I know what tools I use, and ninjaing them into and out of Rust is not going to be ergonomic.

                                                                                                                                            1. 5

                                                                                                                                              Just some quick feedback/level-setting: I feel like this conversation is far more hostile and debate-like than I am interested in or was hoping for. You seem to have very strong opinions, and specifically anti-Rust opinions, so let’s just say I said Ada + SPARK (or whatever language with an affine type system you don’t have a grudge against).

                                                                                                                                              The point I was making is that an affine type system can prevent data races at compile time, and data races are common in multi-threaded code. OTP avoids data races by using message passing, but that is not a good fit for every problem. So I think an extremely powerful solution would be an affine-type-powered system for the code on each server (no data races) with an OTP layer for server-to-server communication (the distributed system). This potentially gets the best of both worlds: the flexibility to share memory within a server, with OTP robustness in the large-scale system.

                                                                                                                                              I think this is a cool idea and concept, and you may disagree. That is fine, but let’s keep things civil and avoid attacking random things (especially points that I am not making!).

                                                                                                                                              1. 2

                                                                                                                                                Not the parent:

                                                                                                                                                In the context of a message-passing system, I do not think affine/linear types hurt you very much, but a tracing GC does help you, since you can share immutable references without worrying about who has to free them. Linear languages can do this with reference-counted objects (maintaining referential transparency, because the objects have to be immutable, so there are no semantics issues), but reference counting is slow.

                                                                                                                                                Since the context is distributed systems, the network is already going to be unreliable, so the latency hit from the GC is not a liability.
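                                                                                                                                                A quick sketch of the reference-counted immutable sharing being described, in Rust (my illustration; the data and thread count are made up): the allocation happens once, cloning only bumps an atomic counter, and the last owner to drop frees it.

                                                                                                                                                ```rust
                                                                                                                                                use std::sync::Arc;
                                                                                                                                                use std::thread;

                                                                                                                                                fn main() {
                                                                                                                                                    // One immutable allocation, shared by reference count; no tracing GC,
                                                                                                                                                    // and no question of who frees it: the last Arc dropped does.
                                                                                                                                                    let nodes: Arc<Vec<String>> =
                                                                                                                                                        Arc::new(vec!["node-a".to_string(), "node-b".to_string()]);

                                                                                                                                                    let handles: Vec<_> = (0..3)
                                                                                                                                                        .map(|i| {
                                                                                                                                                            let nodes = Arc::clone(&nodes); // cheap: atomic increment, no copy
                                                                                                                                                            thread::spawn(move || {
                                                                                                                                                                println!("thread {} sees {} nodes", i, nodes.len());
                                                                                                                                                            })
                                                                                                                                                        })
                                                                                                                                                        .collect();
                                                                                                                                                    for handle in handles {
                                                                                                                                                        handle.join().unwrap();
                                                                                                                                                    }
                                                                                                                                                }
                                                                                                                                                ```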

                                                                                                                                                1. 1

                                                                                                                                                  Interesting point, although I don’t know if I necessarily agree. I think affine/linear types and GC are actually orthogonal to each other; I imagine it’s possible for a language to have both (although I am unaware of any that exist!). I don’t fully understand the idea that affine/linear types would hurt you in a multi-threaded context, as I have found them to be just the opposite.

                                                                                                                                                  I think you are right that reference-counted immutable objects will be slightly slower than a tracing GC, but I imagine the overhead is quickly made up for. And you’re right: since it’s a distributed system, the performance of each individual component matters less, and a language like Rust is mainly useful in this context for correctness.

                                                                                                                                                2. 1

                                                                                                                                                  Can you give an example of a problem where message passing is not well suited? My personal experience has been that systems either move toward a message passing architecture or become unwieldy to maintain, but I readily admit that I work in a peculiar domain (fintech).

                                                                                                                                                  1. 2

                                                                                                                                                    I have one, although only halfway. I work on a system that does relatively high-bandwidth, low-latency live image processing on a semi-embedded system (NVIDIA Xavier); we’re talking roughly 500 MB/s of throughput. An image comes in from the camera and gets distributed to multiple systems that process it in parallel, and the output from those either goes down the chain for further processing or to persistence. What we settled on was message passing, but with heap allocation for the actual image buffers. The metadata structs get copied into the mailbox queues for each processor, but each just holds a std::shared_ptr to the actual buffer (reference-counted and auto-freed).

                                                                                                                                                    In Erlang/Elixir, there’s no real shared heap. If we wanted to build a similar system there, the images would be copied into each process’s heap and our memory-bandwidth usage would go way, way up. I thought about it because I absolutely love Elixir, but I ended up duplicating a “bare minimum OTP” in C++ for the performance.
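                                                                                                                                                    For what it’s worth, the same copy-the-metadata, share-the-buffer pattern can be sketched in Rust rather than C++ (the Frame type, stage names, and sizes here are all made up for illustration), with Arc playing the role of std::shared_ptr:

                                                                                                                                                    ```rust
                                                                                                                                                    use std::sync::{mpsc, Arc};
                                                                                                                                                    use std::thread;

                                                                                                                                                    // Small metadata is cloned into each mailbox; the big image buffer
                                                                                                                                                    // is shared via Arc (the std::shared_ptr equivalent).
                                                                                                                                                    #[derive(Clone)]
                                                                                                                                                    struct Frame {
                                                                                                                                                        timestamp_us: u64,
                                                                                                                                                        pixels: Arc<Vec<u8>>, // refcounted, freed when the last stage drops it
                                                                                                                                                    }

                                                                                                                                                    fn main() {
                                                                                                                                                        let (tx_a, rx_a) = mpsc::channel::<Frame>();
                                                                                                                                                        let (tx_b, rx_b) = mpsc::channel::<Frame>();

                                                                                                                                                        let stage_a = thread::spawn(move || {
                                                                                                                                                            for f in rx_a {
                                                                                                                                                                println!("stage A: {} bytes at t={}", f.pixels.len(), f.timestamp_us);
                                                                                                                                                            }
                                                                                                                                                        });
                                                                                                                                                        let stage_b = thread::spawn(move || {
                                                                                                                                                            for f in rx_b {
                                                                                                                                                                println!("stage B: {} bytes at t={}", f.pixels.len(), f.timestamp_us);
                                                                                                                                                            }
                                                                                                                                                        });

                                                                                                                                                        // "Camera": allocate the buffer once, fan the frame out to both stages.
                                                                                                                                                        let frame = Frame { timestamp_us: 1, pixels: Arc::new(vec![0u8; 1024 * 1024]) };
                                                                                                                                                        tx_a.send(frame.clone()).unwrap(); // clones metadata, bumps the refcount
                                                                                                                                                        tx_b.send(frame).unwrap();
                                                                                                                                                        drop(tx_a); // hang up so the stage loops terminate
                                                                                                                                                        drop(tx_b);
                                                                                                                                                        stage_a.join().unwrap();
                                                                                                                                                        stage_b.join().unwrap();
                                                                                                                                                    }
                                                                                                                                                    ```

                                                                                                                                                    The megabyte buffer is allocated once and every mailbox gets only a cheap metadata copy, which is exactly what kept our memory bandwidth in check.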

                                                                                                                                                    1. 2

                                                                                                                                                      Binaries over 64 bytes in size are allocated on a shared, reference-counted heap, and only a reference to them is copied between processes: https://medium.com/@mentels/a-short-guide-to-refc-binaries-f13f9029f6e2

                                                                                                                                                      1. 2

                                                                                                                                                        Hey, that’s really cool! I had no idea those were a thing! Thanks!

                                                                                                                                                      2. 1

                                                                                                                                                        You could have created a reference, stashed the binary once in an ETS table, and passed the reference around.

                                                                                                                                                      3. 1

                                                                                                                                                        It is a little tricky, because message passing and shared memory can simulate each other, so there isn’t a situation where only one can be used. However, in my understanding shared memory is in general faster, with lower overhead, and in certain situations that is desirable (although there was a recent article arguing that shared memory can actually be slower due to cache misses, since on every update each CPU has to refresh its L1 cache).

                                                                                                                                                        One instance I ran into recently was a parallel computation where shared memory was used to cache outputs. Since the individual jobs were long-lived, there was little chance of contention, and the shared cache served as a memoization table. This could have been done with message passing, but shared memory was much simpler to implement.

                                                                                                                                                        I agree in general that message passing should be preferred (especially in languages without affine types). Shared memory is more of a niche solution, although it is unfortunately more widely used in my experience, since not everyone is on the message-passing boat.

                                                                                                                                              2. 4

                                                                                                                                                I think a good explanation is that K8s lets you take concepts and languages you’re already familiar with and build a distributed system out of them, while Erlang is distributed programming built from first principles. While I would argue that the latter is superior in many ways (although I’m heavily biased; I really like Erlang), I also see that “forget Python and have your engineering staff learn this Swedish programming language from the ’80s” is a hard sell.

                                                                                                                                                1. 2

                                                                                                                                                  You’re right, and the ideas behind K8s I think make sense. I mainly take issue with the sheer complexity of it all. Erlang/OTP has done it right by making building distributed systems extremely accessible (barring learning Erlang or Elixir), while K8s has so much complexity and bloat it makes the problems seem much more complicated than I think they are.

                                                                                                                                                  I always think of the WhatsApp situation, where it was something like 35 (?) engineers serving millions of users. K8s is nowhere close to replicating that per-engineer efficiency; you basically need 10 engineers just to run and configure K8s!

                                                                                                                                              1. 3

                                                                                                                                                I saw the headline and started thinking about it some before reading the article… only to discover that I’ve used strace on all the problems they’ve listed. Fantastic list!