Threads for ignacy

  1. 3

    This is definitely a rant, but I think it is something a lot of people have felt to some degree.

    I don’t buy all of what they are selling though. I happen to like JSON and the term API seems to mean something where I am.

    1. 6

      Yeah, I’m wondering what the author’s prescriptions would be based on these critiques. Does the author think we should use CSV or XML instead of JSON? Or do they like JSON but want to criticize it for being a “glorified CSV” anyways? I don’t understand the point.

      And they say APIs are for people who are “idiots” and “cannot code”. Does the author think we should somehow stop using “APIs”? What the fuck does that even mean?

      I think you could write a well-reasoned argument for the problem with managers and APIs; maybe you could write about how managers may think using an external web API is a panacea, how they may be unknowledgeable about the short-term costs in terms of development time it might take to integrate with a web API or the long-term costs of making the product depend on a third party’s services. You could even fit that argument in a rant format if you wanted. But “you are an idiot, you cannot code, here, use this line here, and when it runs, it will retrieve something from a server somewhere” is so far away from that it’s not even funny. This strikes me as a really bad rant by someone who doesn’t actually understand what they’re criticizing.

      1. 2

        The article is obviously a rant and some of the wording is a bit too much, with that said:

        I think the point is that maybe we didn’t need JSON and XML was just fine, or at least that JSON did not fix all the UX issues and in some cases introduced new ones. Now that we have JSON we might as well use it, but that is not what the article is about.

        If we look at the history of configuration languages, let’s say it goes XML -> JSON -> YAML -> TOML (the exact transitions don’t matter - I want to show there were a lot of them), I think it is fair to say that they are all bad in some way. For example, XML was very wordy, but since it was all S-expression-like and editors were smart about it, it was easy to get it right. YAML is sometimes hard to read because (at least for me) even with editor guides it is difficult to figure out the indentation, etc.
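        A concrete example of the kind of UX surprise each new format brought with it is YAML’s well-known “Norway problem”: under YAML 1.1 scalar rules, unquoted `yes`/`no`/`on`/`off` are resolved as booleans. A small sketch using Ruby’s stdlib YAML parser (Psych; the data here is made up):

        ```ruby
        require 'yaml'

        # Under YAML 1.1 scalar resolution, the unquoted country code "NO"
        # is parsed as the boolean false, not as the string "NO".
        doc = <<~YAML
          countries:
            - DE
            - FR
            - NO
        YAML

        # The last element comes back as false instead of "NO".
        p YAML.load(doc)
        ```

        Quoting the scalar (`"NO"`) avoids this, but that is exactly the kind of rule a reader has to know in advance.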

        So if all configuration languages have UX problems, why create new ones? The article claims most of this work is done for the benefit of developers, who were able to spend their time coding new standards, new libraries, etc. instead of solving the original problems.

        How should the file format problem be solved? I don’t know, to be honest, but I agree that what we needed from new formats was proof that they actually made things better, before they were applied everywhere at huge cost.

        In the same way, the author obviously didn’t say we should not use APIs. The thing is that before “APIs”, programs could communicate through interfaces as well. Most of them could be called APIs, and maybe some were; it does not matter. What matters is that at some point “API” became a word, used by business types as a nice abstract idea that sells, and by developers, again, as an abstract idea that is easy to use: “we need to provide this API”, “we need to support this API”.

        I think the argument in the article is the one behind release notes like “new export format” or “full support for new Gesture API”. Maybe the end user does not care which format we use to export their data if exporting is no faster than it was two years ago? Maybe they do not care that we support the latest Swipe Gesture API if the app loads slower than it did a year ago?

      2. 3

        but I think it is something a lot of people have felt to some degree.

        I agree, but I’m not sure that’s a sufficient bar to be worth posting in itself.

        For instance, if I focus specifically on the Firefox-related part of the rant, I understand the frustration (and it’s a common sentiment) but there’s zero actual investigation into why the extensions system had to be replaced. And frankly, the reason is underwhelming: the XUL-based extensions were about as secure as permitting users to install arbitrary kernel modules, and as such were insecure by design.

        If we agree that people ought to be able to do banking over the web in Firefox, then it’s a foregone conclusion that XUL-based extensions had to go.

        Also, based on this:

        When I go to a restaurant, I don’t want to know what the staff is doing with my food. I’m paying for the service and ignorance. And I expect the same from software. I’m paying, I’m the boss. It’s time the software industry started serving its boss, the user.

        Cheers.

        I expect that if banking wasn’t available on Firefox (or was notoriously insecure), this guy would be outraged at why such a primal need wasn’t fulfilled. So honestly I don’t see any solution here when this guy 1) doesn’t want to hear what the actual problems are, and 2) isn’t satisfied with any of his options. There is literally no way of satisfying this person.


        But circling back to your comment: if it’s something that a lot of people have felt, that’s a topic worth discussing, if only to check whether there’s a method of fixing it. I think a more useful approach would be for someone to explicitly start a discussion in the form of “A lot of people feel X. What is the cause of this, and what could be done about it?”, instead of trying to discuss the drivel in OP’s link.

        1. 2

          This reminds me of the thing Chrome was trying to do with extensions a few years ago, specifically involving ad blocking. I don’t really know what came of that, but I think the argument for it was made on the basis of performance and security.

          You could make the argument that letting a website do whatever it wants with javascript isn’t a good idea.

      1. 5

        In general, this was an interesting read, but because I think it blames the technology a bit too much I’d like to point out that:

        Before extracting Quiz Engine MySQL queries from our Rails service, we first needed to know where those queries were being made. As we discussed above this wasn’t obvious from reading the code.

        This is probably not what most people would do here, as any kind of APM tool would clearly show you which query is executed where (among other things). For deeper investigations, there are things like rack-mini-profiler, etc.

        To find the MySQL queries themself, we built some tooling: we monkey-patched ActiveRecord to warn whenever an unknown read or write was made against one of the tables containing Quiz Engine data.

        Even if you don’t use APM, there is no need for any monkey-patching: you can simply subscribe to the query feed with the instrumentation API https://guides.rubyonrails.org/active_support_instrumentation.html#active-record
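        For example, a minimal sketch of such a subscription (the table names are made up; in a Rails app, ActiveRecord itself emits the `sql.active_record` events):

        ```ruby
        require 'active_support/notifications'

        # Hypothetical Quiz Engine tables to watch for.
        QUIZ_TABLES = %w[quizzes quiz_questions].freeze

        # Subscribe to every SQL query ActiveRecord reports and warn when
        # one touches a Quiz Engine table -- no monkey-patching required.
        ActiveSupport::Notifications.subscribe('sql.active_record') do |_name, _start, _finish, _id, payload|
          sql = payload[:sql].to_s
          if QUIZ_TABLES.any? { |table| sql.include?(table) }
            warn "Quiz Engine query detected: #{sql}"
          end
        end
        ```

        The subscriber could just as well report to a bug tracker or raise in the test environment instead of calling `warn`.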

        Quiz Engine data from single-database MySQL to something horizontally scalable

        You don’t provide the numbers, so I can’t say anything about the scale you are dealing with, but in general I wouldn’t want anyone reading this to think MySQL isn’t very scalable, because it is, horizontally and in other directions as well.

        1. 7

          For anyone else wondering, APM is an abbreviation for Application Performance Monitoring (or Management). It’s a generic term, not Ruby-specific.

          1. 6

            I think some of the “blame the tooling” comes from how starkly different it feels for us to use Haskell versus Ruby + Rails.

            With Rails, it’s particularly easy to get off the ground – batteries included! But as the codebase grows in complexity, it becomes harder and harder to feel confident that our changes all fit together correctly. Refactors are something we do very carefully; we need information from tests, APM, etc., which means legacy code becomes increasingly hard to maintain.

            With Haskell, it’s particularly hard to get off the ground (we had to make so many decisions! Which libraries should we use? How do we fit them together? How do we deploy them? Etc.). But as our codebase has grown, it’s remained relatively straightforward and safe to do refactors, even in code where we have less familiarity. We have a high degree of confidence, before we deploy, that the code does the thing we want it to do. As the project grows in complexity, the APIs tend to be easier to massage in the direction we want, rather than avoiding improvements because of some kind of brittleness / fear of regressions / fighting with the test suite.

            For those who haven’t written a lot of code in statically typed ML languages like Elm, F#, or Haskell, the experience of “if it compiles, it works” feels unreal. My experience with compiled languages before Elm was with C++ and Java, neither of whose compilers felt helpful. It’s been a learning experience adopting & embracing Elm, then Haskell.

            1. 2

              This is probably not what most people would do here, as any kind of APM tool would clearly show you which query is executed where (among other things). For deeper investigations, there are things like rack-mini-profiler, etc.

              I agree this information can also be found through monitoring, and we did rely on our APM quite a bit throughout this work (though it is not mentioned in the blog post), for example to see whether certain code paths were dead.

              A benefit of the monkey-patch approach, I think, was that it was easier to interact with programmatically. For example: we made our test suite fail if it ran a query against a Quiz Engine table, and sent a warning to our bug tracker (Bugsnag) if such a query ran in staging or production (later we would throw in that case too).

              Didn’t know about the AR feed. That looks like it would have been a great alternative to the monkey-patch.

              Regardless, our criticism here isn’t really about the Rails tooling available to dig for information; rather, we would have liked not to need to dig so much to know where queries were happening, i.e. for that to be clear from reading the code.

              1. 2

                What APM tools did you use to give what info/data and what didn’t they provide that you needed to use other tools to fill the gap for?

                1. 1

                  What APM tools did you use to give what info/data and what didn’t they provide that you needed to use other tools to fill the gap for?

                  We primarily use NewRelic in our Rails codebase and Honeycomb in our Haskell codebase.

                  NewRelic is a huge product, and I bet we could have gotten more use out of NRQL to find liveness / deadness, but we didn’t.

                  We used NewRelic extensively to find dead code paths, by instrumenting code paths we thought were dead and seeing whether we saw any usage in production.

                  For finding every last query, we wanted some clear documentation in the code of where queries were and where they weren’t. NewRelic likely could have provided the “where”, but our Ruby tooling let us track the progress of slicing out queries. The workflow looked like this:

                  • Disable queries in a part of the site in dev (this would usually be at the root of an Action)
                  • Ensure a test hits the part of the site with the disabled queries
                  • Decorate all the allowed/known queries to get the test passing
                  • Deploy, and see if we saw any queries being run in the disabled scope
                    • if we do, write another test to ensure we hit the un-covered code path. Decorate some more.

                  It looked something like this:

                  SideEffects.deny do
                    # queries here are denied
                    data = SideEffects.allow { Quiz.find(id) } # this query is allowed
                  end
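
                  A minimal pure-Ruby sketch of how a deny/allow guard like this could work (the module name is borrowed from the snippet above, but the internals here are an assumption, not the actual implementation, which hooked into ActiveRecord):

                  ```ruby
                  module SideEffects
                    class DeniedQuery < StandardError; end

                    # Track the deny/allow state per thread so nested blocks
                    # and parallel test workers don't interfere with each other.
                    def self.with_state(denied)
                      previous = Thread.current[:side_effects_denied]
                      Thread.current[:side_effects_denied] = denied
                      yield
                    ensure
                      Thread.current[:side_effects_denied] = previous
                    end

                    def self.deny(&block)
                      with_state(true, &block)
                    end

                    def self.allow(&block)
                      with_state(false, &block)
                    end

                    # Called from the database layer before running a query.
                    def self.check!(sql)
                      raise DeniedQuery, sql if Thread.current[:side_effects_denied]
                    end
                  end
                  ```

                  The real version would call something like `SideEffects.check!(sql)` from wherever queries are executed, so a denied query raises in tests and can merely warn in production.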
                  
            1. 8

              I recently moved from cold northern Europe to Thailand, and while I was a big dark-theme advocate and used dark themes everywhere, I did a complete 180° here. It’s just so bright here, and it’s so nice to work in brighter environments. I’m even waking up early, at 6am, to enjoy working outside on the balcony. Even that early in the morning, and with modern screens, it’s still very hard to clearly see and read with a dark theme. Clearly a light theme is the way to go here in Thailand.

              What I’m getting at with my anecdote is that this seems to be an issue of environments. I think all of the pros/cons can flip right over when the screen is moved somewhere else. Even with my love of the light Solarized theme, I’d probably switch right back to a dark theme if I had to move back to a cold, northern-European winter.

              1. 4

                Clearly a light theme is the way to go here in Thailand.

                Conversely, due to the heat (edit:) and humidity it’s entirely possible someone in Thailand will work inside with less natural light.

                Geographic location likely has very little impact on whether a light or dark background produces less eye strain. The work area level environment has a lot of impact on it. You could be working in a cloudy city and I’d imagine that working from a balcony would still be too bright for dark mode to be “better”.

                1. 3

                  What I’m getting at with my anecdote is that this seems to be an issue of environments.

                  This seems to be my experience as well. In the winter months I’m perfectly happy with dark themes, but in the summer, especially in the morning, or when I’m working from coffee shops, I just need light mode. One other thing that seems to be a factor is what I’m working on: for example, if I have a dark theme in the editor and the terminal, but no way of changing it in other windows that I also need to use (like the browser or a PDF reader), then it’s awkward to switch between dark and light, and I just prefer to use light mode everywhere.