Threads for adamshaylor

    1. 9

      I would be genuinely curious to know if anyone here knows of a decent European cloud provider. I’m not going to name names, but there is a prominent one that frequently takes down whole regions for days and doesn’t count it toward their downtime. Another prominent one lost a whole datacenter to a fire because it had inadequate fire suppression.

      1. 12

        I keep hearing that Lidl (Schwarz) is building a more serious cloud, but I have never tried it nor met anyone who has: https://www.stackit.de

        1. 9

          You may be interested in the author’s thoughts. In general, Europe has nothing like an Amazon Web Services; Europe does have “Scaleway, OVH, Hetzner, Leaseweb, Contabo and ionos”, but those are not really competitors to the big clouds.

          That said, I strongly feel that hosting a mail server isn’t actually beyond the collective power of the European continent.

          1. 6

            Hetzner is a long-time name among server/VPS providers, but they are not exactly in the “cloud” space. Same for OVH, probably.

          2. 1

            Full disclosure: Although I did not write this article, I do work at Lobelia on projects that use ARCO data, including Copernicus MyOcean and WEkEO.

            1. 3

              These days, if I wanted an ML-like language that compiles to JS, the first language I’d consider is Gleam, which also targets Erlang/BEAM. It is not as mature as ReScript, but on balance it offers access to an interesting backend alternative to NodeJS and a refreshing approach to async/the monad situation. I think ReasonML was more interesting for the same reason, cross-compiling between JS and OCaml, but I think ReScript no longer targets OCaml at all.

              1. 3

                I listened to most of this interview and did not at all realize that ReScript is, as you point out, a rebranded, modified version of Reason. When the host, Kris Jenkins, described it as sort of a CoffeeScript for 2025, I figured it was a completely new language. When asked if ReScript is a functional language, Nordeborn basically said (if you’ll forgive the paraphrasing), “Look, forget about functional programming. This is basically JavaScript, plus pattern matching, minus some bad design choices. It’s a nicer JavaScript.” From a branding/communications point of view, they do seem to be making a hard break away from ML.

                1. 1

                  yeah, I was a bit taken aback when they decided to diverge from ocaml, but thinking about it some more the javascript-first approach has definite advantages. there’s always js_of_ocaml and melange if you want full ocaml in the browser, and it’s interesting to see what optimisations and quality of life improvements you can make if you drop that compatibility. in other words, there’s room in the ecosystem for the niche rescript is targeting.

                2. 6

                  “Lock all the nerds in a room and don’t let them leave until they figure it out” is something I have actually heard in my many years working in ecommerce, particularly around Black Friday. It strikes me as management theater. If the pandemic lockdown taught me anything, it’s that a few people who know and care about the affected systems, working in an environment of their choosing, can work much more quickly than an under-ventilated roomful of random people who don’t know each other and would understandably rather be anywhere else.

                  Also…

                  Unfortunately, production (many many machines) was running the last release which had been cut WELL before that point.

                  I have been burnt by this many times, especially in deployment workflows where there is no way to tell for sure what’s running on prod. (I wonder in this case whether the release was tagged. In some shops the “release” is just whatever the commit was at deployment time.) It’s now my first question in a prod crisis: “What version is running on prod?” (And if I have any reason to doubt it, “How do we know?”)

                  It seems like CI workflows are collectively converging on using a branch (i.e. main) as a mirror of what’s on production, but there is still some scattered resistance. To that resistance I would argue that knowing what’s on prod and being able to roll it back quickly is almost always more important than whatever safety people think they’re buying themselves by making prod deployments convoluted and opaque.
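
                  To make that concrete, here’s a minimal sketch of one way to always know the answer (my own illustration, not from the article; the GIT_SHA variable and the /version endpoint are hypothetical names): bake the commit into the build and expose it.

                  ```typescript
                  // version-endpoint.ts: a sketch that exposes the commit the server was built from.
                  // Assumes CI injects GIT_SHA at build/deploy time, e.g.:
                  //   GIT_SHA=$(git rev-parse HEAD) node server.js
                  import http from "node:http";

                  const buildInfo = {
                    sha: process.env.GIT_SHA ?? "unknown",
                  };

                  http
                    .createServer((req, res) => {
                      if (req.url === "/version") {
                        // First question in a prod crisis: hit this endpoint.
                        res.setHeader("content-type", "application/json");
                        res.end(JSON.stringify(buildInfo));
                        return;
                      }
                      res.statusCode = 404;
                      res.end();
                    })
                    .listen(8080);
                  ```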

                  1. 2

                    What do other people’s experiences tell them about the relative costs of building unneeded features vs. writing clean-ish code? Surely I can’t be the only one who has seen more projects fail because of a lack of focus (building too many of the wrong features at the expense of the features users actually need) than because the author(s) took a few extra minutes here and there to keep things tidy.

                    To be clear, I’m not advocating premature abstraction, 3NF-ing everything to death, using the most complicated features of a language or framework, or reinventing wheels. I’m also not suggesting that ugly code can’t be useful to users. I’ve seen that it can be. But I’ve also seen clean-ish code (e.g. decent variable names, better state management than just throwing things in globals, clear separation of concerns, slightly more descriptive commit messages than “WIP”) written reasonably quickly that could also be readily extended with more features because it wasn’t one big ossified mess from the start.

                    1. 17

                      TBH I don’t think you’ve defined OOP specifically enough for this question to have a meaningful answer. OOP “done right” will mean wildly different things to different people, to the point where I don’t actually believe it’s worth using the term to have meaningful conversations any more. It’s much less confusing to focus on specific aspects.

                      You mentioned polymorphism, so that’s a great start. When are methods worth it? I wrote a big, long post about different approaches to methods or method-like functions here: https://technomancy.us/197 Even just narrowing it to the question of methods, I’ve listed four different ways to approach it (plus avoiding methods altogether), and they all have trade-offs. In my own opinion, tying methods to classes accomplishes virtually nothing other than making Java or Java-like programmers more comfortable; beyond that, different approaches trade off reloadability, transparency, and encapsulation in ways that don’t generalize that well across languages.
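
                      To illustrate the “methods need not live on classes” point, here is a small TypeScript sketch of my own (not taken from the linked post): the same polymorphic operation written as a plain function dispatching on a tagged union, with no class in sight.

                      ```typescript
                      // Polymorphism without classes: one function dispatching on a discriminated union.
                      type Shape =
                        | { kind: "circle"; radius: number }
                        | { kind: "rect"; width: number; height: number };

                      function area(shape: Shape): number {
                        switch (shape.kind) {
                          case "circle":
                            return Math.PI * shape.radius ** 2;
                          case "rect":
                            return shape.width * shape.height;
                        }
                      }

                      console.log(area({ kind: "circle", radius: 2 })); // ~12.566
                      ```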

                      You could ask the same question about message passing, which is what some people claim is “OOP done right”, and it’s a different discussion almost completely unrelated to the above.

                      You could ask the same thing about encapsulation, but in that case the answer is going to be “it’s always worth it unless you plan to throw away the code next week”. You could ask the same thing about inheritance, but I would say it’s only useful when removed from the concept of classes altogether, where you can allow data to be inherited; for example, the way Emacs allows scheme-mode to inherit from programming-mode, which inherits from text-mode: not a class in sight.

                      1. 7

                        Yeah, this is what I’ve run into. Critics of OOP will define it by the features and patterns that set “OOP languages” like Java and C# apart from other languages (e.g., inheritance, or Armstrong’s observed banana-gorilla-jungle “pattern”). Proponents will rarely define OOP, often preferring to define it by what it’s not (“it’s NOT about inheritance!”), and when they’re willing to offer an affirmative definition, it’s usually around some pedestrian feature like “encapsulation” or “message passing” that exists prominently in nearly every paradigm. It’s also worth noting that proponents of OOP, including the authors of any number of textbooks about it in the 1990s and 2000s, agreed with critics that inheritance was largely the key identifying feature of OOP, although many of today’s OOP proponents will point out that Alan Kay invented the term and described it as being (very vaguely) about “message passing”.

                        It seems like “OOP” is just a terrible term that means wildly different things to different people. My feeling was that despite Alan Kay’s original definition, we generally used to agree on its definition in the 1990s and 2000s, but that has since changed dramatically as we all came to agree that inheritance as the default reuse/abstraction mechanism was a bad idea.

                        1. 9

                          On top of all this, Alan Kay didn’t actually invent the term! The first attested use of it for programming languages was actually by Barbara Liskov: https://ieeexplore.ieee.org/document/1702384

                          1. 4

                            Here’s a conference abstract from earlier in 1976 where Jones and Liskov use the term https://dl.acm.org/doi/10.5555/800253.807680

                            And an open access paper with a similar title from 1978; tho by then they were talking about strongly-typed languages instead of object-oriented languages https://dl.acm.org/doi/10.1145/359488.359493

                          2. 7

                            I think Alan Kay’s “OOP is message passing” was a retcon he attempted to do in the late 90s, 20 years too late. Or he’s angry that we as an industry failed to read his mind back in the 70s.

                          3. 4

                            Much of the struggle is wading through many terminological conflicts. Alan Kay’s quotation about message passing is possibly the safest, if we care about the sign/symbol “object-oriented”, so I ended with it.

                            But I don’t care about the label so much as the chosen concept set(s). If people give different arguments for different definitions, that would seem even more insightful. That’s what I was trying to get at with Van Roy’s approach, or Go’s vs. Rust’s vs. Java’s, etc.: how can we better model a given domain by adding such-and-such features?

                            But when the Meta Object Protocol lets you switch between implementations at will, how do you decide what kind of object you want?

                            1. 5

                              Fair enough! I think message passing and methods are certainly the most interesting concepts to unpack here; most of the other concepts are kind of one-dimensional. I don’t know Golang or Rust, but I’ve found in the languages I’ve used (Clojure, Scheme, and Lua/Fennel) what feels like an inherent tension between encapsulation and repl-friendly transparency.

                              For example, hiding your data in a closure (cf the old “closures are a poor man’s objects; objects are a poor man’s closures” adage) makes it nice and tidy; you can ensure it isn’t exposed to code that shouldn’t have access to it, but that also means hiding it from yourself in the repl when you’re debugging, and that kinda sucks! I don’t know of any approaches that have managed to untangle that particular gordian knot.
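
                              A tiny TypeScript sketch of that trade-off (mine, not from any of the languages mentioned above): the same hiding that protects the state is what makes it invisible when debugging.

                              ```typescript
                              // State hidden in a closure: encapsulated, but also invisible at a repl.
                              function makeCounter() {
                                let count = 0; // private: nothing outside this function can touch it
                                return {
                                  increment: () => ++count,
                                  current: () => count,
                                };
                              }

                              const counter = makeCounter();
                              counter.increment();
                              console.log(counter.current()); // 1
                              // There is no counter.count to inspect; the barrier that protects the
                              // state is the same barrier that hides it while you debug.
                              ```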

                              1. 4

                                but that also means hiding it from yourself in the repl when you’re debugging

                                Both JavaScript and Julia let you access the captures of a closure, in the REPL or debugger. Is that an unusual feature?

                                1. 2

                                  Having it in a debugger, sure; that’s pretty common. But I have learned about 15 different languages, and this is the first I’ve heard of it working in a REPL! How does it work? Does it look like a data structure field access on the function, or what?

                                  1. 4

                                    In Julia, each closure is desugared into a (callable) struct. The fields of the struct are simply the names of the captured variables, so you can just go f.x. So you can access it programmatically - not just as a special trick in the REPL.

                                    (All this happens as a purely syntactic transformation in the first compiler pass, called “lowering”, which runs prior to any semantic analysis and where other things happen too, like desugaring x += 1 into x = x + 1.)
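
                                    (Roughly the shape of that transformation, hand-sketched in TypeScript since I can’t show Julia’s actual lowering output here; the names are made up.)

                                    ```typescript
                                    // Before lowering: an ordinary closure capturing x.
                                    const makeAdder = (x: number) => (y: number) => x + y;

                                    // After lowering, conceptually: a callable struct whose fields are the captures.
                                    class Adder {
                                      constructor(public x: number) {} // the capture, now an inspectable field
                                      call(y: number): number {
                                        return this.x + y;
                                      }
                                    }

                                    const f = new Adder(10);
                                    console.log(f.call(32)); // 42
                                    console.log(f.x); // 10: the capture is accessible programmatically
                                    ```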

                                    (In Javascript I may have been mistaken and was thinking of the debugger).

                                    1. 3

                                      Fascinating; thanks. I’ve never heard of anything like this.

                                      However, it is a bit concerning that it offers programmatic access, as “closures for privacy” is a pretty important encapsulation technique in many languages, and this kind of … demolishes that concept. Like … that barrier could be annoying at times, but it’s often a load-bearing barrier.

                                      I guess Julia must have to use alternate measures to enable information hiding?

                                      1. 2

                                        No, not really. It’s a language written for scientific code (simulation, optimization, etc); hiding things isn’t necessarily helpful in that domain.

                                        Note Julia does have a reasonable split between mutable and immutable data, and closures are immutable so they are read only and strongly typed (the captures may have interior mutability though).

                                        But yes you can easily hand-edit the internals of a hash map at the REPL and get it to crash. Doing so by accident is, thankfully, not so easy. You tend to interact with things outside your module through interface functions/methods, so encapsulation does tend to happen in practice.

                                2. 3

                                  There’s also the autonomous-actors approach, which I’ve seen described in glowing terms: growing a program and having it just solve the problem for you. Alas, I’ve never been able to grok that zen (and can’t find such descriptions now). Once, I thought I had it and tried searching through the living classes/objects and otherwise organizing them through Prologian logic, but no dice. I’m not sure how OOP it is, but some do cool stuff along those lines:

                                  we pick the features that make Erlang into an actor programming language, and on top of these we define the concept of a pengine – a programming abstraction in the form of a special kind of actor which closely mirrors the behaviour of a Prolog

                                3. 3

                                  Using Alan Kay’s definition as you have, object-oriented programming is not so much a set of language features as an architectural pattern that is more or less difficult to implement depending on the language/framework paradigm you’re working in. Your question, “What does it have to offer?”, is therefore almost impossible to answer. That’s not your fault, of course. Alan Kay’s definition is so loose, it could be used to describe languages (or at least ways of programming in languages) that don’t have classes at all. With a bit of squinting, The Elm Architecture (model view update, an excellent example of functional GUI programming architecture) could even be said to be an exemplar of object-oriented programming by Kay’s definition. But that flies in the face of what I assume people mean when they use this term.

                                  Judging informally from decades of exposure to the ecosystems of Java, C#, JavaScript, and a bit of Python, what people really mean by OOP is tightly coupling data and logic with classes. These classes often inherit from other classes and, in a statically typed context, implement interfaces. “No, no!” some adherents will object. “Composition over inheritance!” But then you look at the code people actually write and…

                                  1. 3

                                    Alan Kay’s definition is so loose, it could be used to describe languages (or at least ways of programming in languages) that don’t have classes at all.

                                    I feel like that was sort of the point. Joe Armstrong called Erlang the only object-oriented programming language, and here’s Alan Kay saying that Armstrong may in fact have been correct in saying so. The definition isn’t loose; Kay simply narrowed in on what exactly he thought was the important part of OOP up to that point. Kay isn’t the king of what terms mean, so he can’t tell anyone what OOP must mean, but I think his definition is quite sensible.

                                    1. 2

                                      Alan Kay’s definition is so loose, it could be used to describe languages (or at least ways of programming in languages) that don’t have classes at all.

                                      You appear to be confusing the C++ and Kay definitions of OOP and either not realizing these are two separate things, or not addressing them as such.

                                      1. 3

                                        I see the distinction and am saying most programmers neither know nor care about Alan Kay’s definition. I may personally find it a very sensible definition by which to organize and orchestrate a complex program, but what good is it if most people hear “OOP” and think classes?

                                        1. 1

                                          I see, so instead of dealing with ambiguity head-on, you choose to deny it exists and absolutize the definition of your choice. I don’t think that’s a particularly helpful approach here.

                                          For instance, a cursory search of OOP articles on lobsters will show you that this comes up a lot.

                                          1. 1

                                            I think you misunderstand. I acknowledge the ambiguity, and I actually share your preference for Alan Kay’s definition. The popular definition is not my choice. I am merely recognizing that the English language tends towards descriptiveness rather than prescriptiveness. That means that if most people mean classes when they say OOP (“data in the form of fields (often known as attributes or properties), and code in the form of procedures (often known as methods)”), then that is what it means, regardless of what you or I prefer.

                                            1. 2

                                              Rather than repeat my earlier criticism, I will simply point out that the most popular OOP language of all time is prototype-based, not class-based.

                                4. 2

                                  This is the part where your neighborhood Nushell zealot mentions how nice it is to edit its path config. The out-of-the-box config comes with a tidy list of piped prepend commands ending in a uniq command to remove accidental duplicates.

                                  1. 2

                                    this was almost me 🤐 either that or i could mention how on windows it’s a simple global list that applies to all processes/shells/terminals/whatever!

                                  2. 5

                                    Avoid features that add disproportionate cost

                                    How does one quantify the cost of a feature before it’s built?

                                    1. 5

                                      I tend to think the opposite — that the act of building a feature gives very little information about its cost. I’d break down the cost of the feature into three buckets:

                                      • A) fundamental costs obvious from first principles (e.g., this feature needs information available on the server and adds a network round-trip of latency)
                                      • B) costs related to implementation (we know the big-O of the algorithm from the outset, but we can only reliably figure out the constant once we build & measure)
                                      • C) opportunity/interaction costs — how this feature interacts with different (past and future) features (adding one feature is fine, but adding three means we run out of menu space and need to introduce a hamburger, which requires JS)

                                      Implementation work gives you information about B, but, it seems to me, it’s A & C where the heavy costs typically are.

                                      1. 2

                                        Your points about estimation are interesting and helpful as guidelines, but I’m confused by your initial claim:

                                        The act of building a feature gives very little information about its cost.

                                        If you have a version of some software before a feature was added and another version of it after, is it not trivial to compare metrics between them (e.g. CPU and memory use; and for a web app, things like total page size and TTI)?

                                        1. 4

                                          By the way of analogy, hitting your thumb with a hammer doesn’t give you much information about how painful it is, because it is bloody obvious from the outset that this’ll hurt, no need for experiments there.

                                    2. 4

                                      There’s a general feeling that shell scripting is difficult but also that switching to a different less standard scripting language (fish, nushell, etc) brings its own problems.

                                      My tolerance to ditch a shell script and go to a scripting language is pretty low. It’s just too messy and powerful. Screwing up can be costly so I don’t even bother.

                                      FWIW, the testing story in Nushell is pretty nice, and the structured data types make mistakes less costly and easier to diagnose. Not prod-ready yet, but IMO so much nicer than POSIX.

                                      1. 2

                                        I’d love to see a modern terminal implementation. Using non-standard keys shouldn’t be this hard. Some, like Kitty, have tried, but those attempts are more of an add-on than a redesign.

                                        If you want to better manage your command history by project with docs/comments take a look at tome playbooks (by me). It also helps with multiline pasting.

                                        https://github.com/laktak/tome

                                        Extrakto, also for tmux, makes it easier to select and copy (also mine):

                                        https://github.com/laktak/extrakto

                                        1. 1

                                          Curious if you’ve tried Ghostty. Seems promising. I’ll need history search and better scrollback UX before I switch full time myself, but I believe the keybindings are configurable and the speed is impressive.

                                          1. 1

                                            I meant the keys for apps running in the terminal. For example mapping different actions for Tab and Ctrl+I in vim can be difficult, even more so with tmux in the middle.

                                          2. 1

                                            I’ve thought about this too. I think it’s gotten to the point where you’d kinda need to be willing to throw away old bad ideas and accept that it will break some programs. Make an experimental rework of some of the core pieces, wait for/help libs like ncurses and termbox to use that functionality, and then help other terminals implement it. Even just a standard feature-detection mechanism would be really nice.

                                            1. 2

                                              Not sure you’d have to break anything if you add a translation layer. Isn’t that what tmux and vim with its integrated terminal are doing already?

                                            2. 1

                                              Some, like Kitty, have tried but those are more of an addon instead of a redesign.

                                              “Fully rewrite the terminal” is not the path forward. Kitty’s design is appropriate; as I understand it, it’s basically a switch that enables a full keyboard protocol that any app can opt into. Just like Kitty was able to add a full graphics protocol to the terminal, it can be extended; people just need to agree on extensions.

                                              1. 2

                                                I’d rather have a new terminal protocol and a legacy layer. That would encourage more developers to support it IMO and make it easier for users as well to find out what works with what.

                                                1. 2

                                                  I’d rather have a new terminal protocol and a legacy layer.

                                                  Cool, go for it.

                                            3. 2

                                              There were some strong differences of opinion about the music played on the office Sonos at one place I worked. To avoid debate, some people would just anonymously start cutting each other off after they felt a suitable time playing someone else’s playlist had passed. As the frequency of these abrupt switches between playlists increased, the appropriateness of the music declined. Things came to a head. I suggested we get rid of the Sonos, but I was in the minority. So I offered to write a Slack bot that would be the office DJ. If you wanted to play anything, you would go to a specific channel and tell it. By default the bot would queue requests, but there was a command to clear the playlist. My idea was that everyone could see who was playing what and we could debate each other directly instead of complaining at the rest of the office in general.

                                              Writing the bot was fun. I used an old laptop and left it running on my desk. It’s been some years and I can’t remember which API I used—it might have been a third-party one that put a REST layer over something lower level—but it turned out to be not very reliable. My colleagues were good natured enough to try my bot, but it didn’t work well at all. Imagine trying to ask Ash, the synthetic from the first Alien movie, to DJ for you while he was flopping around: Some requests would be fulfilled, but the API did not always respond with an acknowledgement. Or the API would fulfill a request several minutes after it was made. Some requests were neither acknowledged nor fulfilled. Often, the queue returned by the API did not reflect what was actually in the queue. I think the Sonos even crashed. This confused the Slack half of my bot considerably and my colleagues even more.

                                              The only positive to come out of it was that my feckless bot broke the tension amongst the humans. After I gave up on it, most of us just asked the one guy we agreed had good taste to put something nice on.

                                              1. 2

                                                My friends’ program DisOrder was written to be an office jukebox https://www.greenend.org.uk/rjk/disorder/

                                                1. 1

                                                  As the frequency of these abrupt switches between playlists increased, the appropriateness of the music declined.

                                                  You must have been tempted to write a bot that switched at times on the order of a millisecond, and then the switching itself could have played a nice tune.

                                                2. 10

                                                  I’ve been burnt a few too many times wearing the scrappy hat and being told that, since the work is basically done, it’s been promised to important people that it will go to prod in X weeks. Then that scrappy code becomes a sad captain’s code very quickly. There are light-years between code that’s good enough for the happy path of a demo and code that can be trusted to actually work for users. It’s fine to cut corners in some places when building PoCs and prototypes. It’s even desirable not to optimize prematurely. But I’ve learned the hard way to choose dependencies, design foundational architecture, and generally write my code like I’m going to have to live with it for the next five years—because I have, even when I didn’t expect to.

                                                  1. 5

                                                    Interesting to think about yourself as a crew – you may be wearing the scrappy hat, but you report to your future self, who is wearing the captain’s hat.

                                                    1. 1

                                                      I feel that. But I’ve also put too much care and polish into projects which I’ve later thrown away. It’s hard to say for sure, of course, but some of that code might have survived longer by being scrappier and more ambitious earlier in its lifecycle. So, not disagreeing, but I think there’s a balance to be struck.

                                                    2. 21

                                                      This seems to be related to the problem of “why do people adopt worse systems with better docs and discussion instead of better systems without those things”.

                                                      I’ve seen at least one community seeming to shift out of discord and Slack to publicly-crawlable fora for exactly this reason, to make it easier for knowledge to get hoovered up by the bots–and I kinda don’t hate it.

                                                      1. 7

                                                        There’s so many forms of this. I’ve dealt with a team at $bigcorp adopting a worse system without docs, over a better system with docs, and then rebuilding (in a worse way) the features of the better system slowly. They picked it because the better system was written by a department they saw as competing with them.

                                                        The point is that if the decisions are based on no technical factors, then whether or not LLMs support the thing well is also not going to factor in.

                                                        1. 3

                                                          This. I don’t think this title makes a ton of sense tbh - innovation doesn’t necessarily mean using the new thing, it can also mean old things adding new things because they get so popular they get the resources to do it.

                                                          1. 1

                                                            Not just “worse systems with better docs” but also “systems with more libraries”. For better or worse, LLMs amplify the increasing returns to scale that benefit larger programming and technology communities.

                                                            Which probably means we’re in for an even longer period in which innovation is not actually that innovative - or perhaps highly incremental - while we wait for new, disruptive innovations so powerful that they overcome all the disadvantages of novelty. But really that’s just life: as we eat more of the low-hanging intellectual and technological fruit, we move, like any other field, toward equilibrium periods punctuated by revolutions. In some ways this is good: if you want to do something new, you have to do something really new and figure out how to launch it.

                                                            1. 1
                                                              1. 3

                                                                Over in Elixir, some of the projects did this. Now, it could simply be that they were sick of losing things to Slack backscroll, but I recall somebody mentioning they wanted to have their framework better supported. I don’t intend this to be fake news, of course, but memory is a fallible thing.

                                                              2. 1

                                                                Interesting. I’m curious for an example.

                                                              3. 1

                                                                It surprises me that the creator of Zod, which embeds native errors in Zod’s error result type, would agree to omit them here. Returning native errors in result objects gives the benefit of both the explicit error handling of strongly typed FP languages and the call stack and nested causes of native JavaScript.
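
                                                                For anyone who hasn’t seen the pattern, here is a rough sketch of what I mean (my own types, not Zod’s actual API): the result is explicit, but the error inside it is still a native Error, stack and all.

                                                                ```typescript
                                                                // Explicit, typed error handling that keeps the native Error,
                                                                // so .stack and .cause survive.
                                                                type Result<T> =
                                                                  | { ok: true; value: T }
                                                                  | { ok: false; error: Error };

                                                                function parsePort(input: string): Result<number> {
                                                                  const port = Number(input);
                                                                  if (!Number.isInteger(port) || port < 1 || port > 65535) {
                                                                    return { ok: false, error: new Error(`invalid port: ${input}`) };
                                                                  }
                                                                  return { ok: true, value: port };
                                                                }

                                                                const result = parsePort("80a");
                                                                if (!result.ok) {
                                                                  // The call stack (and any nested causes) are still here.
                                                                  console.error(result.error.stack);
                                                                }
                                                                ```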

                                                                1. 4

                                                                  I have been playing a bit role over the last couple of years bridging the gap between old ways of working with climate data and new ones. (The new way is often put under the umbrella term “ARCO”: Analysis-Ready, Cloud-Optimized.) I’m relatively late to the climate tech / GIS / web cartography party, having previously worked in ecommerce, so I’ve been spending a lot of time reading and implementing standards specifications and reading the docs for various other tools along the way. It’s an amazingly rich ecosystem, albeit, for a newcomer like me, occasionally a bit like drinking from a fire hydrant.

                                                                  Some of the old-ish tools that are still in common use include:

                                                                  Some classics that have aged pretty well include:

                                                                  • D3: I hadn’t used it much in previous jobs, but it’s pretty much mandatory knowledge for any non-trivial JavaScript data visualization. Amelia Wattenberger has the best write-up I’ve read so far for binding D3 to React components. The basic idea is to use your component library to declaratively render the SVG elements and D3 for the math (see the sketch after this list).
                                                                  • PostGIS
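
                                                                  Here is a minimal sketch of that division of labor (my own, not from Wattenberger’s article; it assumes react and d3 as dependencies): D3 computes the scales and the path, React declaratively renders the SVG.

                                                                  ```tsx
                                                                  import * as d3 from "d3";

                                                                  // D3 for the math (scales, path generator); React for the rendering.
                                                                  function LineChart({ data }: { data: [number, number][] }) {
                                                                    const x = d3
                                                                      .scaleLinear()
                                                                      .domain(d3.extent(data, (d) => d[0]) as [number, number])
                                                                      .range([0, 400]);
                                                                    const y = d3
                                                                      .scaleLinear()
                                                                      .domain(d3.extent(data, (d) => d[1]) as [number, number])
                                                                      .range([200, 0]);
                                                                    const line = d3
                                                                      .line<[number, number]>()
                                                                      .x((d) => x(d[0]))
                                                                      .y((d) => y(d[1]));

                                                                    return (
                                                                      <svg width={400} height={200}>
                                                                        <path d={line(data) ?? ""} fill="none" stroke="steelblue" />
                                                                      </svg>
                                                                    );
                                                                  }
                                                                  ```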

                                                                  Some of the new-ish ones include:

                                                                  1. 2

                                                                    GeoJSON and TopoJSON are both pretty old though? GeoJSON pre-dates D3 by a few years. TopoJSON has been around for a little over a decade at this point as well.

                                                                    1. 1

                                                                      Good catch. Looking it up, it seems GeoJSON’s initial specification was in 2008 and D3’s initial release was 2011. Somehow I had it in my head that GeoJSON was newer than that and D3 older. In that case, I would move GeoJSON and TopoJSON to the “classics” category, with an emphasis on TopoJSON if one is working with regions that have shared boundaries.

                                                                    2. 2

                                                                      I’m interested in how such disparate systems deal with geographic data. What are the pros and cons of GeoJSON vs. TopoJSON or STAC?

                                                                      I have a lot of geo data (currently just using my own naive approach then drawing over google maps) but I’ve not explored prior art at all.

                                                                      1. 3

                                                                        For GeoJSON vs. TopoJSON: TopoJSON has advantages in storage size when geometries are clean and, like @adamshaylor said above, have shared boundaries. Otherwise, I don’t use it much.

                                                                        Back in 2013, for the software I helped build that now pays my bills, we decided on GeoJSON because, well, it’s just JSON. It’s easy enough to push JSON to a REST API endpoint for pushing geometries. TopoJSON could work but support for editing it isn’t really as robust. I can’t think of any of the popular frontend map editing libraries (geoman, leaflet-draw) that support editing it directly.

                                                                        There are a bunch of other formats that are prevalent. The shapefile is still pretty dominant, with KML and geodatabase (ugh) being popular choices as well. Really wish more folks would adopt geopackage, which itself has been around for a bit, but don’t see many people actually using it in the wild.
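
                                                                        For reference, here is a minimal GeoJSON Feature (a generic illustration typed in TypeScript; the coordinates and properties are made up). “It’s just JSON” really is the selling point:

                                                                        ```typescript
                                                                        // A minimal GeoJSON Feature: plain JSON, easy to POST to a REST endpoint.
                                                                        const feature = {
                                                                          type: "Feature",
                                                                          geometry: {
                                                                            type: "Point",
                                                                            coordinates: [-122.4194, 37.7749], // [longitude, latitude]
                                                                          },
                                                                          properties: {
                                                                            name: "San Francisco",
                                                                          },
                                                                        } as const;

                                                                        console.log(JSON.stringify(feature));
                                                                        ```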

                                                                    3. 6

                                                                      Here’s a timely reminder that Amazon is not responsible for supporting third-party S3 stuff, so maybe it’s not such a great idea to pretend like it’s some kind of open standard and bake it into a runtime.

                                                                      1. 10

                                                                        I forced myself to use Tailwind for a pet project a few years ago. I hated Tailwind’s concept, but so many people appreciated it that I had to try it. I suggest giving it a try even if it’s against your beliefs. It has pros and cons.

                                                                        I’m also no longer convinced about having a purely semantic HTML tree and only using CSS stylesheets for presentation; it takes time and no one cares. It makes for cool demos like CSS Zen Garden, but most of the time the HTML and the CSS are tightly coupled anyway.

                                                                        1. 11

                                                                          The irony of the CSS Zen Garden is that it used the name of a non-dualistic religion to propound a dualism of semantic content and its presentation. A lot of people, myself included, tried to build real-world websites this way. It did not work very well.

                                                                          1. 6

                                                                            I share a similar experience. Initially skeptical (and dubious about the concept of a property being more or less equal to a class), I’m now convinced by their approach.

                                                                            I still enjoy pure CSS for minimal web pages / web apps, but Tailwind is my tool for more complex ones.

                                                                          2. 15

                                                                            People are rightly hard on tailwind.

                                                                            And yet… for someone who sucks at front end responsive design I can’t deny it didn’t take me long to get a decent website up and running.

                                                                            1. 13

                                                                              IME, the critics of Tailwind CSS are often CSS experts. So to them, “CSS isn’t hard”. The Tailwind abstractions just seem an extra step to them.

                                                                              To me, though, they’re very useful and portable.

                                                                              1. 9

                                                                                This is an okay stance to take, & on the other hand I can agree that CSS isn’t hard & don’t want to memorize a billion classname conventions. But what grinds my gears is when a tech lead or back-end team has mandated it for a front-end team that would prefer not to have that decision made for them—as can be said about most tools put on teams by folks not on those teams.

                                                                                To me, if I want something that follows a system, the variables in Open Props cover what I need to have a consistent layer a team can use—which is just CSS variables with a standardized naming scheme that other Open Props modules can use. It is lighter weight, doesn’t need a compile step to be lean, & lets you structure your CSS or your HTML as you want without classname soup.

                                                                                1. 7

                                                                                  I can agree that CSS isn’t hard

                                                                                  It may not be hard to write, but it is certainly hard to maintain. CSS makes it SO EASY to make a mess. In no time you’ll be facing selector-specificity hell. If you have a team with juniors, or just some backend folks trying to do some UI, that’s very common.

                                                                                  “But what about BEM?” I like BEM! But again, it’s an extra step and another thing to learn (and you’re choosing not to deal with CSS specificity in order to avoid its pitfalls).

                                                                                  IME, the BEM components I wrote were more effective and portable the smaller they were. I ended up with things like text text--small text--italic, which were basically in-house versions of Tailwind (before I knew what it was).

                                                                                  So, to paraphrase Adam, I’d rather have static CSS and change HTML than the reverse.

                                                                                  1. 1

                                                                                    You can use utility classes & Open Props names & still not use exclusively utility classes. No one has said Tailwind can’t be used for utilities where needed, but in practice I see almost all names go away. There is nothing left to select on that works for testing, or scraping, or filter lists.

                                                                                    Having a system is difficult since you have to make it stringly-typed one way or another, but that doesn’t discount the semantics or the balance that needs considering. Often the UI & its architecture are low-priority or an afterthought, since matching the design as quickly as possible tends to trump anything resembling maintainability, & the same can happen in any code base if no standards are put in place & spaghetti is allowed to pass review.

                                                                                    It really is just the same old tired arguments on both sides here tho. This isn’t the first time I have seen them, & that post doesn’t really convince me given that it has an agenda & some of the design choices seem intentionally obtuse, making no use of “modern” CSS from the last 4–5 years.

                                                                                  2. 2

                                                                                    but what grinds my gears is when a tech lead or back-end team has mandated it on a front-end team that would prefer to not have that decision made for them

                                                                                    Is this something that happened to you? Why would the back-end team decide on what technology the front-end team should use?

                                                                                    1. 2

                                                                                      On multiple occasions I have seen a CTO, tech lead, or back-end team choose the stack for the front-end either before it was even started or purely based on some proof-of-concept the back-end devs built & did not want the stack to change… just for it to be entirely rewritten/refactored.

                                                                                2. 8

                                                                                  It raises the question of whether the abstractions adopted by CSS are the right ones in the long term, as other solutions are more team-friendly and easier to reason about.

                                                                                  1. 2

                                                                                    I find the original tailwind blog post really enlightening: https://adamwathan.me/css-utility-classes-and-separation-of-concerns/

                                                                                    Separation of presentation and content makes a lot of sense for a document format, and has been widely successful in that segment (think LaTeX). But for how the web is used today (think landing pages or web apps), the presentation is often part of the content. So the classic “Zen Garden” model of CSS falls apart.

                                                                                    1. 1

                                                                                      I’m in the same boat. I was appalled by it the first time I saw it.

                                                                                      But it fits my backend brain and makes building things so much easier. I like it in spite of myself. I’ve just never been able to bend my brain to automatically think in terms of the cascade, and reducing that to utilities makes it so much more workable for me, and lets me try things out faster. I’m excited about this release.

                                                                                      1. 1

                                                                                        I am happy that you got a decent website up with Tailwind. I’m sad that you had a hard time with CSS and that the conclusion you reached was that you were the one who sucked.

                                                                                        1. 0

                                                                                          I really can’t be bothered to learn CSS or deal with “web standards”.

                                                                                        2. 4

                                                                                        As the graph in the article shows, the decline in activity on SO began way back, around 2018. Maybe LLMs accelerated the decline in late 2022, but clearly it’s been in descent for a while. I suspect a lot of this has to do with the rise of web-based VCS hosts with bug-tracking and discussion features, and possibly also author-designated forums and chat platforms like Discord. I would be curious to hear what popular OSS authors think, but I have heard from some of them that they use their community’s Q&A forums (wherever they may be) as a form of feedback. If lots of people get confused by the same thing, the author can recognize it as a design problem rather than the confusion of an isolated individual. But if everyone uses LLMs or Copilot to resolve issues, the feedback loop is severed.

                                                                                          1. 3

                                                                                          this is a really good point. I barely ever used SO even when I was starting out (mainly because I had no idea it existed and the language I was using was obscure enough that SO would have been useless anyway). I and the community surrounding that language just used a forum, which imo was better suited, as it was close to the source so to speak.

                                                                                          but this notion of LLMs making access to those questions impossible is something I hadn’t considered. questions and confusion are such good feedback for designing better error messages and better APIs/CLIs/UIs; improving the experience of a tool/library is invaluable, and if part of that is disappearing into private chats with LLMs, that’s going to really affect the feedback loop.

                                                                                          getting this data out of LLMs is, I think, going to come in the form of Google Search Console-style tools, pushed by the SEO world. now that searching is shifting to LLMs, I and others in the marketing/seo/adjacent spaces will need to know what people are asking, and I foresee companies like Perplexity (whose primary business is probably going to be ads/search) building these tools for data access. whether or not that ends up being a net good we’ll see, but it’s certain that understanding what people are asking LLMs is very demand-sided right now, with virtually zero supply.

                                                                                          2. 9

                                                                                            Our code sits in a monorepo which is further divided into folders. Every folder is independent of each other and can be tested, built, and deployed separately.

                                                                                            So what’s the point of using a single repository then? The code clearly is independent of each other, and they want it to be tested independently of each other.

                                                                                            1. 20

                                                                                              Why would you split it? Maintaining separate repositories just seems like extra bookkeeping and toil.

                                                                                              1. 4

                                                                                                It is. So much overhead.

                                                                                                Multirepo in the same repo is the way to go.

                                                                                                1. 3

                                                                                                  Multirepo in the same repo is the way to go.

                                                                                              My understanding of the term “multi-repo” is that it refers to an architecture wherein the codebase is split out into separate repositories. Your use seems to mean something different. Are you referring to Git submodules?

                                                                                                  1. 2

                                                                                                Many people consider a monorepo to be a situation where all the things in the repo have a coherence when it comes to dependencies or build process. For me, a monorepo is also worth it if you just put fully independent things in separate subfolders of the same repository.

                                                                                                    Are you referring to Git submodules

                                                                                                    I would never. git submodules are bad.

                                                                                                2. 3

                                                                                              Access control, reducing the amount of data developers have to clone, sharing specific repositories with outside organisations, avoiding problems exactly like the ones this blog post outlines, etc.

                                                                                                  Now I know you’re going to say “well, we’ve got tooling that reads a code owners file for the first, some tooling on top of git to achieve the second, and an automated sync job with a separate history for the third” but all of that sounds like additional tooling and complexity you wouldn’t need if you didn’t do this. I think the monorepo here is the extra bookkeeping and toil.

                                                                                                  1. 8

                                                                                                We tried this too, releasing loosely coupled software in a monorepo all with the same version numbers. In this case semantic versioning doesn’t make sense, since a breaking change in one package would cause a major version bump, but another package might not have any changes at all between those major versions. The only versioning scheme that would make sense here is date(time)-based versioning, but that can be achieved without a monorepo. I agree with ~fratti; the benefit of a monorepo is not obvious.

                                                                                                    1. 4

                                                                                                      Why do you care about the version number? It’s all at the same commit, you don’t have to care about the version.

                                                                                                      1. 2

                                                                                                        In the mono-repos I’ve worked in, there have often been a mixture of apps, APIs, and libraries. If I release a new version of the app, I don’t want to release a new version of the libraries or API because it implies a change to downstream users that doesn’t exist. The fact that mono-repo tools and the people who use them encourage throwing away semver is evidence to me that the modularity pendulum has swung from micro-everything to mono-everything in far too extreme a way.

                                                                                                        1. 2

                                                                                                          In the mono-repos I’ve worked in, there have often been a mixture of apps, APIs, and libraries. If I release a new version of the app, I don’t want to release a new version of the libraries or API because it implies a change to downstream users that doesn’t exist.

                                                                                                          Why do you care? The entire point of a monorepo is saying “Everything in the repo at this point works together, so we release it at that commit ID”. In every monorepo I’ve used, the only identifier we ever used for a version was the commit hash when the release of the software and all its in-repo dependencies was cut.

                                                                                                          It seems very strange to talk about versions in a monorepo – the entire point of a monorepo is to step away from that.

                                                                                                          1. 1

                                                                                                        I think there are some folks who are missing what you describe as the point of monorepos. It sounds like the context(s) in which you use them are basically atomic applications. The parts of the application may be deployed in multiple contexts, but they are not intended to be used separately. I can see the appeal of monorepos there.

                                                                                                        Unfortunately, my experience has been considerably messier. Where the line gets crossed is where the pieces of such applications become public. Libraries get published as packages to registries. Web services get public docs. Now I don’t just have application users, I have users of the pieces of the application. This is where I start to care about versioning, because the users of these pieces care.

                                                                                                        Mileage clearly varies, but the tendency of people to treat monorepos as the default choice has, for me, resulted in inheriting monorepos that might have started as atomic applications but are no longer so. The benefit has been a few saved git clone commands and some deployment coordination/ceremony. The loss in time to tooling issues has been considerably more than that.

                                                                                                        2. 2

                                                                                                          Why do you care about the version number? It’s all at the same commit, you don’t have to care about the version.

                                                                                                          Are you asking me or about the original article?

                                                                                                      We release several of the loosely coupled software pieces within the company. In that sense not everything is in the same commit (or even the same repo), and downstream/outside users aren’t either, so we need to use version numbers. So in my mind a monorepo really only makes sense if you’re okay with datetime-based versioning, or if you’re working on tightly coupled pieces of software that you test and release together.

                                                                                                          About the original article I don’t know why they care or if they even do.

                                                                                                    2. 4

                                                                                                      The code clearly is independent of each other

                                                                                                  I’ve never used a monorepo, nor do I have any strong feelings for/against them. But I have seen them. This is kind of just how they usually end up; I don’t think this defeats the purpose of a monorepo though.

                                                                                                      1. 1

                                                                                                    It can be tested, built, and deployed separately; it can also be done together, without juggling versions, repos, dependencies, rollouts…