1. 10

    “Scrum is the opposite of Agile”

    I can’t disagree with that. Instead of the word “agile”, I often use the phrase “dynamic process tailoring”, because that captures how it works better than anything else. There is no standard for agile. It’s a marketing term. It labels the place in the bookstore you go to find current cool dev practices. Not having a standard is not actually a bad thing, because there’s no one way to do everything.

    [paraphrasing] “I’ve been in projects where I could see where the design was headed in three months, and instead of taking a week or two and coding that, the team spent three, six months and still never got completely there”

    This is one of the reasons I’m spending so much time on “how to talk about the solution”. Good teams have good conversations and Scrum “goes away”. The code “goes away”. Instead they just solve the problem quickly and move on to the next project. Bad or average teams get hung up on the details, whether in the tooling or in the process they’re using, and the more they get hung up the longer everything is going to take.

    While true, my statement above is not descriptive enough for people who’ve never seen it work in practice to grok it. More work needs to be done.

    1. 3

      I’m intrigued but you’re right that “just solve the problem and move on” sounds to me more like a description of what is wrong with many software teams. Getting hung up is of course not good, but constantly keeping one eye on possible improvements in tooling and process is in my experience necessary in order to not get stuck with diminishing returns. Sure, balance is needed.

      1. 1

        Yup, for such a simple thing there’s a lot of depth here that I don’t think the industry has adequately described, either to current practitioners or new folks entering the field. I know some of the advice I got really early on led me in the wrong direction. So many things in tech seem impenetrable/silly, until one day you get it, then it’s so trivial you don’t bother to (or can’t) explain it to the next guy. The real meaning of OO was like that for me, or the beauty of well-designed databases. This is another one of those things.

        You have to stay on top of the tooling infrastructure changes or you’d still be coding with stone knives and bearskins (Star Trek joke). While the result is perhaps one we can all agree on, getting there is something else entirely. I wrote a book on this, took a chapter and wrote a blog on it the other day, and plan on following up with another book on infrastructure/architecture. Shameless link: https://danielbmarkham.com/for-the-love-of-all-thats-holy-use-ccl-to-control-complexity-in-your-systems/

    1. 1

      Some really neat ideas, but also a lot of headshaking around things like the XLA/TensorFlow dependency. The serious numerical story in Elixir doesn’t exist because core doesn’t care and, frankly, most Elixir folks are basically just doing web stuff as Ruby + Rails refugees.

      Nice agitprop though if you want to boost your language’s popularity by riding the AI/ML gravy train.

      1. 17

        The serious numerical story in Elixir doesn’t exist because core doesn’t care and, frankly, most Elixir folks are basically just doing web stuff

        The same applied to Python way back. That the Elixir devs are attempting to broaden their ecosystem is a good thing.

        1. 6

          BEAM would not be my first choice for anything where I had to do some serious number crunching. I wonder who will even use this, and if it’ll die out like the Swift extensions for it.

          I use Elixir because it’s good at networking, concurrency, and parsing, not anything AI.

          1. 3

            I had the same initial reaction and would rather bet on Julia for the future of ML. But at the very least, breaking Python’s monopoly is good; after all, there was no particularly good reason for Python’s ascent in this field in the first place. So there’s no reason other ecosystems with certain advantages over Python shouldn’t try to make a dent here.

            A lot of ML projects use Ray (ray.io) for orchestration and workflows (or so I hear), basically implementing the actor model, and Elixir/BEAM is probably a more natural fit for that side of the problem.
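            For what it’s worth, the actor pattern that both Ray and the BEAM are built around fits in a few lines; here’s a toy illustration in Python (the class and method names are mine, not Ray’s API):

```python
import queue
import threading

class Actor:
    """Toy actor: one private mailbox, one thread, messages handled in order."""

    def __init__(self, handler):
        self._mailbox = queue.Queue()
        self._handler = handler
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while True:
            msg = self._mailbox.get()
            if msg is None:          # sentinel: shut down
                return
            self._handler(msg)

    def send(self, msg):
        # The ONLY way to interact with the actor: asynchronous messages.
        self._mailbox.put(msg)

    def stop(self):
        self._mailbox.put(None)
        self._thread.join()

# Usage: a "counter" actor that only ever touches its own state.
counts = []
actor = Actor(lambda n: counts.append(n * 2))
for i in range(3):
    actor.send(i)
actor.stop()
print(counts)  # [0, 2, 4]
```

            The appeal for orchestration is exactly this shape: no shared mutable state, just mailboxes, which is what the BEAM gives you natively.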

            1. 3

              On the other hand, not everybody needs serious number crunching. This could let people in the Elixir ecosystem stay within it for longer.

              1. 1

                Seems like they are using multistage programming to make this rather fast - I’m guessing this stuff wouldn’t be running on the BEAM directly? Could be wrong though.

              2. 5

                The serious numerical story in Elixir doesn’t exist because core doesn’t care

                Ehhhh. Clearly you don’t mean “Elixir core”, as Jose is the one writing this. Even if you mean the Erlang core team, they might not care enough to lead implementation but they’re open to additions that support it: https://github.com/erlang/otp/pull/2890

                Anyway, BEAM isn’t the logical execution environment for this stuff, it needs GPU to fly. For this use case, BEAM is a great place to write a compiler, and will function adequately as a test environment.

                1. 1

                  ¯\_(ツ)_/¯

                2. 5

                  FWIW, the compiler backend of Nx is pluggable, so compilers other than XLA can also be integrated. XLA just happened to be the first one they went with.

                  What do you mean by “the core doesn’t care”?

                  1. 5

                    Paraphrasing a colleague’s gripes…Elixir has a bunch of math functionality inherited from Erlang, and hasn’t bothered to fix any of the problems around it.

                    In erlang:

                    2> math:exp(344).
                    2.4963287283217065e149
                    3> math:exp(3444).
                    ** exception error: an error occurred when evaluating an arithmetic expression
                         in function  math:exp/1
                            called as math:exp(3444)
                    

                    In Elixir:

                    iex(3)> :math.exp(344) 
                    2.4963287283217065e149
                    iex(3)> :math.exp(3444)  
                    ** (ArithmeticError) bad argument in arithmetic expression
                        (stdlib 3.13.2) :math.exp(3444)
                    

                    Contrast with JS:

                    > Math.exp(344)
                    2.4963287283217065e+149
                    > Math.exp(3444)
                    Infinity
                    

                    What’s going on here is that Erlang uses doubles internally (except for integers, but that’s a different kettle of fish), but does not implement IEEE 754 support for special values like NaN or Infinity. This is certainly a choice they could make, but it rears its ugly head when you implement something mathematically well-behaved like the sigmoid function:

                    iex(5)> sigmoid = fn (x) -> 1.0 / (1.0 + :math.exp(-x)) end
                    #Function<44.97283095/1 in :erl_eval.expr/5>
                    iex(6)> sigmoid.(-1000)                                    
                    ** (ArithmeticError) bad argument in arithmetic expression
                        (stdlib 3.13.2) :math.exp(1000)
                    iex(6)> sigmoid.(-100) 
                    3.7200759760208356e-44
                    

                    In JS, exp(1000) would overflow to Infinity, the denominator would become Infinity, and the function would correctly return 0. In Elixir, it explodes.
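                    The usual workaround, for what it’s worth, is to write sigmoid in a form that never exponentiates a large positive argument, so exp can’t overflow regardless of whether the runtime supports IEEE infinities. A sketch in Python (whose math.exp, like Erlang’s, raises on overflow rather than returning Infinity):

```python
import math

def sigmoid(x):
    # Never call exp() with a large positive argument, so it cannot overflow,
    # whether or not the runtime supports IEEE 754 infinities.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)        # x < 0 here, so 0 <= e <= 1 (may underflow to 0.0)
    return e / (1.0 + e)

print(sigmoid(-1000))  # 0.0 -- no exception
print(sigmoid(1000))   # 1.0
```

                    The same trick would work in Elixir, but the point stands that the naive, textbook formula shouldn’t blow up.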

                    Core has had the opportunity to fix this for a long time, but has been spending cycles elsewhere.

                    1. 5

                      Elixir has a bunch of math functionality inherited from Erlang, and hasn’t bothered to fix any of the problems around it.

                      I found no tickets in the Erlang (where BEAM issues belong) or Elixir bug trackers discussing infinity.

                      1. 2

                        For all the supposed number crunching going on, nobody’s requesting working IEEE doubles?

                        1. 2

                          Not really, as most of the “interesting” part of IEEE is already there. What’s not supported are qNaN and infinities (sNaN is effectively supported, since its behaviour is exactly what Erlang does: throw an exception). All other values are fully supported in Erlang. And while an exception on infinity can be irritating, in most situations you don’t mind, because an infinite result would probably be garbage to you anyway. So throwing early probably provides clearer information, especially with Erlang’s error-isolation approach.
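                          For readers who haven’t run into the distinction: in a runtime that does implement the special values, overflow and undefined operations produce an infinity and a quiet NaN that silently propagate instead of throwing. A quick Python illustration (CPython’s plain float arithmetic follows IEEE 754 here, even though its math.exp throws like Erlang’s):

```python
import math

inf = 1e308 * 10            # overflow in plain float arithmetic -> inf, no exception
nan = inf - inf             # undefined operation -> quiet NaN
print(math.isinf(inf))      # True
print(math.isnan(nan))      # True
print(math.isnan(nan + 1))  # True: the qNaN silently propagates through arithmetic
```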

                  2. 2

                    The serious numerical story in Elixir doesn’t exist because core doesn’t care and, frankly, most Elixir folks are basically just doing web stuff

                    That perfectly describes how I got into Elixir. Phoenix got me into the ecosystem, but I branched out pretty quickly once I understood the power of the OTP model. About two years ago I started looking into writing drone flight control software in Elixir and ran into a huge hurdle with the numerical processing portion of it; I was fiddling around with NIFs and things, but it felt like I was just compromising the robustness of the system by having it call out to my own C code.

                    The Nx project has me very very excited that the project I put on the shelf might be viable now!

                  1. 4

                    I’m happy the tech stack is working for the author, but in my experience that tech stack has way too many components for my one-man projects. Especially the devops pieces. I’ve given up on using AWS in my own projects; even though I’m experienced with AWS and write CloudFormation configs regularly, it’s just not worth it. It’s so much easier to just deploy to a vanilla Linux server, using a Makefile for automation. I stopped using Docker for similar reasons. I do like GitHub Actions.

                    To each his own. My hypothesis is that the complexity of a lot of tech stacks today hurts more than helps.

                    I’ll complain about Django some other time.

                    1. 1

                      How many pieces would typically go into one of these deployments? Db, web processes, message bus…? What do you use for monitoring and crash recovery and such? (Just curious :) )

                    1. 2

                      Very interesting! Congratulations on getting it this far.

                      I think there is a clear case for lightweight preprocessor-style SQL wrappers, a bit like Sass for CSS or various templating languages for HTML. This is clearly more ambitious - if I were to use Preql, I would likely output SQL files for manual inspection and then install that code as views or functions in the db itself.

                      since the functions (and the rest of the code) are part of the client, and not the server, you can use version control, such as git, to keep history, code review, accept pull-requests, and so on.

                      I think it’s very important to keep SQL code under version control, and I know the (perceived) difficulty of doing so is a common argument against keeping logic in the database - but there is a rather simple and elegant solution, namely that pioneered by subZero/PostgREST: https://docs.subzero.cloud/managing-migrations/

                      I’m very curious about what the description “relational programming language” means to you. That term is often taken to mean languages in the Prolog tradition (ie logic programming). SQL and relational algebra/calculus is similar, but more restricted. IMO a common failure of many attempts to wrap or improve on SQL is that they discourage “relational thinking”, ie in terms of sets and relations between sets. Any thoughts on this and how Preql fits into the picture?

                      1. 1

                        if I were to use Preql, I would likely output SQL files for manual inspection and then install that code as views or functions in the db itself.

                        I understand your reluctance to trust the magic. When I was using C in the early days, I always wanted to see exactly what Assembly code it produced, to make sure that it’s correct and efficient. But as the compilers got better, and I got busier, I stopped that altogether.

                        I’m very curious about what the description “relational programming language” means to you

                        I think the magic of SQL, which makes many people love it despite all of its downsides, is its heavy reliance on relational algebra, i.e. working with sets and using joins. I took care to put this kind of algebra front and center, so there are special operators for selection (i.e. where), projection (i.e. select), set operations (&, |), and so on. I also put a lot of effort into making joins convenient, and I plan to keep improving them.

                        … Prolog tradition (ie logic programming). SQL and relational algebra/calculus is similar, but more restricted.

                        Do you think it’s technically possible to implement real-world logic programming using SQL? Or is it too restricted for it? (of course, anything can be done with a Turing machine, but I mean natively in the language)

                        1. 1

                          I understand your reluctance to trust the magic.

                          Partly that, but mostly because I prefer to store logic in the db (as I hinted above).

                          I think the magic of SQL, which makes many people love it despite all of its downsides, is its heavy reliance on relational algebra,

                          Agree!

                          I took care to put this kind of algebra front and center, so there is a special operator for selection

                          Would you say then the resulting language is mostly a sugared SQL, or a separate language in the relational model tradition, which “compiles down” to SQL? I think both are interesting and worthwhile! But it seems to me that all the quirks of SQL make the latter approach pretty hard, if we want to make full use of SQL-as-it-exists. And I understand that’s why you provided an escape hatch to write arbitrary SQL - which seems useful but makes me wonder again if we can actually go beyond “more succinct SQL” as long as we’re working with SQL databases.

                          Do you think it’s technically possible to implement real-world logic programming using SQL?

                          With recursive CTEs you can at least theoretically see how it could be done, but a) I don’t think it could be made efficient, b) Prolog is not purely declarative, the procedural interpretation is integral to the language, and that would be very hard to model. Datalog on the other hand is precisely a logic language intended for database programming. I have been looking at the Datalog Educational System as a practical way to compare SQL, relational algebra, relational calculus and Datalog. The pyDatalog project also claims to be able to run Datalog queries on SQL databases, but I haven’t tried it or looked closely at it.

                          There have also been some database languages based on set/list comprehensions as they are known from Haskell and Python (and before that Miranda etc). The most successful one seems to have been CPL (Collection Programming Language; here’s a paper that describes it). I think set building notation might be the most intuitive entry into all these paradigms, and SQL, Prolog etc could be described in relation to this foundation. Just something I’ve been thinking about lately.

                          1. 1

                            Would you say then the resulting language is mostly a sugared SQL,

                            I guess that depends: would you say that C is mostly sugared assembly? There is nothing you can do with C that you can’t do with an assembler. And yet the comparatively high-level nature of C, and its expressiveness, end up making C programming better in almost every way.

                            For example, here’s code that you can write in Preql:

                            func apply_twice(f, x) = f(f(x))
                            func add1(x) = x + 1
                            print [1, 2, 3]{apply_twice(add1, item)}
                            // result is [3, 4, 5]
                            

                            This type of coding just isn’t possible in SQL, and opens up a lot of options for code-reuse, and having a real standard library, like Python does.
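                            For comparison, a rough equivalent of that Preql snippet in Python:

```python
def apply_twice(f, x):
    return f(f(x))

def add1(x):
    return x + 1

print([apply_twice(add1, item) for item in [1, 2, 3]])  # [3, 4, 5]
```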

                            all the quirks of SQL makes the latter approach pretty hard

                            They definitely make it harder than it should be, but don’t entirely prevent it.

                            With recursive CTEs you can at least theoretically see how it could be done

                            Recursive CTEs have a very glaring deficiency. They can only UNION on the entire tuple, which means that if you want to keep track of depth in your search (and you usually would), you can’t prevent nodes from being visited twice.

                            I think most major databases offer alternative ways to do graph search, but CTEs have very limited utility.
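                            The tuple-level UNION behaviour is easy to demonstrate with a cyclic graph; here’s a sketch using Python’s sqlite3 (the table and column names are made up), where carrying depth in the tuple re-admits an already-visited node:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE edges(src TEXT, dst TEXT);
    INSERT INTO edges VALUES ('a','b'), ('b','c'), ('c','a');  -- a cycle
""")

rows = con.execute("""
    WITH RECURSIVE walk(node, depth) AS (
        SELECT 'a', 0
        UNION                       -- dedupes on the whole (node, depth) tuple
        SELECT e.dst, w.depth + 1
        FROM walk w JOIN edges e ON e.src = w.node
        WHERE w.depth < 5           -- manual cap, or the cycle recurses forever
    )
    SELECT node, depth FROM walk
""").fetchall()

print(rows)
# 'a' shows up at depth 0 AND at depth 3: UNION compares whole tuples,
# so a different depth makes a "new" row and the node gets visited again.
```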

                            Thanks for the links, and for your thoughtful response!

                      1. 4

                        Reading The Relational Model was interesting, because while the relational model has become so ingrained in us over the years, it wasn’t patently obvious back then:

                        • SQL is arguably a bastardization of what Codd intended. It isn’t close to RMv1, let alone RMv2. The biggest axe he had to grind was with duplicate rows in SQL, which he saw as a violation of the model (data as identity). Codd had some… interesting ideas I’m not sure I agree with, like four-valued logic. That only makes sense to me in the context of manual data entry - perhaps there’s a lost opportunity for types instead.
                        • There were other data models vying for attention. I forget what they were, but he does address them. Many seem like alternate twists on relational/hierarchical models; not quite NoSQL.
                        • He is dated/grounded by his assumption of users at terminals on mainframes interacting with the system that way. Unless you have a background in IBM mainframes, the entire indicators section seems puzzling.

                        I need to get around to reading Date. I’ve had some ideas floating in the back of my head around types and relational data for a while now…

                        1. 2

                          The Third Manifesto might be my favorite CS book (that or Project Oberon or maybe Clause and Effect).

                          1. 1

                            There is a weird discrepancy in SQL between the elegance and solidity of its foundations and the actual technology as it stands.

                            1. 1

                              Sadly, we’re so stuck in SQL land it’s very hard to get out. (And no, most replacements like NoSQL are worse).

                              I strongly recommend https://www.oreilly.com/library/view/sql-and-relational/9781491941164/

                              “SQL is full of difficulties and traps for the unwary. You can avoid them if you understand relational theory, but only if you know how to put that theory into practice. In this book, Chris Date explains relational theory in depth…”

                              Some of his advice seems extreme… but extreme benefits arise from sticking to it.

                            1. 9

                              I would love to read a thread where someone explains why the app is so gigantic to begin with. It does not do a whole lot, so what is all this code doing?

                              1. 19

                                The author explains this in a reply here: https://mobile.twitter.com/StanTwinB/status/1337055778256130062

                                The app looks simple from the user perspective. But on the global scale the business rules are insanely complicated. Every region has custom rules/regulations. Different products have different workflows. Some regions have cash payments. Every airport has different pickup rules…

                                See also the people replying to that discussion, they talk about how unreliable networking means they can’t do this stuff on the backend.

                                1. 3

                                  they talk about how unreliable networking means they can’t do this stuff on the backend.

                                  Uber is infamous for having over 9000 microservices. They even had to “re-invent” distributed tracing, because NIH syndrome or something (Jaeger, now OpenTracing). I really, really doubt they do it all on the client. They have to make sure the transaction is captured anyway, and without a network, how can you even inform drivers?

                                  1. 4

                                    Presumably they verify everything on the backend anyway because you can’t trust the client. My guess is that the network is reliable enough to submit a new trip, but not reliable enough to make every aspect of the UI call out to the backend. Imagine opening the app, waiting for it to locate you, then waiting again while the app calls out to the network to assess regulations for where you are. Same thing picking a destination. That latency adds up, especially perceptually.

                                2. 12

                                  Yeah that’s the part that gets me, it’s like the dril tweets about refusing to spend less on candles. Just write less code! Don’t make your app the Homer Simpson car! What is the 1mb/week of new shit your app does good for anyways? It’s easy, just play Doom instead of typing in Xcode.

                                  I don’t have hypergrowth mentality though, I guess that’s why Uber rakes in all the profits.

                                  1. 6

                                    I guess that’s why Uber rakes in all the venture capital dollars to subsidize their product

                                    Fixed it for ya ;-)

                                    1. 1

                                      I would love to see Chuck Moore’s (RIP) take on 1 MB/week. He might just have a seizure.

                                      1. 5

                                        Thankfully, Chuck Moore is alive.

                                        1. 6

                                          Ah, man, well I have egg on my face. I feel quite silly. I’m going to slink into a corner somewhere.

                                    2. 5

                                      I recall a few years ago someone disassembled Facebook’s (I think) Android app and found it had some stupendously huge number of classes in it. Which led to a joke about “I guess that’s why their interviews make you implement binary trees and everything else from scratch, every developer there is required to build their own personal implementation every time they work on something”.

                                      Which is cynical, but probably not too far off the mark – large organizations tend to be terrible at standardizing on a single implementation of certain things, and instead every department/team/project builds their own that’s done just the way they want it, and when the end product is a single executable, you can see it in the size.

                                      1. 4

                                        I wonder if there could also be some exponential copy-paste going on, where instead of refactoring some existing module to be more general so it satisfies your need, you copy-paste the files and change the bits you need. Then somebody else comes along and does the same to the package that contains both your changed copy and the original package, so there are now 4 copies of that code. The cancerous death of poorly managed software projects.

                                        1. 2

                                          Scary… and probably very true.

                                    1. 13

                                      I don’t agree. I’ve been using Rails for 10+ years (at least 8 with some confidence), and am still actively using it on a daily basis on one project. I’ve sometimes used the standard conventions, other times I’ve modified (or rather extended) them pretty extensively. Clearly Rails has got a lot of things going for it, and many of the shiny novelties that have been touted as better alternatives weren’t. Still, I think Rails has big shortcomings and articles like this one fail to imagine how, or even that, they could be improved upon.

                                      The main argument presented here seems to be:

                                      With Rails, you don’t have to make any of the decisions above.

                                      The author doesn’t try to argue that Rails makes good decisions, only that it’s great that we don’t have to make them ourselves.

                                      The second and final point is:

                                      Rails Helps Maintainability

                                      The argument being that libraries tend to be maintained over long periods of time, which is true in many cases, and I agree that the ecosystem is Rails’ very strongest point. (This is mostly true for hands-on, everyday things like “connect to Stripe” or “upload to S3”. Other ecosystems are better at other things.)

                                      Then the author comes back to the point about strong conventions which he claims help “fewer parts of an app to get crufty as time goes by”. I don’t agree. I think Rails’ conventions actively encourage bad practices, which harms maintainability and indeed leads to a crufty codebase.

                                      These problems stem first from ActiveRecord, which is a weak abstraction and almost invariably leads newcomers down the wrong path, after which they must learn a whole lot of “best practices” to implement decent database access patterns. Secondly, Rails does nothing to encourage users to isolate domain logic and keep it sanely structured. (An unstructured collection of service classes is a small but insufficient step towards rectifying this problem.) The result is that, after a short honeymoon of “getting up and running in no time”, it becomes harder to focus on the app domain and “the problems [we] need to solve”, because the domain logic tends to be spread out all over the place.

                                      These arguments have been rehashed for almost as long as Rails has been around. There’s nothing remotely new in what I just wrote. People’s tastes and experiences vary. I’m reluctant to say there is no objectively assessable dimension to this issue, but on the whole we’ll do best to agree to disagree.

                                      1. 7

                                        I’m with you. Although I will concede that most businesses fail, and Rails does allow small teams to move quickly toward usable features for customers. Perhaps marginally improving the company’s chances of success.

                                         The problems all come with success and larger teams: the monster ActiveRecord models containing business logic without domain boundaries, the fact that dynamically typed Ruby underlies it all, the lack of established modularity patterns… These all add up, making for large, inefficient teams after the honeymoon years of startupdom are over. All those ready-made but poor Rails decisions that made starting up a success really come back to bite you.

                                        1. 4

                                          What do you prefer these days?

                                          1. 2

                                             I’m not the commenter you asked, but I have moved from Rails to Phoenix and never looked back.

                                          2. 2

                                            I’m interested to hear which frameworks solve those problems for you!

                                            1. 4

                                              For me, Phoenix with Elixir solves most of these problems. It is still fairly opinionated, but you can easily change those behaviours; it encourages proper separation of application logic, and it uses the repository pattern instead of ActiveRecord, etc. Of course, it is less popular, so hiring is a little bit harder, but if you look within the community (which the author strongly suggests) then there should be no problem finding people.

                                              1. 1

                                                I love Phoenix too! I need more experience with it, but as far as I’m concerned that was a great answer to my question 😄 Cheers!

                                              2. 1

                                                The two main problems I pointed out were AR being a weak abstraction and Rails not doing enough to encourage domain logic to be written in an isolated and structured manner.

                                                For the first problem, my conclusion is that database abstractions are not worth the trouble. There are several that are better than AR, but SQL databases are actually pretty hard to abstract over, at least if you want to full advantage of what they can do. Instead, treat the database as your framework. (And by “database”, I mostly mean Postgres.) This approach to writing applications is still pretty common in large enterprises (from what I gather) and “modern” devs like to make fun of it, but I think it’s more relevant than ever. If you put PostgREST or PostGraphile on top of that to expose a REST/GraphQL API, this setup fits right into a modern stack.

                                                For the second problem, I think domain-driven design goes in the right direction (although it’s been obfuscated by being lumped together with particular technologies and bloated architectures, not least microservices). I see DDD as an offshoot of the bigger field of (abstract) data modeling or concept modeling, which was to a big extent ruined by heavy UML standards, consultants, frameworks… But I think it’s essential, and I would measure a language or framework by how clearly and directly it lets you express the domain model. (Domain Modeling Made Functional is one approach to this.) Yes Rails tries to make some things more “declarative”, which is good, but again, it has little to offer when a system grows more complex.

                                                I would use Erlang/Elixir (or Gleam!) if I ever had to create a system of the kind that the BEAM was meant for, but not for a regular CRUDdy web app.

                                                So the short answer is: Postgres, because it helps me focus on and express the domain model!

                                                (@squarism this was in answer to your question too. More concretely, my preferred current stack is PostGraphile + Elm.)

                                            1. 3

                                              I’d almost forgotten about the Alf language and extremely pleased to discover from this post that it has a production-capable successor. Kudos to the author for landing a job with it!

                                              1. 2

Thanks! I thought at first maybe you meant the Algebraic Logic Functional programming language, a predecessor of the Curry language of which I’m also a fan. But then I saw that you linked to the actual Alf in question. Happy to hear you know about it! I’m planning to write more about my experience with bmg and how to use it in production systems, in due time.

                                              1. 3

Nice thread and post! This issue has several dimensions to it, and it’s a bit odd that there is no straightforward answer considering that these are problems that every web developer on the planet faces on a daily basis. I’ll just rattle off a few things that I think are or can be part of the solution. CouchDB + PouchDB are great for sharing data between backend and clients in a conceptually very simple way, which is the first problem mentioned in the post.

However, the next insight is that SQL is really much better than a lot of people realize, and having to transform data in a middle layer (often in a web framework, or on the client to produce a workable model for the display logic) takes up an unreasonable amount of work in many projects. Like the author points out here, we’re constantly throwing away and re-building information. For me, the best solution for 80% of web applications is to use a database schema-driven REST/GraphQL layer like PostgREST or PostGraphile on the backend, and preferably something with strong typing on the client (TypeScript, Elm…). That setup wipes out a big class of bugs and encourages solid data modelling in the database where it belongs. You can get an awful lot done in SQL (with proper use of views and functions), and GraphQL ensures the client gets just enough freedom to control which data it gets.
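To make the “views and functions” point concrete, here is a minimal sketch. It uses Python’s stdlib sqlite3 purely so it runs anywhere; the comment above is really about Postgres, and the schema, table names, and view are all made up for illustration:

```python
# Sketch of pushing data shaping into the database: a view does the join that
# would otherwise live in a middle layer, so clients read display-ready rows.
# Uses stdlib sqlite3 for portability; the same idea applies (more powerfully)
# to Postgres views and functions exposed via PostgREST/PostGraphile.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ada');
    INSERT INTO posts VALUES (1, 1, 'Hello'), (2, 1, 'World');

    -- The "middle layer" transformation lives in the database as a view.
    CREATE VIEW author_feed AS
        SELECT a.name AS author, p.title AS title
        FROM posts p JOIN authors a ON a.id = p.author_id;
""")

# The client queries the view directly; no re-assembly in application code.
rows = db.execute("SELECT author, title FROM author_feed ORDER BY title").fetchall()
print(rows)  # [('Ada', 'Hello'), ('Ada', 'World')]
```

With Postgres, a schema-driven API layer would expose `author_feed` automatically, so the shaping logic is written once, next to the data.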

On top of this, you can use GraphQL subscriptions or live queries (not yet in the official spec) to get an end-to-end reactive system with minimal risk of meddlesome in-between code messing things up. This architecture is also operationally really simple and not harder to deploy than your regular MVC framework or what have you.

                                                Like one of the commenters in the thread pointed out, having centralized authoritative state on a server is not something we can completely abstract away, unless there really is no expectation that the app be able to retain data, or for users to log in on different devices etc. Overly optimistic UI updates (which pretend the data has been written before it’s actually stored server-side) also tend to be more confusing than helpful. Explicit calls to write data to the backend combined with data push from the server to the client strikes a good balance in my opinion, and is much better than callback-based UI updates.

                                                For unusually demanding apps and systems, I would consider a backend based on Kafka and ksqlDB to build up views and push data to the browser. I think these are exciting and useful tools, but overkill for most projects. I wish someone would maintain PipelineDB to do similar things directly in Postgres.

                                                And finally, DataScript is a nice answer to the part of the problem which is how to make rich queries over data that is already in memory on the client. Unfortunately, I don’t see any obvious way of integrating it with the stack I’ve outlined above. But I would definitely reach for it if I was building an app that does complicated things with a lot of in-memory data.

                                                1. 5

                                                  Not sure if I’m misunderstanding something, but sounds like you’re planning on re-implementing OTP in Gleam? Which would be impressive, but also sounds like users will not be able to make use of the awesomeness of Erlang OTP. Or will you be able to bridge the Gleam abstractions to Erlang’s OTP?

                                                  1. 1

                                                    We have reimplemented OTP in Gleam, yes.

                                                    I tried to explain in the article that it maintains full compatibility with Erlang’s OTP (in the “primary goals” section) so Gleam users can use Erlang OTP and Erlang users can use Gleam OTP transparently and without problems. Sorry if I didn’t make that clear!

                                                    1. 2

                                                      On re-reading, yes it is clear! I was probably just too surprised to fully take it in. Again — impressive!

                                                  1. 2

The authors of the page made it unusable in my case. There was no way to scroll the page down to read the actual article. There’s no scroll bar, so I couldn’t use my mouse (which doesn’t have a scroll wheel), and the keyboard doesn’t work in Firefox or Safari (but does in Chrome). I’m having flashbacks to the late 90s/early 2000s with “This site works best in …” nonsense.

                                                    1. 1

                                                      Totally agree they made a bad choice in implementing this anti-web page. That said, cursor keys do work for me on Firefox.

                                                      1. 1

                                                        It’s annoying, but you have to put your cursor on the article and not the margin.

                                                      1. 2

                                                        One of the better attempts at explaining what monads are, actually, really, in essence, that I’ve seen, by way of fairly easy-to-understand concepts from abstract algebra (rather than jumping straight into category theory). And I especially enjoyed the historical background.

                                                        1. 7

Continue personal learning on database internals. I started about 6 months ago and thought it would only take a few months. Now I’m realizing that there is so much I do not know - it’s amazing how much there is to learn! It’s been incredibly helpful for work, as I’ve been able to diagnose seemingly arcane issues with our Postgres performance.

I found a German professor who uploads YouTube videos, and I’ve been using those to supplement my knowledge as I go through a book.

                                                          1. 2

                                                            Would you care to share a link to said professor? Sounds interesting!

                                                              1. 1

                                                                Thanks!

                                                          1. 1

                                                            Have had my eye on Neuron for a while and very much sympathize with its philosophy. My #1 criterion is using any editor to work with plain-text files (preferably md) and get a thin layer of extra functionality on top, and Neuron is (surprisingly) the only real fit so far. This release finally pushed me to install Neuron.

                                                            I have one question: I like to keep my notes in a (shallow) hierarchical directory structure, so that they are easy to navigate using any regular file browser, cli or gui. Does Neuron only work with a completely flat set of files, or can it pick up notes from subdirectories as well? If not, is that something you’d consider supporting @srid?

                                                            1. 2

                                                              I’d be open to it. But we will have to first decide on the exact semantics. (for example, if you have foo/bar/qux.md what is the unique ID of that note? “qux” or “foo/bar/qux”? The ID is used to link to it; so would it be [[qux]] or [[foo/bar/qux]]?).
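To make the trade-off concrete, here is a hypothetical Python sketch - the file names and helper functions are invented for illustration and have nothing to do with Neuron’s actual implementation:

```python
# Two hypothetical ID schemes for notes in subdirectories: flat IDs (basename
# only) are short to link with [[qux]] but can collide; full-path IDs like
# [[foo/bar/qux]] are unambiguous but verbose. File names are made up.
from pathlib import PurePosixPath

files = ["foo/bar/qux.md", "misc/qux.md", "index.md"]

def flat_ids(paths):
    """Map basename-derived IDs to paths; collisions make links ambiguous."""
    ids = {}
    for p in paths:
        ids.setdefault(PurePosixPath(p).stem, []).append(p)
    return ids

def path_ids(paths):
    """Full-path IDs are always unique, at the cost of longer links."""
    return {str(PurePosixPath(p).with_suffix("")): p for p in paths}

print(flat_ids(files))  # "qux" maps to two files, so [[qux]] is ambiguous
print(path_ids(files))  # [[foo/bar/qux]] vs [[misc/qux]] disambiguate
```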

                                                              We can discuss this here: https://github.com/srid/neuron/issues/309

                                                              1. 1

                                                                Thanks, I’ll join in the discussion!

                                                            1. 40

I want a Wikipedia for time and space. I want people to collaborate on a globe, marking territories of states, migrations of peoples, important events, temperatures, crop yields, trade flows. I want to pick which layers I want to see at one time. This sort of thing would give us a clearer, more integrated view of history, and show how connected we all are. A volcano eruption in Mexico led to crops failing around the world. New World crops led to an explosion of people in southern China, a previously less inhabited place. A Viking woman met natives in present-day Canada and also visited Rome on pilgrimage.

                                                              1. 4

Excellent idea. I have a concept for something similar but less… featured: time- and space-tagged news.

                                                                1. 3

I also think about this from time to time - in my view it’d be kinda like Wikimapia for history. I usually wonder how country borders and armies could be represented.

                                                                  1. 3

                                                                    The Seshat global history databank is a bit similar to this (great) idea.

                                                                    “systematically collects what is currently known about the social and political organization of human societies and how civilizations have evolved over time”

                                                                  1. 36

                                                                    On the interwebs the only articles I find are biased

                                                                    Everyone is biased. I don’t think it’s useful to seek an “objective opinion” - opinions are by their nature subjective.

                                                                    As someone who has only used Kubernetes professionally, my thoughts basically boil down to:

                                                                    • YAML is not a very good configuration format, and is the norm, leading to a lot of unnecessary friction
                                                                    • The system itself is very complex, even in its most simple incarnations, but it is also very powerful
                                                                    • If you need the kind of scale, reliability, and automation Kubernetes provides, you need Kubernetes or something else like it - at that point it’s down to preference and what integrates well with your stack
                                                                    1. 11

                                                                      YAML is not a very good configuration format, and is the norm, leading to a lot of unnecessary friction

And aside from YAML as a configuration format, templated YAML (for example with Helm charts, but many methods exist) is really hard to deal with, and IMO a strong sign that this kind of declarative configuration is not a good fit for a tool of k8s’ complexity.

“Let’s do declarative configuration so you don’t need to program and it’s easier, but then let’s programmatically generate the declarative configuration” … hmm, okay…

                                                                      I’m not a huge fan of YAML as such, but I think the misapplication of a “one size fits all” declarative configuration format is probably the biggest issue.

                                                                      1. 2

Ehh, I think templating is fine in moderation. I don’t particularly like Helm; you can get 90% of the functionality with plain POSIX sh and heredocs. Having spent just shy of a year designing a company-wide OpenShift (RH K8s) deployment setup, I can say that YAML is a terrible format for configuration, templates are great in moderation, and, jesus christ, build your own images nightly. K8s is insanely complex - it’s effectively a distributed operating system, and so it has all the complexities of one plus the complexities of being distributed.

                                                                        I also agree, YAML isn’t the worst in small doses, but when literally everything is a .yaml, it’s nothing but spaghetti garbage using strings/ids to tie together different disconnected things potentially across many different files.

                                                                        1. 1

                                                                          And aside from YAML as a configuration format, templated YAML (for example with Helm Charts, but many methods exist) is a really hard to deal with, and IMO a strong sign that this kind of declarative configuration is not a good fit for a tool of k8s’ complexity.

                                                                          In my opinion templating yaml has been approached in the wrong way, or at least in an incomplete way.

Referring to the Helm charts example: since YAML (which is a superset of JSON) is a textual representation of objects, oftentimes what would be better is not templating the text representation, but attaching a generated subtree to an otherwise-leaf node.

Example: when including a YAML file into a chart template you have to “manually” call the indent macro to make sure the YAML parser is happy. But if we were reasoning in terms of trees of objects, we could just say: this placeholder object here shall be replaced in its entirety with this other tree there (which you get by parsing that file), and it will become a subtree of the initial tree.

                                                                          Indentation would then be “automatic” because the next logical step would be to re-serialize the object and feed it to the kubernetes apiserver.
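The subtree idea can be sketched in a few lines of Python. This assumes PyYAML is available, and the manifest fragment and placeholder value are made up for illustration:

```python
# Sketch: instead of templating YAML text (and manually indenting), parse both
# documents into object trees, splice one in as a subtree, and re-serialize.
# Assumes PyYAML; the manifest and PLACEHOLDER marker are invented examples.
import yaml

base = yaml.safe_load("""
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers: PLACEHOLDER
""")

fragment = yaml.safe_load("""
- name: web
  image: nginx:1.25
  ports:
    - containerPort: 80
""")

# Replace the placeholder leaf with the parsed subtree - no indent macro needed.
base["spec"]["template"]["spec"]["containers"] = fragment

# Serialization re-derives indentation from the tree structure automatically.
print(yaml.safe_dump(base, sort_keys=False))
```

The re-serialized output is what you would feed to the apiserver; indentation falls out of the tree shape rather than being a templating concern.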

                                                                        2. 5

                                                                          I agree there is no escaping subjectivity, but would add: Not everyone is equally biased or honest with regard to any particular issue. Some opinions are well-informed, some aren’t. Some people’s advice is worth hearing on some particular topic, because they have overcome at least some of their biases through diligent, open-minded work, or because their biases are similar to one’s own (yes, that can be very useful - “I’d like to know if people who, like me, for some reason like X also would recommend Y”). Some people’s advice is irrelevant on a given topic. Getting advice from a range of people on here is likely to be more valuable than getting marketing disguised as technical blogging, for example.

                                                                        1. 8

As somewhat of a fanboy, let me point out that Pop!_OS has stayed clear of Snap since the start, although they haven’t spelled out their reasons very clearly (I think they just consider it clunky).

                                                                          1. 1

                                                                            I read this and thought: Materialize sounds a lot like Noria*! Well, there’s an orangesite comment from a Materialize engineer that goes into more detail.

                                                                            * Research DB from MIT, previously on lobste.rs

                                                                            1. 2

                                                                              Also similar to PipelineDB (https://github.com/pipelinedb/pipelinedb), which got acquired by Confluent.

                                                                            1. 1

                                                                              Great footnote:

                                                                              It is interesting to note that most of the central personalities first met through an unofficial reading group formed by an enthusiastic amateur named Mervin Pragnell, who recruited people he found reading about topics like logic at bookstores or libraries. The group included Strachey, Landin, Rod Burstall, and Milner, and they would read about topics like combinatory logic, category theory, and Markov algorithms at a borrowed seminar room at Birkbeck College, London. All were self-taught amateurs, although Burstall would later get a PhD in operations research at Birmingham University.

                                                                              Also worth mentioning that Donald Michie, who worked with Turing at Bletchley Park, was responsible for bringing several of these people together in Edinburgh.

                                                                              1. 2

                                                                                I found it fascinating in itself that HOPL is so infrequent:

                                                                                Past conferences were held in 1978, 1993, and 2007. The fourth conference was originally intended to take place in June 2020, but has been postponed. [1]

                                                                                It’s a twice-in-a-career opportunity.

                                                                                [1] https://en.wikipedia.org/wiki/History_of_Programming_Languages

                                                                                1. 1

                                                                                  It doesn’t sound so bad yet. According to the SIGPLAN website:

                                                                                  We are working with SIGPLAN to identify a new time and place for the physical HOPL meeting, probably in the first half of 2021.