Threads for kixiron

  1. 28

    For those looking for the lies, they’re listed at the end.

    While I don’t disagree with most of the article, I think it’s worth stating the contrapositive case: golang is a language optimized for cranking out web services at Google with moderate to low business logic complexity where correctness is generally pretty ill defined. Tasks very similar to that are also suitably cranked out in golang. It’s not perfect, and certainly there are certain kinds of error that you’d like to be able to prevent or predict (for example with integrated modeling). I think of it as a compiled python2, with the additional goal that there really actually should only be one obvious way to do it, and readability is achieved by explicitness even when that is very, very verbose. The further you get from that kind of work (need for rich data structures, non-networked boundary, crappy network, complicated concurrency, real-time and high performance work, numerical work or any other domain where expressiveness is a huge performance or productivity win), the worse golang will fit your needs.

    I don’t consider golang a joy, but I think it is largely successful at fulfilling that mission. Not optimal, certainly, but successful. And there’s a lot of code to be written that largely slots into the golang shaped hole.

    Somewhat ironically, it looks like Python is overtaking golang on the safety story with a much more expressive optional type system.

    As a practical marker, I think the bugginess of kubernetes shows that writing kube bumps up and slightly over the complexity level that golang is fitted for.

    1. 16

      This is largely what my view on Go has evolved to over the years since it fully addresses the problem domain that it’s designed for: make a language that’s as simple and explicit as possible with the goal of trivializing the individual programmer’s impact on the project. If any given Go programmer is just as useful as any other and the language is easy to pick up in the first place, people are entirely expendable. It addresses Google’s internal needs perfectly, it just so happens that people outside of Google also use it. However this doesn’t free the language from criticism as it’s still rather… shoddy in the design realm, but any of those criticisms would fall on deaf ears. A much more relevant (and concerning, in my opinion) criticism is on Google’s approach to people and the use of a language to commoditize them.

      1. 10

        A much more relevant (and concerning, in my opinion) criticism is on Google’s approach to people and the use of a language to commoditize them.

        This has been a corporate fever-dream for decades, at least. They tried to do it with Java, too.

        I had a professor who taught a software engineering course. This was a dude who had worked in industry. He claimed that eventually, programming would be similar to a food service job. A designer would make some UML models and stuff, and then hand them off to highschool kids who would write the code for minimum wage. Among the guy’s other ludicrous claims: eventually we’ll be writing programs in XML! I thought he was kind of a silly assclown. He did teach an excellent course on databases however.

        1. 18

          Among the guy’s other ludicrous claims: eventually we’ll be writing programs in XML!

          We kinda do! The actual technology is XML’s easier-to-type cousin YAML, but we absolutely write programs in a data structure language.

          1. 5

            Also XAML.

            1. 3

              but we absolutely write programs in a data structure language.

              Yeah, and we absolutely hate it (looking at you, Ansible)

              1. 2

                Indeed we do (looking at you, CI files)

                1. 3

                  Shit, that, too. Not only is it programming with YAML, it’s programming a sort of state machine you can’t test anywhere other than in production.

                  I don’t think I have a truly love-hate relationship with anything as much as I do with CI

              2. 3

                We kinda do! The actual technology is XML’s easier-to-type cousin YAML, but we absolutely write programs in a data structure language.

                Point well taken. I’ve written my share of ansible config. In another sense, the “equivalence of code and data” sense, a data representation language is just code for a very limited kind of machine. We hope it’s limited, anyway! TCP packets are programs that run on the machine of a TCP stack.

                I don’t think that’s what he had in mind. I think he was imagining something more like C++ but with XML tags and attributes to represent the syntax.

            2. 9

              Replacing scarce and hard-to-train programmers with easy-to-learn languages or tools has been a theme for as long as I have been in the commercial software development business (late 90s). The craze for ML-assisted development is just the latest iteration of that.

              1. 3

                The opposite of this is if every programmer feels like an amazing magician and writes their own DSL in Common Lisp and uses macros everywhere.

                This can be good for the self-esteem of individual programmers, but is horrible for hireability, teamwork, and being able to tell what a screenful of code does without extensive digging.

                1. 1

                  This was always my suspicion about Go. The commoditization. On the positive end, that could be a strength for open source projects as it lowers the bar for participation. I can only imagine how CoPilot and the likes will amplify this over the next coming decades.

                  My strong intuition is that within a generation, most ‘coders’ will only submit pull requests (or issues) for features, fixes, etc., along with some bots, and the new paradigm of social-driven automation (a hybrid of human coders and AI bots) will update the PR by generating the code and even the tests. It will be as much an art as a science to submit good issues or PRs that get the results you want. Whoever ‘fulfills’ the issue or PR (updating it to completion) will be rewarded for it, sorta like a bounty or something.

                2. 9

                  web services […] with moderate to low business logic complexity where correctness is generally pretty ill defined

                  For better or worse, this describes a sizable percentage of software projects.

                  1. 8

                    golang is a language optimized for cranking out web services at Google with moderate to low business logic complexity where correctness is generally pretty ill defined

                    Sounds like the same pitch as PHP, and look where the public opinion of that language went – fractal of bad design and all that.

                    Sometimes it feels to me like Go is the place where people can follow all their old PHP practices at slightly better performance, legitimized by the fact that it’s “the language Google uses internally”.

                    1. 3

                      PHP has its reputation, and still runs the majority of the web. For better or worse.

                      I wonder if in ten years people will look at Go like we look at PHP now.

                    2. 6

                      I really don’t like go. For the reasons the OP lays out and more. But your contrapositive case is dead on.

                      When you have to crap out web services with stringy typing, json, and lack of abstraction - go is fine.

                      And for completeness - the Go linker and its ability to ship static binaries trivially is fantastic.

                    1. 2

                      When you write this code:

                      OperatorEdges(src, dest) :-
                          OperatesEvent(src),
                          ChannelsEvent(src_id, dest_id, scope_addr),
                          [..scope_addr, src_id] == src,
                          OperatesEvent(dest),
                          [..scope_addr, dest_id] == dest.
                      

                      Is this intended as pseudo-code, or is this real code for a language within Timely Dataflow, or is this a differential-datalog?

                      1. 1

                        It’s pretty much ddlog code; I just added the splat patterns ([..scope_addr]) since they’re easier to write and understand

                      1. 6

                        I spent the last two weeks reading about incremental computing, and how to apply that to incremental data pipelines and somehow I totally missed Differential Dataflows and DDLog. The graph example is spot-on, and this article breaks down every bit in a very clear way. Looking forward to the next one!

                        1. 5

                          I’ve been reading about incremental computing in the past week or two as well! One thing I can’t quite wrap my head around is how the “incremental” part is actually done. The “track which part changed and recompute” bit is pretty straightforward, but I can’t find good material that explains the “incremental” part to me.

                          For example, this article talks about scores.average(): in “incremental” fashion, if a new student score is added, the system somehow derives that it just needs to compute previous_average_scores * previous_count / new_count + new_added_score / new_count, or (previous_total_scores + new_added_score) / new_count. How does the system decide which intermediate values to keep track of and compute?
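                          The running-average case can be sketched directly (a hand-written illustration of mine, not ddlog’s actual machinery – the class and method names are made up): the operator keeps count and total as internal state, so each insert is O(1) instead of a rescan of all scores.

```python
class IncrementalAverage:
    """Average that updates in O(1) per inserted score.

    The trick: keep (count, total) as state instead of the raw list,
    so an insert never rescans previously seen scores.
    """

    def __init__(self):
        self.count = 0
        self.total = 0.0

    def insert(self, score):
        self.count += 1
        self.total += score
        return self.total / self.count
```

                          Inserting 80 and then 90 yields 85.0, the same result as recomputing sum([80, 90]) / 2 from scratch.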

                          In more complicated cases, for example scores.top(10).average(), how can an incremental computing system derive that the minimal work should be previous_average_scores - lowest_score_so_far / 10 + new_added_score / 10, and keep track of lowest_score_so_far in the system?
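                          For the top(10) case, one plausible hand-written answer (again a sketch of mine, not something an incrementalizer derives automatically) keeps a min-heap of the current top k plus their running total. Note that supporting deletions would require retaining even more state, which hints at why deriving the “minimal work” automatically is hard.

```python
import heapq

class TopKAverage:
    """Average of the k largest scores, updated in O(log k) per insert.

    State kept: a min-heap of the current top-k and their running total.
    (Deleting a score would need more state, e.g. all scores ever seen.)
    """

    def __init__(self, k):
        self.k = k
        self.heap = []      # min-heap; heap[0] is the lowest of the top-k
        self.total = 0.0

    def insert(self, score):
        if len(self.heap) < self.k:
            heapq.heappush(self.heap, score)
            self.total += score
        elif score > self.heap[0]:
            evicted = heapq.heapreplace(self.heap, score)
            self.total += score - evicted
        return self.total / len(self.heap)
```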

                          I assume these issues are solved problems in the incremental computing paradigm, but material on the net is sparse on the details.

                          1. 3

                            I think there’s a variety of degrees to which systems are incremental. Some, like Adapton, are based on having an explicit dataflow and reusing previous computation results, thus avoiding redoing computations unaffected by the change – that’s essentially the same concept as build systems.

                            A truly incremental system, however, would consume a delta (a change) on the input (starting with ∅) and produce a delta on the output for maximum efficiency. In other words, an incremental system would ideally be Δinput → Δoutput instead of input → output, with the key operations of the pipeline all consuming and producing deltas. That seems to be the idea behind differential dataflow.

                            But as you point out, you sometimes (actually often) need to reuse the previous state to calculate the output (whether it’s a delta or not), and in some cases you need the entire input anyway (like doing a SHA1 sum on the inputs). I haven’t seen this spectrum of being incremental articulated clearly in the literature so far, but the idea of differential is certainly that the computations should operate on deltas (differences) as opposed to complete values.
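                            To make the Δinput → Δoutput idea concrete, here is a toy sketch (my own simplification of differential dataflow’s representation, dropping timestamps): collections are multisets of (record, multiplicity) pairs, and a delta has the same shape with signed multiplicities. A linear operator like filter needs no state, while an aggregate like count keeps its previous result and emits a retraction plus an assertion.

```python
# A delta is a list of (record, signed multiplicity) pairs:
# +1 means "record added", -1 means "record removed".

def delta_filter(delta, pred):
    """Filter is linear: an input delta maps straight to an output
    delta, with no retained state and no look at the full collection."""
    return [(rec, diff) for rec, diff in delta if pred(rec)]

def delta_count(delta, state):
    """Count needs state (the running count). On change, it emits a
    delta on its own output: retract the old count, assert the new."""
    old = state["count"]
    state["count"] = old + sum(diff for _, diff in delta)
    new = state["count"]
    return [] if new == old else [(old, -1), (new, +1)]
```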

                            The “How to recalculate a spreadsheet” article has a bunch of links that you might find interesting on the topic. The “Build systems à la carte” paper might be an interesting read for you as well.

                            1. 2

                              Thanks! Equipped with what you said, I re-read the Differential Dataflow paper, and now it is much clearer. Their difference operators work on deltas, and the so-called generic conversion from normal operators to difference operators simply does δoutput = f(input_t + δinput) - output_t. To make it work efficiently in LINQ settings, a few difference operators (especially for aggregations such as sum and count) are implemented manually, because these can work with the δ alone.
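                              In toy Python (my own rendering, not the paper’s LINQ code), the generic conversion versus a hand-written difference operator looks like:

```python
def generic_incremental(f, input_t, output_t, delta):
    """Generic conversion: correct for any f, but it recomputes f on
    the whole new input and diffs against the previous output."""
    new_input = input_t + delta          # lists standing in for multisets
    return f(new_input) - output_t, new_input

def delta_sum(delta):
    """Hand-written difference operator for sum: touches only the delta."""
    return sum(delta)
```

                              Both produce the same output delta (adding [4] to [1, 2] with previous sum 3 gives a delta of 4); only the hand-written one avoids recomputing over the full input.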

                              It is still very interesting, but much less generic / magic than I initially thought it could be.

                            2. 2

                              The biggest thing for incrementality is tracing data interdependence. If relation B draws in data from A and C, changes to those relations trigger a chain reaction, propagating said changes. For aggregates (the .group_by() clause of the post) things are pretty coarse-grained: any change to the input relation(s) will trigger a recomputation of the aggregate, so changes to Test or Student will cause scores to be recomputed. While in a reasonably trivial example like the scores one this seems stupid and/or wasteful, for more complex rules things get exponentially more difficult, since worst-case joins are commonplace within ddlog code.
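                              The chain reaction can be sketched as a walk over a dependency graph (a toy of my own, reusing the relation names from the post’s example):

```python
# Derived relation -> the relations it reads from.
deps = {"scores": ["Test", "Student"], "honor_roll": ["scores"]}

def dirty_set(changed, deps):
    """Everything downstream of `changed` that must be recomputed."""
    dirty, frontier = set(), {changed}
    while frontier:
        # Any relation reading from the frontier is dirtied next;
        # subtracting `dirty` keeps the walk safe on cyclic rules.
        frontier = {d for d, srcs in deps.items()
                    if any(s in frontier for s in srcs)} - dirty
        dirty |= frontier
    return dirty
```

                              A change to Test dirties scores and, transitively, honor_roll.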

                          1. 4

                            I’ve been messing around with implementing async in my microkernel; Rust really makes things a joy to program.