1. 3

    Thanks for posting this bug report - you may have saved me some grief. I administer an affected system.

    1. 1

      If it can help, there are some links with tutorials for fixing the issue by rolling back to a previous version of Grub.

    1. 8

      Daniel Berlin, the author of Google’s AGPL policy, has responded to this rant on Hacker News.

      1. 2

        I’d like to see how he validates a UTF-8 string.

        1. 2

          He wrote a blog post on validating UTF-8 in 2018.

        1. 6

          GnuTLS has had 20 high severity CVEs in the last 3 years.
          OpenSSL, by comparison, has had 2 high severity CVEs, 8 medium severity CVEs, and 13 low severity CVEs.

          1. 3

            If security is the concern, then libressl should be mentioned.

            1. 4

              Where is TLS used when security isn’t a concern? I really wish everyone would just switch from OpenSSL to LibreSSL. Both in downstream projects and in where people choose to send funding.

          1. 2

            Branch predictors and compilers have both improved quite a bit since 2012.
            I tried Godbolt and current versions of Clang and G++ use conditional moves at -O2.
            I think ICC uses a compare but could be misinterpreting the listing.
            Clang and G++ vectorize the summation loop at -O3.

            1. 2

              ICC looks to be doing the same trick mentioned in the SO answer, where it’s recognized that the outer loop and the inner loop can be swapped. This effectively makes data[c] a constant value for most iterations.
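              The interchange is easy to sketch outside of assembly. A rough Python illustration of the idea (not ICC’s actual transformation): since each inner pass over data is independent of the outer counter, its result can be computed once and reused.

```python
import random

# Sketch of the loop-interchange idea described above (illustrative only,
# not ICC's actual output): the branchy benchmark repeats the same pass
# over `data` many times, so the per-element test can be evaluated once
# and the outer loop collapses to a multiplication.

def branchy_sum(data, repeats):
    total = 0
    for _ in range(repeats):          # outer timing loop
        for c in data:                # inner pass, branch on every element
            if c >= 128:
                total += c
    return total

def interchanged_sum(data, repeats):
    # After interchange, each element's contribution is computed once.
    one_pass = sum(c for c in data if c >= 128)
    return one_pass * repeats

random.seed(0)
data = [random.randrange(256) for _ in range(1000)]
assert branchy_sum(data, 10) == interchanged_sum(data, 10)
```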

            1. 1

              Applications of Randomness in System Performance Measurement, Trevor Blackwell’s Phd thesis, makes a compelling argument for adding randomness to otherwise deterministic systems.

              Here are some excerpts from the abstract.

              This thesis presents and analyzes a simple principle for building systems: that there should be a random component in all arbitrary decisions. If no randomness is used, system performance can vary widely and unpredictably due to small changes in the system workload or configuration. This makes measurements hard to reproduce and less meaningful as predictors of performance that could be expected in similar situations. …
              We show how to choose reasonable amounts of randomness based on measuring configuration sensitivity, and propose specific recipes for randomizing TCP/IP and memory systems. Substantial reductions in the configuration sensitivity are demonstrated, making measurements much more robust and meaningful. The accuracy of the results increases with the number of runs and thus is limited only by the available computing resources.

              1. 5

                The whole premise of this effort is not credible. There are far more reasonable approaches than rewriting everything in Rust™, for instance, in this case, just providing modern Fortran wrappers around the (assumed) FORTRAN 77 source. Then, implementing ISO C binding interfaces to these functions in order to call the Fortran code from C, Python, Lua, … But then, it wouldn’t have been Rust, right?

                1. 3

                  Frankly I think the analysis of whether the rewrite is justified is pretty impressive for an undergrad. It seems perfectly reasonable for a couple-semester research project or something of that scope, and sounds like they learned a lot of useful optimization info about the problem space of the program totally unrelated to the implementation language. How would providing language wrappers around the existing code have helped with threading, anyway?

                  1. 2

                    The wrappers may not be useful for threading, but instead, for providing a user interface, as the author didnʼt want to call ncurses from Fortran. Due to the original version being written in Fortran 90, parallelisation could have been added easily with Fortran 2003 intrinsic concurrency, OpenMP, or CoArrays.

                  2. 2

                    I noticed that you had some extra spaces in your comment. I can’t read it. Have you considered rewriting your comment in Rust? It would detect that sort of problem for you at posting time.

                    1. 1

                      Now you can simply provide a C binding too. And their job was to rewrite it, so I wouldn’t even bother explaining why you’d do it.

                      1. 1

                        Then, Fortran 2018 would have been a more logical choice for a port/rewrite. I mean, the field is still numerical simulations.

                        1. 1

                          I wouldn’t have rewritten it in Rust either; the author started with a Fortran 90 program. It has global variables but otherwise doesn’t seem that hard to work with.

                          1. 2

                            It depends: if you want Rust’s memory guarantees, for example, or want new students to add features, you may want to move away from Fortran. Even more so in research, if I want to give less experienced people the job of adding things.

                    1. 28

                      gRPC is Protocol Buffers running over HTTP/2. It’s got a “g” at the beginning of the name to remind you that the only time it’s acceptable to use is when you are actually working for Google inside a Google-owned building eating Google-branded food and breathing Google-branded air.

                      Have worked with protobufs, can confirm this is true.

                      1. 5

                        What’s so bad about protobufs?

                        1. 20

                          The code generated is ridiculously complex for what it does. The implementation is ridiculously complex for what it does. The generated code in all languages except C++ (and maybe Java) is unidiomatic, verbose, and fits poorly with the code written in the language. The proto3 code is a regression over proto2, since you can no longer tell if values are present or not: all values are optional, but you can’t tell if they were filled in or not unless you wrap each one of them in their own message.

                          And then there’s GRPC, which takes this complexity to a new level, and adds lots of edge cases and strange failure modes because of the complexity.

                          And to top it off, while they’re a bit faster than JSON, they’re pretty slow for a binary protocol.

                          1. 6

                            Protobufs certainly has its dusty corners but there is a rationale for dropping required fields.

                            1. 5

                              My complaint wasn’t about dropping required fields. I agree with that: Required is a pain for compatibility. My complaint was that they broke optional on top of that.

                              message Foo {
                                  int32 x = 1;
                              }
                              

                              In proto2, you could check if x was set:

                              if(foo.has_x()) { use(x) }
                              

                              In proto3, there’s no has_x(), so an unset x is indistinguishable from x=0. You need to write:

                              message Integer {
                                  int32 val = 1;
                              }
                              
                              message Foo {
                                  Integer x = 1;
                              }
                              

                              And then check:

                              if(foo.get_x() != null) { use(foo.get_x().get_val()) }
                              

                              Note that in addition to being just plain clunky, and the potential to forget setting ‘val’ within the wrapper message, it’s inefficient – in languages with value types, like C++, Rust, Go, …, you’re now adding an extra allocation.
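                              The presence-tracking difference can be illustrated with plain Python stand-ins (hypothetical classes, not the generated protobuf API):

```python
from dataclasses import dataclass
from typing import Optional

# Illustration of the proto2-vs-proto3 presence problem using plain
# Python classes (hypothetical stand-ins, not generated protobuf code).

@dataclass
class FooProto3:
    x: int = 0  # proto3 scalar: unset is indistinguishable from 0

@dataclass
class FooProto2:
    x: Optional[int] = None  # proto2-style: presence is tracked

    def has_x(self) -> bool:
        return self.x is not None

assert FooProto3().x == FooProto3(x=0).x   # can't tell these apart
assert not FooProto2().has_x()
assert FooProto2(x=0).has_x()              # an explicit 0 is detectable
```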

                              1. 1

                                That does seem annoying but they may be re-adding optional in version 3.13.

                                1. 2

                                  Which is kind of telling…

                          2. 9

                            The footnote links to https://reasonablypolymorphic.com/blog/protos-are-wrong/index.html which goes into that

                            1. 5

                              Kenton Varda’s response to this rant is worth reading.

                              1. 9

                                I stopped reading at

                                This article appears to be written by a programming language design theorist who, unfortunately, does not understand (or, perhaps, does not value) practical software engineering.

                                Typical Googler stuff.


                                The comment in the original article is so on point:

                                I now consider it to be a serious negative on someone’s resume to have worked at Google.

                                1. 6

                                  While it is often perfectly valid to opt for a solution which works over one which is elegant, I get the impression that the words like “pragmatic” are increasingly being used as an anti-intellectual “excuse” for not doing something properly and ignoring well studied solutions, simply because they “weren’t invented here”, or are proposed by people who the developer disagrees with or simply doesn’t associate with.

                                  1. 3

                                    Yep.

                                    I come from an environment where “pragmatic” is only used sarcastically, and that’s honestly quite refreshing.

                                    If someone says “the software is pragmatic”, I assume it’s buggy as hell.

                              2. 4

                                This article is typical FP hardliner complaining that something isn’t “correct enough” because it doesn’t use a Haskell-like type system. The last section is kind of good, though.

                          1. 9

                            This is a great idea for a post that I’ve wanted to write myself. Leaving aside trees, hash tables, and stacks, I probably used less than one “interesting” data structure/algorithm PER YEAR of programming. Some notable counterexamples are two kinds of reservoir sampling, regular languages/automata, random number generation, some crypto, and some info retrieval algorithms.

                            One thing that sparked it is an obscure but long-running debate over whether dynamic programming interview questions are valid.

                            I don’t think they’re valid. It’s mostly a proxy for “I got a CS degree at a certain school” (which I did, I learned dynamic programming in my algorithms class and never used it again in ~20 years.)

                            I challenge anyone to name ANY piece of open source software that uses dynamic programming. Or to name an example in your own work – open source or not.

                            I’ve tried this in the past and nobody has been able to point to a concrete instance. I think the best I’ve heard is someone heard about a professor who heard about some proprietary software once that used it.


                            Related: algorithms used in real software (although this is certainly not representative, since compiler engineering is a subfield with its own body of knowledge):

                            https://old.reddit.com/r/ProgrammingLanguages/comments/b22tw6/papers_and_algorithms_in_llvms_source_code/

                            https://github.com/oilshell/blog-code/blob/master/grep-for-papers/llvm.txt

                            Linux kernel algorithms:

                            https://github.com/oilshell/blog-code/blob/master/grep-for-papers/linux.txt

                            1. 10

                              I challenge anyone to name ANY piece of open source software that uses dynamic programming.

                              Git, or most reasonable implementations of “diff”, will contain an implementation of the Myers Algorithm for longest-common-subsequence, which is very dynamic-programmy.

                              No concrete example for this one, but I know that bioinformatics code is full of dynamic programming algorithms for the task of sequence alignment, which is similar to diff — identifying a way to align two or more base sequences so that they coincide with the minimal number of changes/additions/deletions required to make them identical.
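                              For the diff/alignment connection, the textbook O(MN) table is short enough to sketch (a generic illustration of the DP baseline that Myers improves on, not Git’s actual code):

```python
def lcs_length(a: str, b: str) -> int:
    """Classic O(len(a) * len(b)) dynamic-programming table for the
    longest common subsequence -- the baseline that diff algorithms
    like Myers' improve on for typical inputs."""
    m, n = len(a), len(b)
    # dp[i][j] = LCS length of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

assert lcs_length("abcde", "ace") == 3
```

Sequence alignment in bioinformatics (Needleman-Wunsch and friends) fills in essentially the same kind of table, with substitution and gap costs instead of match/mismatch.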

                              1. 1

                                Hm I’m familiar with that algorithm but I never thought of it as dynamic programming.

                                Wikipedia does say it’s an instance of dynamic programming. Although when I click on the paper, it appears to contrast itself with “the basic O(MN) dynamic programming algorithm” (section 4).

                              2. 8

                                Since you mentioned dynamic programming, it’s worth pointing out that the name “dynamic programming” was chosen for political reasons, as pointed out in the history section of the Wikipedia article on dynamic programming. So I think it’s a really bad name.

                                1. 1

                                  That’s funny, I remembered xkcd’s “dynamic entropy” comic, and it quotes the same passage:

                                  https://xkcd.com/2318/

                                  It also has a very interesting property as an adjective, and that is it’s impossible to use the word dynamic in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It’s impossible.

                                  LOL

                                  Agree it’s a terrible name… I would say it was chosen for “marketing” reasons

                                2. 7

                                  I have thought about whether dynamic programming questions are fair to ask, and I ended up where you are: they are not.

                                  Dynamic programming was the one I struggled most in understanding and implementing correctly. And while there are semi-practical examples (like the backpack problem), I have not found any practical, day-to-day use cases on this.

                                  I had an argument with my colleague who asked this kind of problem, saying it’s basic knowledge. Turns out he did competitive programming and there, it is table stakes. But in practice, it just filtered for anyone who has learned and practiced this approach.

                                  I stay away from asking these: problems that need dynamic programming to solve.

                                  1. 4

                                    I’m familiar with dynamic programming mostly from high-school competitive programming as well. Otherwise I can’t say I’ve encountered real-life problems where it occurred to me to use the technique.

                                  2. 8

                                    I challenge anyone to name ANY piece of open source software that uses dynamic programming. Or to name an example in your own work – open source or not.

                                    I’m literally about to implement something that could be classified as dynamic programming at work, which can be summarized as “computing a few simple statistics such as number of bytes changed for each directory in a large filesystem”. Dynamic programming is such a general term that it applies regularly if you want to use it.

                                    1. 4

                                      I’d like to see it. I don’t think dynamic programming is a general term.

                                      In fact one time I participated in this argument (which was years ago so my memory is fuzzy), a former CS professor I worked with explained the difference between memoization and dynamic programming. A bunch of 10-20+ year programmers like myself went “ah OK”. Unfortunately I don’t even remember the difference, but the point is that most programmers don’t, because dynamic programming is very uncommon.

                                      What you’re describing sounds like an obvious algorithm that anyone could implement, which is not the same as dynamic programming interview questions, or dynamic programming in competitive programming.

                                      As other posters mentioned, competitive programming is the main place you see it outside of a CS class.

                                      1. 2

                                        It’s absolutely an obvious algorithm, so is most dynamic programming. That was sort of my point.

                                        Can’t share the code unfortunately, but it’s just iterate over sorted list of file changes in reverse order and collect statistics as we go. Dynamic part comes from the fact that we can just look at the subdirectories of a dir (that we already have numbers for) instead of recursing into it.
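                                The shape of that computation can be sketched with made-up data (hypothetical paths, not the actual code, which can’t be shared):

```python
from collections import defaultdict
import posixpath

# Sketch of the bottom-up aggregation described above (hypothetical
# data, not the poster's code): process directories deepest-first so
# each directory's total reuses its children's already-computed totals.

changes = {                     # bytes changed per file
    "/a/b/c/f1": 10,
    "/a/b/f2": 5,
    "/a/f3": 1,
}

totals = defaultdict(int)
for path, nbytes in changes.items():
    totals[posixpath.dirname(path)] += nbytes

# Propagate each directory's subtotal to its parent exactly once,
# deepest directories first.
for d in sorted(totals, key=lambda p: p.count("/"), reverse=True):
    parent = posixpath.dirname(d)
    if parent != d:
        totals[parent] += totals[d]

assert totals["/a/b/c"] == 10
assert totals["/a/b"] == 15
assert totals["/a"] == 16
```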

                                        1. 2

                                          What you’re talking about could be called memoization, or it probably doesn’t even deserve that name. It’s just what a “normal” programmer would come up with.

                                          That’s not the type of thing that’s asked in interview questions or competitive programming. The wikipedia page gives some examples.

                                          Dynamic programming usually changes the computational complexity of an algorithm in some non-obvious way. There’s very little you can do recursing over a directory tree that doesn’t have a clear O(n) way to code it (e.g. computing statistics).

                                          1. 7

                                            I like Erik Demaine’s explanation, that problems where dynamic programming can be applied are ones where their subproblems and their dependencies can be modeled as a directed acyclic graph [1]. Up to you if you’d like to tackle that with a top down approach where you look at a node and calculate its solution based on the solutions of its ancestors, or a bottom up approach starting from the nodes in the DAG with no dependencies and propagate the solutions in topological order.

                                            My colleague and I used it for a generalization of matrix chain multiplication (for tensors) [2].

                                            [1] https://youtu.be/OQ5jsbhAv_M?t=2736

                                            [2] https://github.com/TensorCon/ratcon/blob/release/opt/gencon/pathopt.py#L198

                                            edit: by the definition above, even naive memoization can count as DP, if you’re caching solutions to subproblems of that structure. Doesn’t have to be at the difficulty level of competition to count as DP in that case.
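                                            Under that DAG framing, the two directions can be shown on a small example (an illustrative sketch using longest increasing subsequence, not the tensor-contraction code linked above):

```python
from functools import lru_cache

# The same DAG of subproblems solved both ways (illustrative sketch).
# Subproblem: length of the longest increasing subsequence ending at i.

seq = [3, 1, 4, 1, 5, 9, 2, 6]

# Top down: recurse on the node's ancestors in the DAG, memoize.
@lru_cache(maxsize=None)
def lis_ending_at(i: int) -> int:
    return 1 + max((lis_ending_at(j) for j in range(i) if seq[j] < seq[i]),
                   default=0)

top_down = max(lis_ending_at(i) for i in range(len(seq)))

# Bottom up: fill solutions in topological order (here, left to right).
table = [1] * len(seq)
for i in range(len(seq)):
    for j in range(i):
        if seq[j] < seq[i]:
            table[i] = max(table[i], table[j] + 1)
bottom_up = max(table)

assert top_down == bottom_up == 4   # e.g. 1, 4, 5, 9
```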

                                            1. 1

                                              Hm interesting, searching through my personal wiki, I found a related definition which is different. I haven’t really thought about it enough to form an opinion.

                                               Either way, it doesn’t change my overall point: that there are certain kinds of algorithm problems that appear on coding interviews, and in competitive programming, that do not show up in 99% of programming jobs. They are easy to pose and have cute solutions, but aren’t testing very much.

                                              I think the “DP tables” part is key but again I haven’t thought about it enough …

                                              https://blog.racket-lang.org/2012/08/dynamic-programming-versus-memoization.html

                                              Memoization is fundamentally a top-down computation and DP is fundamentally bottom-up. In memoization, we observe that a computational tree can actually be represented as a computational DAG

                                              In DP, we make the same observation, but construct the DAG from the bottom-up. That means we have to rewrite the computation to express the delta from each computational tree/DAG node to its parents. We also need a means for addressing/naming those parents (which we did not need in the top-down case, since this was implicit in the recursive call stack). This leads to inventions like DP tables, but people often fail to understand why they exist: it’s primarily as a naming mechanism (and while we’re at it, why not make it efficient to find a named element, ergo arrays and matrices).

                                              This bottom-up / top-down distinction might have been the same as what the aforementioned professor said 5+ years ago, but I don’t remember exactly.

                                              1. 1

                                                 So, is memoization of factorial top-down, or bottom-up?

                                                1. 1

                                                  I would probably say neither… Either factorial or Fibonacci is so trivial that it doesn’t help to be thinking about it that way.

                                                  Though I think the quote hints at a clear test for whether it’s top-down or bottom-up: if you need extra state outside the call stack. I’m getting curious enough to try this out, but right now all I can do is quote what other people say.

                                                  In any case it’s clear to me that there’s some controversy over what dynamic programming really is. I think the issue is that a lot of algorithms could be framed that way but were not discovered that way, and not taught and learned that way.

                                                  1. 1

                                                    I would probably say neither… Either factorial or Fibonacci is so trivial that it doesn’t help to be thinking about it that way.

                                                    I think that the triviality is actually helpful here. If it’s actually true that memoization and dynamic programming are different (and there’s clearly debate on this), can 2 implementations of a trivial function, that everyone can understand highlight the differences?
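                                                    Taking up that suggestion, here is one possible pair of implementations (a sketch; whether the second one “counts” as DP is exactly the debate above):

```python
# Two implementations of the same trivial function, as suggested above.

def fib_memo(n, cache=None):
    """Top-down: the recursive call stack discovers subproblems;
    the cache just remembers results (memoization)."""
    if cache is None:
        cache = {}
    if n < 2:
        return n
    if n not in cache:
        cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    return cache[n]

def fib_table(n):
    """Bottom-up: subproblems are named by table index and filled in
    dependency order -- the DP-table style."""
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

assert fib_memo(20) == fib_table(20) == 6765
```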

                                            2. 1

                                              On the contrary the stupidly naive way (recurse on every directory) is O(n^2).

                                              Dynamic programming is just solving a series of problems while using the answers of shared subproblems multiple times. Memoization is a common way to implement this.

                                              Yes there are some very clever algorithms that use dynamic programming, this doesn’t make obvious algorithms that use dynamic programming not also fit under the definition.

                                              1. 3

                                                 Why would recursing into every directory be O(n^2)? You’re still only visiting every directory/file once, right? It seems like something’s missing.

                                                1. 1

                                                  Say you have a directory structure with a single file in it, /a/b/c/d/e

                                                  To get the number of bytes changed in e you need to visit e, then to get the number of bytes changed in d you need to visit d and then e, then for c you need to visit c, d, and e, and so on.

                                                  Like I said, it takes a really naive solution, but if you don’t remember the values you calculate anywhere for some reason it’s sum over inodes (depth of inode)… which is O(n^2) (for bad directory structures).

                                                  Note that I need these values for every directory, not just for one particular directory.

                                                  1. 2

                                                    That’s still linear complexity space. Unless you’re hardlinking directories (which you then have to deal with potential recursion), it’s still O(n). If you add a file at /a/b/c/file you only visit 1 more file and no more dirs, not an exponential. O(n + n + n) or O(n + 3) still simplifies to O(n).

                                                    1. 1

                                                      If you add /a/b/c/file you add 4 more visits, not 1. Going from n=3 /a/b/file to n=4 /a/b/c/file adds 4 more visits. In other words this worst case example takes time O(sum from 1 to n of i) = O(n(n+1)/2) = O(n^2).

                                                      N is the number of inodes in an arbitrary tree, not the number of files in a fixed tree.

                                                      1. 1

                                                        That’s still adding a linear number of operations for each file, the depth could technically be considered a different variable, say m. So for each file (n+1) you add, you also add the number of directory traversals (m) resulting in O(m+n), which simplifies again to O(n), but in reality folders are files too, so are part of n in the first place, so again O(n). Ultimately your n space is the total number of inodes, which both files and folders have.

                                                        Abstractly, you’re just traversing a tree structure (or a directed graph if using links), which is well understood to be O(n) (maybe O(n^2) worst case if all folders are links, resulting in a fully connected graph), because you only visit each node once. If it were O(n^2), you would visit each node n times.

                                                        Remember, Big O notation is about scaling, not the actual concrete number of operations, which is why you drop any constants or variables other than n.

                                                        1. 1

                                                          It’s O(mn) not O(m+n) (in the insanely naive algorithm that recalculate things every time).

                                                          It’s not a single tree traversal but #internal nodes tree traversals.

                                                          1. 1

                                                            Even if it was O(mn) (it’s not), that still simplifies to O(n). An ‘internal nodes’ tree traversal is still O(n), n is just smaller, but again, your problem is not an internal nodes traversal, it’s a full traversal because you have to look at the blocks attached to the file (leaf) inodes, which means you need to read all inodes of all files and of all folders one time each. n = # of files + # of folders = O(n)

                                                            1. 1

                                                              I supposed an extremely naive solution could be to fully traverse each sub tree for every folder visited, which would be… O(log n)? But even that isn’t O(n^2), as the total repeated space shrinks the deeper you get.

                                                              1. 1

                                                                You’re assuming a balanced tree, which is not guaranteed. Depth of tree is O(n) in pathological cases (and average case O(sqrt(n)) is typical for randomly generated trees)

                                                                1. 1

                                                                  Ah yeah, I think it would be O(n log n) not O(log n), because you traverse the tree once for each node, and a subset of the tree for almost every n (except leafs), at least in the worst case. Still not O(n^2), and the solution for a O(n) is almost easier to conceptualize than the completely naive solution :)

                                                                  1. 1

                                                                    and the solution for a O(n) is almost easier to conceptualize than the completely naive solution :)

                                                                    No argument here…

                                                                    I think it would be O(n log n)

                                                                    We agree it’s O(n) * O(time tree search) now right? And you’re trying to call the tree search time log(n)? Because trees are height log(n)? Then see the post you replied to: that’s true in a balanced tree, it’s not true in a random tree (where it is sqrt(n)), and it’s definitely not true in a pathological worst case (where the tree is just an n-length linked list).

                                                                    1. 2

                                                                      Yeah, the part I was hung up on before was that your naive solution traverses the entire subtree below a node for each node visit; I was stuck in the simple optimal solution. For the pathological case, basically just a bunch of folders in folders with a single file at the bottom, the depth of the tree is n, and the file inode at the bottom would be accessed n times, so O(n^2). For the common case it would be about O(n log n), where you can skip traversing larger and larger parts of the tree the deeper you get on each ‘path.’ Thanks for the discussion, I enjoyed it :)

                                            3. 1

                                              I think comparing memoization to dynamic programming is a category mistake: they are different kinds of things.

                                              ‘Dynamic programming’ is a description of a style of algorithm. It’s divide-and-conquer, usually with overlapping subproblems, making it possible to reuse intermediate results.

                                              Memoization is a programming technique to remember intermediate results, by remembering the results of function calls. You can e.g. also store the intermediate results somewhere explicitly, usually in a matrix, in which case you don’t memoize the result ‘transparently inside the function’, but use a lookup table ‘external to the function that computed the result’.

                                              1. 1

                                                I dunno; I find that, in addition to the Racket language resource I gave elsewhere in the thread, lots of people compare them:

                                                https://medium.com/@afshanf/cracking-the-coding-interview-questions-part-7-a7c8f834d63d

                                                A note on terminology: Some people call top-down dynamic programming “memoization” and only use “dynamic programming” to refer to bottom-up work. We do not make such a distinction here. We call both dynamic programming.

                                                There does seem to be disagreement on what dynamic programming is. And many algorithms that were not derived with dynamic programming techniques could be described as dynamic programming.

                                                But it seems that most people agree it’s related to memoization.

                                          2. 4

                                            GCC uses dynamic programming to split IA-64 instructions into bundles.

                                            1. 2

                                              Thanks, nice example and link! Still I would say it’s a niche skill, especially to come up with from scratch in an interview.

                                            2. 4

                                              I challenge anyone to name ANY piece of open source software that uses dynamic programming. Or to name an example in your own work – open source or not.

                                              Ever do video encoding or transcoding with anything built on FFmpeg or x264? Encode images with MozJPEG? Encode an AV1 video or AVIF image with libaom? Trellis quantization in advanced lossy compression encoders is a dynamic programming algorithm.

                                              1. 3

                                                Hm very interesting! I was not aware of that algorithm. Paper I found:

                                                https://www.mp3-tech.org/programmer/docs/e00_9.pdf

                                                I would still say it’s a bad interview topic, but it’s cool to see real world usages of it.

                                                1. 2

                                                  Oh, no disagreement there! Even after coding it up myself, I’d hate to have someone ask me to whiteboard a working implementation of trellis quantization in 40 minutes or whatever (though I’m pretty sure I could sketch out an explanation now).

                                                  In general I’m not a fan of whiteboard coding exercises at all. Whenever I’ve interviewed candidates I’ve always preferred the old-fashioned method of just reading their resume well ahead of time, looking up whatever piques my interest on it, and then having a friendly conversation about that. Usually that provides plenty of esoteric material for me to quiz them on and it also lets them show me their strengths and enthusiasm.

                                                  1. 1

                                                    My current company doesn’t do a whiteboard exercise, but my previous one did… but the thing is, the usual task was to implement a basic “grep”. That is, read a file and print all of the lines that contain a user-specified string, in a language of your choice, with whatever libraries make you happy (it’s not a trick, you’re not supposed to implement Boyer-Moore on your own). Assuming you succeeded at that, we would ask you to implement a few more features, like a -v flag (only print lines that don’t match), and -A and -B flags (print context lines before and after the matching line), until you got stuck or the time for that segment was up. It wasn’t graded on minutiae like correct semicolon placement, it was just an evaluation of whether a candidate could do a trivial task, how they handled additional requirements, whether they asked sensible questions and got clarification when needed, etc. I found it pretty reasonable.

                                              2. 4

                                                I challenge anyone to name ANY piece of open source software that uses dynamic programming. Or to name an example in your own work – open source or not.

                                                I used Warshall’s algorithm (which is dynamic programming) to compute the transitive closure of a graph for a typechecker. This is, in my experience, a very common algorithm.

                                                In high school, I wrote a program for my professor that places students into groups of 4 such that their Myers-Briggs personalities are as different as possible. This used dynamic programming.

                                                A professor of mine (who taught the subject to me) used dynamic programming for some kind of RNA sequencing problem in a paper he published. One of our homework assignments had us arrive at a watered down version of his (and his co-authors’) algorithm.

                                                I’m fairly certain that at least some fuzzy string matching algorithms use string distance, which is also solved using dynamic programming.

                                                These are all diverse applications of DP. In my personal, subjective experience, the idea that DP is in any way obscure or dated is absurd.

                                                Edit:

                                                To be more concrete, the “transitive closure of a graph” is for the graph of dependencies, computing the set of all functions that a particular function depends on. This is as described in the Haskell Report.
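                                                A minimal sketch of Warshall’s algorithm over a hypothetical dependency graph; the function names and the toy edges are illustrative, not taken from the typechecker in question:

```python
def transitive_closure(edges, nodes):
    """Warshall's algorithm: reach[i][j] becomes True if j is
    reachable from i. O(n^3), and a classic dynamic program: each
    pass reuses reachability facts established by earlier passes."""
    reach = {u: {v: False for v in nodes} for u in nodes}
    for u, v in edges:
        reach[u][v] = True
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if reach[i][k] and reach[k][j]:
                    reach[i][j] = True
    return reach

# Hypothetical call graph: f calls g, g calls h.
deps = [("f", "g"), ("g", "h")]
closure = transitive_closure(deps, ["f", "g", "h"])
# closure["f"]["h"] is True: f transitively depends on h.
```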

                                                For fuzzy string matching, I have in mind something like fzf, though I cannot say with certainty that it uses string distance (I’m unfamiliar with its implementation).

                                                Here’s the paper that I think I’m referencing: Statistical Mechanics of Helix Bundles using a Dynamic Programming Approach

                                                1. 2

                                                  Thanks for the examples. The claim is definitely not that it’s outdated or obscure; the claim is that it’s not a good interview question because it doesn’t show up much at work. Although there were lots of people here who pointed out interesting uses of dynamic programming, that’s not incompatible with the idea that you could have a 10 or 20 year programming career and never use it.

                                                  Side note: I’m familiar with the Floyd-Warshall algorithm but I never thought of it as dynamic programming. I think part of the issue is that I may have a narrower definition of it than others. (I think people even say the linear-time Fibonacci is an example of dynamic programming, which I find silly. I just call that the obvious algorithm. I guess it can be used to illustrate a principle.)

                                                  Even so, I definitely think it’s more popular in universities, and certain domains like bioinformatics. In contrast to what people on this site typically do “for work”.

                                                2. 3

                                                  I challenge anyone to name ANY piece of open source software that uses dynamic programming. Or to name an example in your own work – open source or not.

                                                  I do a lot of work with text processing – computing the edit distance between two strings is something I do very often. That’s a classic dynamic programming algorithm. There are probably hundreds of open source packages that do this or some variation thereof.
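                                                  For reference, the classic table-based formulation looks roughly like this (a sketch of the standard Levenshtein recurrence, not any particular package’s implementation):

```python
def edit_distance(a, b):
    """Levenshtein distance via the classic DP table:
    d[i][j] = distance between a[:i] and b[:j]."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i  # delete all of a[:i]
    for j in range(len(b) + 1):
        d[0][j] = j  # insert all of b[:j]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(a)][len(b)]

# edit_distance("kitten", "sitting") → 3
```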

                                                  1. 3

                                                    Just to add to the list of responses clearly demonstrating Cunningham’s Law:

                                                    I believe the Knuth-Plass line-breaking algorithm used in LaTeX to lay out text “optimally” uses dynamic programming. This was done for efficiency, as opposed to using some kind of general global optimization routine. It’s also the reason why LaTeX doesn’t support “backtracking”.

                                                    1. 2

                                                      It’s also the reason why LaTeX doesn’t support “backtracking”.

                                                      Sile uses a variant of the same dynamic programming algorithm to lay out paragraphs on a page. The original paper describing the algorithm says that TeX wanted to use it like that, but it would require more than one entire megabyte of state for a large document, which was infeasible.

                                                      1. 1

                                                        Definitely an instance of Cunningham’s law at work :) I should make another go for my pet problems:

                                                        • it’s impossible to make a zsh-like interactive interface on top of GNU readline
                                                        • you can’t make a constant-space linear-time model of computation that’s more powerful than regular languages, and that can parse shell/JS/Python/C++
                                                        • you can’t make an extended glob to regex translator in less than a week (https://github.com/oilshell/oil/issues/192)

                                                        Thanks for the example. If there were more specific links I would make a blog post out of this :)

                                                        And although my comment was a tangent, it did motivate me to get out the “Algorithm Design Manual” and flip to the part on dynamic programming. Though I remember the applications in that book being interesting but seemingly far removed from what programmers do day-to-day. It seemed to be by a professor who consulted on algorithms for various companies, which is an interesting job!

                                                      2. 1

                                                        The Grasshopper routing library uses contraction hierarchies, which are implemented using Dijkstra’s shortest path algorithm and A* search, which are special cases of dynamic programming.

                                                        I have to agree it’s not something most people will use every day, but it never hurts to have a general idea how it works.
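                                                        As a rough illustration of the “special case of dynamic programming” view, here is a minimal Dijkstra sketch; the graph and weights are made up for the example, and this is not the contraction-hierarchy machinery itself:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source. Each settled node's
    distance is built from previously settled ones, which is the
    DP flavor: optimal substructure plus reuse of subresults."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical weighted graph (adjacency lists).
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)]}
# dijkstra(g, "a") → {"a": 0, "b": 1, "c": 3}
```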

                                                        1. 1

                                                          Here is a concrete example of Dynamic Programming that you use every day: Word Wrap. Knuth has an algorithm that is often used for maximizing the number of words per line.

                                                          Also, the field of bioinformatics often uses the Levenshtein distance when matching two DNA strands.

                                                          Also, I would like to mention the single most important thing I learned from dynamic programming: start at the end case, and figure out what constraints can work from there. For example, think about the last recursion call and what constraints it needs, and go backwards from there.
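                                                          A toy version of that idea, working backwards from the end as suggested: this minimizes squared trailing space per line and is a simplification of the Knuth approach (the cost function and the example words are my own; it assumes every word fits within the line width):

```python
def wrap(words, width):
    """best[i] = minimal cost to wrap words[i:], computed from the
    end backwards; split[i] = index after the line starting at i.
    Cost of a non-final line is (trailing space)^2."""
    n = len(words)
    INF = float("inf")
    best = [INF] * n + [0]
    split = [n] * (n + 1)
    for i in range(n - 1, -1, -1):
        line_len = -1  # accounts for the space before the first word
        for j in range(i, n):
            line_len += len(words[j]) + 1
            if line_len > width:
                break
            slack = 0 if j == n - 1 else (width - line_len) ** 2
            if slack + best[j + 1] < best[i]:
                best[i] = slack + best[j + 1]
                split[i] = j + 1
    lines, i = [], 0
    while i < n:
        lines.append(" ".join(words[i:split[i]]))
        i = split[i]
    return lines

# wrap(["aaa", "bb", "c"], 6) → ["aaa bb", "c"]
```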

                                                          1. 13

                                                            Finish reading 1984. I’m halfway through and man it feels like it was written by an oracle.

                                                            1. 3

                                                              His nonfiction books The Road to Wigan Pier and Down and Out in Paris and London are also excellent.

                                                              1. 1

                                                                Homage to Catalonia is also essential. Perhaps the best work of long-form journalism.

                                                              2. 1

                                                                I have recently started reading his book Why I Write. It pulled the curtain back a little on books like 1984 and led me to think the problems he writes about aren’t new; they just seem to have a fresh coat of paint in the modern world.

                                                                Also one piece of advice given with little context to avoid any spoilers. Don’t skip reading the appendix. For anyone not worried about spoilers or without a copy on hand: Spoiler

                                                                1. 2

                                                                  Which is of course funny because we use one Orwellian term in our daily lives these days: social distancing.

                                                                  Misusing language seems popular.

                                                                2. 1

                                                                  If you want another prophetic story J.G. Ballard’s The Intensive Care Unit is about social distancing.

                                                                  It’s available in The Complete Stories of J.G. Ballard.

                                                                  The article Why we are living in J.G. Ballard’s world discusses this story and some of his other works.

                                                                  1. 1

                                                                    Yeah, it’s a good one. I’d suggest reading some of Orwell’s other stuff too, or if you like a nice pairing go read Brave New World as a chaser…two vastly different approaches to dystopia.

                                                                    1. 1

                                                                      Thanks for the recommendation - will check it out

                                                                      1. 1

                                                                        Two vastly different approaches to describing our modern-day lives.

                                                                        1. 1

                                                                          I recently read Homage to Catalonia and was very impressed - Orwell talks about his experience as a soldier in the Spanish civil war and you can see how that experience so directly influenced his thoughts and writing.

                                                                      1. 2

                                                                        Today: I am doing laundry, filing my taxes, and finishing Waylander.
                                                                        Tomorrow: I will be having lunch with my girlfriend and dinner with my parents.

                                                                        1. 8

                                                                          Work: I have emerged from the seven circles of ASN.1 implementation hell. I am now in the 49 circles of ASN.1 testing hell. I believe I have seen Virgil’s ghost wandering around here but not even he wanted to have anything to do with me.

                                                                          Non-work: I’m trying my hand at fuzzing some Modbus implementations. I expect this is going to come up at work at some point, too, but in a pretty different context, and it looks fun enough that I figured trying my hand at it now can’t hurt. Also the COVID-19 cases are ramping up again around here so I guess I have a lot of extra time to spend indoors anyway…

                                                                          1. 4

                                                                            Work: I have emerged from the seven circles of ASN.1 implementation hell. I am now in the 49 circles of ASN.1 testing hell. I believe I have seen Virgil’s ghost wandering around here but not even he wanted to have anything to do with me.

                                                                            You could not pay me enough.

                                                                            1. 4

                                                                              Well… it’s not that bad. I mean okay yeah it’s an absolutely awful encoding and it wasn’t exactly fun to code but it’s programming. It’s one of those unglamorous things that nobody (myself included) wants to do but all quality software has some of that. No good program consists only of parts that are fun to write.

                                                                              Besides… it’s programming! Last year in June I decided to do freelancing/consulting full time, not necessarily because I wanted to but because I was pretty sure that my technical career would otherwise be over within five years. The local job market is heavily skewed towards outsourcing/services and legacy projects, with practically nothing to learn after the initial onboarding and zero career prospects if you’re not aiming for middle management (which I absolutely am not). At one point I caved in and got a job at one of those companies, mainly for (temporary) financial reasons, and it’s been exactly as I expected: I spent four years getting a massive paycheck for intern-level work, learning practically nothing, and watching my skills rot and degrade day by day. At one point I counted: I went from December to June of the next year without writing a single loop. None of the code I wrote or fixed in that time was complex enough to warrant one.

                                                                              It’s been a whole year already and I’ve had no shortage of projects (in fact I’ve been turning down contracts since the beginning of the year), but even an entire year of pretty cool work is still far from compensating for the last four years of bullshit corporate jobs. I’d take forty years of ASN.1 over that any time.

                                                                              1. 1

                                                                                I’m afraid of being the cause of yet another ASN.1 CVE. But, yes, freedom over bullshit ∞

                                                                                Where are you?

                                                                                1. 2

                                                                                  I’m afraid of being the cause of yet another ASN.1 CVE.

                                                                                  Oh, yeah, tell me about it. I’m scared shitless, there are several functions that I’ve reviewed so many times I must have learned them by heart already. Fortunately, the Higher Powers in charge of this project understand the risk very well. There are multiple rounds of code reviews scheduled. The next thing I’ll have to do after all this is done is come up with a way to fuzz this, which will be a part of our CI pipeline. And I expect that, for quite some time in the beginning (1-2 years?), there will be limited-scale deployment of this code, in well-controlled environments.

                                                                                  But, yes, freedom over bullshit ∞

                                                                                  I certainly don’t mind the freedom, but not being stuck doing brain-numbing pseudo-programming for 8 hours a day is what I really wanted. I was afraid that doing that for too long would take me past the point of no return, where there was no way I could do serious engineering again.

                                                                            2. 3

                                                                              You have my deepest and most sincere condolences. What horror sent you on this doomed quest?

                                                                              1. 2

                                                                                I’m implementing an obscure protocol that uses a subset of ASN.1 for some things, with its own encoding scheme (because being able to use an existing ASN.1 library just wouldn’t be any kind of bloody fun now would it). All I can say is that I expected it to be worse than it turned out to be. Still miserable tho’ :).

                                                                                1. 1

                                                                                  Are you implementing BACnet?

                                                                                  1. 2

                                                                                    *blinks twice*

                                                                                    Edit: I’m not sure if I can say what protocol it is in public but yeah, it’s one of those protocols, devised back in the 20th century when we thought the future of communications was bright and the mere thought of controlling things over HTTP made people chuckle (or cough nervously because they didn’t know what HTTP was, it was still pretty new…).

                                                                            1. 7

                                                                              I could read a million essays on demoscene code. It’s such a cool and unique constraint.

                                                                              1. 5

                                                                                ryg’s Debris - Opening the Box is a series of articles on demo coding.
                                                                                I’m particularly fond of Metaprogramming for Madmen.

                                                                              1. 1

                                                                                Code doesn’t even say what it does with 100% fidelity. It says what its programmer intends it to do, but if there’s a difference between the development and running systems (the developer used Linux and called killall make to stop processes named make; I’m running OpenIndiana, and… bad times), or between what an instruction claims to do and what it actually does (FDIV ought to do a floating-point divide), then even that is going to be inaccurate. Not that this comes up frequently in practice, and it’s usually considered a bug when it does, but it’s more fuel to the fire that “my code is self-documenting” should be given an honourable send-off.

                                                                                1. 1

                                                                                  Here is a HN post with an excellent example of a difference between development and running systems.

                                                                                1. 4

                                                                                  But if we actually read what economists have to say on how hiring markets work, they do not, in general, claim that markets are perfectly efficient or that discrimination does not occur in markets that might colloquially be called highly competitive.

                                                                                  If the bar is perfection, then there is no limit to the number of “fixes” that will be proposed.

                                                                                  We can fix this, if we stop assuming the market will fix it for us.

                                                                                  By enacting more laws, or how exactly?

                                                                                  1. 7

                                                                                    If the bar is perfection, then there is no limit to the number of “fixes” that will be proposed.

                                                                                    I guess that’s true, and I guess a good thing. Or should we not always seek to improve?

                                                                                    1. 2

                                                                                      I agree with this, but I don’t think our rhetoric is sustainable. It’s far too toxic and divisive and progress demands cooperation; I think this toxicity and steamrolling to create change is creating a debt of divisiveness that will prevent us from progressing in the future. We need to figure out how to advocate for progress without being utterly hateful–people need to stop using “fighting injustice” as a cover for their personal vendettas against individuals and groups (political parties, races, genders, etc).

                                                                                      1. 4

                                                                                        We need to figure out how to advocate for progress without being utterly hateful–people need to stop using “fighting injustice” as a cover for their personal vendettas against individuals and groups (political parties, races, genders, etc).

                                                                                        Taking your comment at face value, it is a distortion of a hallucinatory magnitude to characterize socially progressive movements – which I assume you are referring to through the fog of plausible deniability – as “divisive”, “hateful”, and having a “personal vendetta” against groups defined by “race” and “gender”. Taking for example the movement for racial justice in the US, compare a near half millennium of violent racial oppression to the overwhelmingly peaceful forms of protest across the US, and ask yourself whose toxicity, divisiveness, and vendettas against groups should be considered worthy of your critique.

                                                                                        1. 3

                                                                                          I can’t disagree more strongly. The canonized authors of the movement publicly write things like “White identity is inherently racist” (note that this is a quote from DiAngelo’s best selling book) and talk about racism incurable in white people (whites are irredeemable) and kafka trap people (“white fragility”–if you denounce the term you’re only doing so because of your own white fragility, same with “internalized racism” if the objector is a minority) and otherwise go to remarkable lengths to support the primacy of race. They redefine “racism” such that white people simply participating innocuously in their own culture is inherently racist and consequently evil and not only that but colorblindness–literally the opposite of ‘racism’ as the term has historically been defined–is also racist. I don’t see how this can be anything other than hateful and divisive and racist (per the traditional definition).

                                                                                          (And for the “So what? Words change meaning” crowd, the significance is that the term isn’t the thing that carries the moral weight; it’s the meaning–as a society we agreed that racial prejudice and hate were evil–they are evil whether “racism” as a term refers to those things or their opposites. If someone inverts the meaning of the term–and I think this is a fair characterization–and then calls themselves “antiracist”, they are necessarily opposed to justice).

                                                                                          Taking for example the movement for racial justice in the US, compare a near half millennium of violent racial oppression to the overwhelmingly peaceful forms of protest across the US, and ask yourself whose toxicity, divisiveness, and vendettas against groups should be considered worthy of your critique.

                                                                                          I’m concerned about the parallels between the former and this new ideology (which condemns the very liberalism that so significantly reduced racial oppression in so short a time). Notably the primacy of race over the individual, the eagerness to regulate speech and thought, the propensity to celebrate or rationalize political violence, the promotion of segregationist policies, the newspeak rhetorical devices, and so on. Liberalism condemns the far left and the far right at once.

                                                                                          1. 2

                                                                                            The canonized authors of the movement publicly write things like “White identity is inherently racist” (note that this is a quote from DiAngelo’s best selling book) and talk about racism incurable in white people (whites are irredeemable) and kafka trap people (“white fragility”–if you denounce the term you’re only doing so because of your own white fragility, same with “internalized racism” if the objector is a minority) and otherwise go to remarkable lengths to support the primacy of race. They redefine “racism” such that white people simply participating innocuously in their own culture is inherently racist and consequently evil and not only that but colorblindness–literally the opposite of ‘racism’ as the term has historically been defined–is also racist. I don’t see how this can be anything other than hateful and divisive and racist (per the traditional definition).

                                                                                            You realize that works like these are written from a critical perspective, right? They’re intellectual and/or sociological analyses, not literal dictates.

                                                                                            1. 2

                                                                                              They seem to be building a racial worldview that is seized upon en masse, so I take little comfort in their perspective or analytical nature.

                                                                                            2. 2

                                                                                              If you’re interested in books on racial justice, Angela Davis and Michelle Alexander are good places to start. Verso Books publishes a lot on this subject too. I think you’ll find many thinkers who are less concerned with what you may perceive as a certain moralizing political correctness, but rather more interested in understanding and dismantling systems of violent injustice. Think policing, jailing, housing, education. This is what I’ve referred to above vaguely as “the movement for racial justice”. Not to be dismissive, but books like “White Fragility” are IMO more like beach reading for guilty white folks. Serves a purpose and may be a nice book, but I don’t think anyone would refer to it as “canon”.

                                                                                              1. 2

                                                                                                I think we’re exposed to different angles of this movement. I see a lot of people (white and black) boosting Kendi and DiAngelo and their ilk, but you’re the first who has recommended Davis or Alexander. I’ll have a look. I don’t think Kendi or DiAngelo are actually interested in “racial justice” in any meaningful way, but rather trying to advance an abstract theoretical framework (or perhaps an incoherent word salad that aspires to look like a framework) that indicts groups and individuals they don’t like irrespective of whether or not those people have anything to do with the disparate outcomes. I fully believe there are lots of genuine people involved in the movement, but the part of the movement that is being seized upon by the media and corporations seems to strictly be the critical race theory part (they recommend the same authors, use the same jargon, etc).

                                                                                                1. 2

                                                                                                  trying to advance an abstract theoretical framework . . . that indicts groups and individuals . . . irrespective of whether or not those people have anything to do with the disparate outcomes

                                                                                                  But, like, this is entirely correct! Structural racism means people don’t have to actively do racist (little-r, micro scale) things to be part of a racist (big-r, macro scale) group.

                                                                                                  The conversation is about understanding this distinction and its effects so they can be corrected, to reduce suffering — not about amplifying the distinction so it can be weaponized, to increase suffering. I understand why you might think otherwise, but that conclusion requires believing the entire conversation is happening in bad faith, and there’s just no justifiable way to reach that conclusion; the evidence and Occam’s Razor don’t support it.

                                                                                                  1. 1

                                                                                                    Structural racism means people don’t have to actively do racist (little-r, micro scale) things to be part of a racist (big-r, macro scale) group.

                                                                                                    Then this definition of “racist” can’t carry any moral value, and anyone who assigns moral value to this definition is by definition a bigot. And if the group in “part of a racist group” refers to a race, then the person who assigns moral value to the term is precisely a racist in the original sense of the term—one who believes a person is guilty for no reason other than the color of their skin. This is hateful.

                                                                                                    1. 2

                                                                                                      I can’t help but be left with the overall impression that you’re trying to win a game of 7-dimensional chess which is suspended above our planet. I enjoyed introducing you to Davis and Alexander, and at this point I’d suggest that you pick up some of those books, because there aren’t many other ways I think I can re-state the same point, one that @peterbourgon has also expressed, and which is hammered home in those works. I don’t mean to tell you what to think, but history makes the fact of systemic injustice pretty uncontroversial. It seems that rather than confront these systems, you’d prefer to think of racism as a personal choice made by context-free actors. I don’t mean to shut down discussion, but I have said what I could and the information is out there for you to engage with if you choose to.

                                                                                          2. 4

                                                                                            There has been a lot of hate, frankly, disguised for decades as civil discourse and apparently ordinary polite society behaviour. While some of the apparent tone of recent movements is appreciably different, and approaches a fever pitch in cases where people are at the end of their rope due to continuing maltreatment, I think it is disingenuous to paint the pushing back as somehow more hateful than what’s being pushed back on.

                                                                                            1. 2

                                                                                              There has been a lot of hate, frankly, disguised for decades as civil discourse and apparently ordinary polite society behaviour.

                                                                                              There’s also a lot of people who call civil discourse ‘hate’ as a way to shut down differing points of view.

                                                                                              While some of the apparent tone of recent movements is appreciably different, and approaches a fever pitch in cases where people are at the end of their rope due to continuing maltreatment, I think it is disingenuous to paint the pushing back as somehow more hateful than what’s being pushed back on.

                                                                                              From my vantage point, I see a lot of people being slandered, canceled, and even assaulted for living their ordinary lives and failing to toe a particular ideological line. These aren’t hateful people, and many of them are themselves minorities; if they knew what policy would fix various racial disparities, they’d support it (e.g., IIRC a majority of even Republicans support police reform–there’s clearly more good will/faith out there than we’re led to believe). They simply don’t buy the far-left newspeak rhetoric and extreme and frankly idiotic solutions (“tear down capitalism!”, “defund police!”, etc).

                                                                                              Further, your characterization is that the bad actors are presumably minorities at the end of their ropes, but the people I see behaving the worst in this regard are very often white and almost uniformly privileged people, often with lofty positions in media or universities. Similarly there are many well-behaved people of all races and positions. There’s not a strong racial signal as far as I can tell (although you wouldn’t know it from the mainstream media).

                                                                                              I think you and I have different positions, and that’s fine. I’m sure mine is skewed, and I’m always trying to understand better. Hopefully in time people with views similar to yours and people with views similar to mine will come to a more mutual understanding, but of course that’s dependent on a healthy debate.

                                                                                              1. 4

                                                                                                There’s also a lot of people who call civil discourse ‘hate’ as a way to shut down differing points of view. From my vantage point, I see a lot of people being slandered, canceled, and even assaulted for living their ordinary lives and failing to toe a particular ideological line.

                                                                                                Something I have noticed from a lot of conservative-leaning Americans is the tendency to latch onto the idea that someone might address their behaviour as somehow being the gravest sin, far worse than the shitty behaviour itself. Free speech doesn’t mean free from consequences!

                                                                                                Communication is certainly a two way street, but the “differing views” stuff is frankly pretty exhausting. The paradox of tolerance is a pretty tangible phenomenon, as it turns out. If you have a pattern of poor behaviour, other people are welcome to – and should! – point it out, especially when the transgressors are powerful people. The religious right have been doing this in their own way for years, harassing advertisers and TV networks, and politicians; organising boycotts and making a racket, to shut down TV shows or laws or organisations that they believe are offensive.

                                                                                                In another thread, you said:

                                                                                                They redefine “racism” such that white people simply participating innocuously in their own culture is inherently racist and consequently evil

                                                                                                I am a white person from Australia, where we have our own unfortunate, regrettable history. There are aspects of what I imagine you’re defining as “white” culture (whatever that is) that are indeed quite horrid. By “simply participating” in modern US society you (and I) are utilising a pile of wealth that was created through colonialism, and slavery, and many other deeply awful, violent, destructive aspects of our shared societal history. On top of that, many societal institutions (like the police, or the fact that wage theft is not a crime) are structurally configured to protect certain groups of people at the often violent expense of other groups.

                                                                                                By pretending it isn’t something we all need to deal with urgently, that we have the luxury of waiting for someone to come along and debate us in a tone of which you approve, you are taking a fundamentally regressive stance. There is no move on this chess board that isn’t inherently a political decision, even though it might feel like your doing nothing and staying out of the way is somehow balanced or apolitical.

                                                                                                a majority of even Republicans support police reform–there’s clearly more good will/faith out there than we’re lead to believe

                                                                                                I will believe it when I see it! Republicans have periodically had concurrent control of the House and the Senate and the Executive and a lot of the courts and statehouses in this country. They have had ample opportunity to drive forward any “reform” that they felt would mean less people are oppressed, harmed, killed. It isn’t at all surprising to me that people are deeply unhappy with the status quo.

                                                                                                1. 1

                                                                                                  Something I have noticed from a lot of conservative-leaning Americans is the tendency to latch onto the idea that someone might address their behaviour as somehow being the gravest sin, far worse than the shitty behaviour itself. Free speech doesn’t mean free from consequences!

                                                                                                  Take comfort in that I’m not a conservative, but a moderate liberal. Your observation would indeed be frustrating, but that’s not what I’m doing here. My claim is that ordinary white folks, and increasingly minority folks, who are just going about their business are maligned by the progressive left. We’re not talking about “shitty behavior”, we’re talking about something much closer to “having white skin”. Note that this is not worse than what many minorities endure; however, it is unjust to slander, malign, and cancel people, and further it distracts from progress with respect to improving minority outcomes. There certainly is no dichotomy in which we must choose between anti-white racism and anti-minority racism; morally you can’t be opposed to racism with race-based caveats (e.g., “except against $race”), but practically you’ll very likely create a lot more anti-minority racists by perpetrating anti-white racism.

                                                                                                  Communication is certainly a two way street, but the “differing views” stuff is frankly pretty exhausting.

                                                                                                  Agreed; it is exhausting; however, to be clear, “differing views” means we need to work to understand each other. It’s not some morally relativistic argument that you must accept my viewpoint.

                                                                                                  The paradox of tolerance is a pretty tangible phenomenon, as it turns out. If you have a pattern of poor behaviour, other people are welcome to – and should! – point it out, especially when the transgressors are powerful people.

                                                                                                  It may be a tangible phenomenon, but without criteria for what is “tolerant” and “intolerant”, the principle is easily abused to give anyone license to abuse any other party simply by defining intolerance such that they are intolerant. This is exactly what we see with many progressives (especially intellectuals) who seek to change the definition of racism because it doesn’t target the right people (white folks, including “colorblind” white folks, mostly). So we’re not talking about “transgressors” except insofar as the transgression is having the wrong skin color.

                                                                                                  The religious right have been doing this in their own way for years, harassing advertisers and TV networks, and politicians; organising boycotts and making a racket, to shut down TV shows or laws or organisations that they believe are offensive.

                                                                                                  And we eventually collectively shut them down in the name of liberalism. The religious right lost every significant fight–abortion, sex and violence on TV, gay marriage, etc (and while it did get Hobby Lobby, it’s not a significant issue and it won it because of liberalism).

                                                                                                  I am a white person from Australia, where we have our own unfortunate, regrettable history. There are aspects of what I imagine you’re defining as “white” culture (whatever that is) that are indeed quite horrid.

                                                                                                  I don’t think there are, but we should talk in concrete detail about what they might be. I think there certainly were elements of “white culture” which were horrid, but I think we largely expunged them (or to be precise, there are still individual racists, but they aren’t allowed to express their views in society).

                                                                                                  By “simply participating” in modern US society you (and I) are utilising a pile of wealth that was created through colonialism, and slavery, and many other deeply awful, violent, destructive aspects of our shared societal history.

                                                                                                  Granted, but this is true of all Americans (including minorities) to widely varying degrees. This has left white people with more money on average than black people (whites are more likely to have wealth privilege), and I’m pretty open to reparations or some other ideas for addressing this. I genuinely don’t know what the right answer is, nor do the overwhelming majority of white folks; however, I don’t think that gives progressives (or leftists or whichever term you prefer) any license to abuse them for their race.

                                                                                                  On top of that, many societal institutions (like the police, or the fact that wage theft is not a crime) are structurally configured to protect certain groups of people at the often violent expense of other groups.

                                                                                                  Yeah, this is real white privilege. We need to understand the extent to which this is a problem and understand its dynamics in order to solve it properly. We need trustworthy academic and media apparatuses (even if they are on “the right side”, Americans need to be able to trust that these institutions are asking questions on their behalf)–which means we need to reform or retire our concept of activist journalists and academics.

                                                                                                  By pretending it isn’t something we all need to deal with urgently, that we have the luxury of waiting for someone to come along and debate us in a tone of which you approve, you are taking a fundamentally regressive stance.

                                                                                                  It’s precisely because the problem is urgent that we can’t afford to use the moment as an opportunity to exercise our personal hatred toward whites or conservatives or otherwise stroke our own enlightened egos. We need to build coalitions and simply wishing that people would take our abuse and still be on our side is naive at best (the less-than-charitable view is that those who heap the abuse don’t actually want circumstances to improve; they like the moral landscape because they feel it gives them moral license to heap their abuse without actually suffering social consequences for doing so). People who sabotage this coalition building are obstacles that we can’t afford to ignore.

                                                                                                  I will believe it when I see it!

                                                                                                  Behold

                                                                                                  You can debate whether it will be effective or whatever, but they have a proposal. I can’t seem to find the poll, but an overwhelming majority of Republican voters support police reform (this being the motivating factor for the proposal).

                                                                                                  Republicans have periodically had concurrent control of the House and the Senate and the Executive and a lot of the courts and statehouses in this country. They have had ample opportunity to drive forward any “reform” that they felt would mean less people are oppressed, harmed, killed. It isn’t at all surprising to me that people are deeply unhappy with the status quo.

                                                                                                  As have Democrats. Biden’s ‘94 crime bill wasn’t exactly friendly toward black Americans, nor was Hillary’s famous thinly veiled “superpredator” remark. But we can’t let partisan bickering distract us.

                                                                                                2. 2

                                                                                                  There’s also a lot of people who call civil discourse ‘hate’ as a way to shut down differing points of view.

                                                                                                  “Different points of view” is a phrase that can be weaponized to paper over categorically different things.

                                                                                                  Whether or not the Laffer Curve is an effective means to guide tax policy are different points of view, and we can have a civil discourse on the subject.

                                                                                                  Whether or not transsexual women are mentally sick men are not merely different points of view. One position is hateful; civil discourse is impossible because that position is in its nature un-civil.

                                                                                                  Bringing it back on-topic: whether fewer women exist in programming occupations because of hiring bias, or because of some innate, gender-based distaste for science or math or whatever, is a gray area. It’s possible to have this discussion in a civil way, but it requires considerable attention to context. For example, not every forum is appropriate for the discussion. It would be inappropriate to have this kind of debate on a mailing list of a technology company, where women engineers at the company would be justifiably aggrieved by colleagues arguing for the “innate” position. Insisting on having this discussion in that context is hateful, to some degree.

                                                                                                  (n.b. the correlation of “differing points of view” that are actually hateful to a specific American political stance/party isn’t evidence of political censorship, it’s evidence of regressive and hateful politics.)

                                                                                                  1. 0

                                                                                                    I strongly disagree. For instance, it’s not a gray area at all to debate why there are fewer women in programming—this is an empirical question, and moreover only a minority of women are offended by either position (indeed there’s no reasonable cause for offense). This doesn’t fit any reasonable definition of “hate”. I’m strongly of the opinion that “hate” is the weaponized term, used by a specific group of bad faith actors to suppress reasonable debate as previously discussed. I agree that this is evidence of regressive and hateful politics, but probably not in the way you mean. Anyway, I don’t really enjoy rehashing these same largely semantic debates over and over, so I’ll leave you to the last word.

                                                                                                    1. 3

                                                                                                      there’s no reasonable cause for offense [for] women engineers [witnessing] colleagues arguing for the “innate” position [in a debate in the office on] whether fewer women exist in programming occupations because of hiring bias, or because of some innate, gender-based distaste for science or math

                                                                                                      Your claim kind of stands on its own here, I think.

                                                                                            2. 2

                                                                                              Or should we not always seek to improve?

                                                                                              Improving isn’t cost-free. If the cost to implement a solution exceeds the fruit, it is not worth it.

                                                                                              1. 4

                                                                                                if the cost to implement a solution exceeds the fruit, it is not worth it

                                                                                                Well, that’s a truism, so I’m not sure how much insight it delivers. The issue tends to be an appropriate accounting of the benefit. For example, many of the advanced safety features on cars probably seemed “not worth it” by this formula at the time of their introduction or regulatory mandate, because it’s difficult to put an accurate value on future lives saved. Similarly, many of these discrimination-reducing measures could easily be seen as “not worth it” as it’s difficult to put an accurate value on the long-term benefits of more representative and diverse organizations.

                                                                                              2. 1

                                                                                                then there is no limit to the number of “fixes” that will be proposed.

                                                                                                I guess that’s true, and I guess a good thing

                                                                                                Depends on your appetite for another can’t-win, perpetual war. We are still fighting the wars on poverty, terrorism, and drugs.

                                                                                                I see two independent ideas in your comment:

                                                                                                1. seeking to improve oneself
                                                                                                2. seeking to improve others

                                                                                                1 is virtuous IMO. 2 leads to pesky authoritarian over-corrections.

                                                                                                1. 8

                                                                                                  As an aside, I always find it fascinating how readily tiny incremental changes that are light years short of perfection get called pesky authoritarian over-corrections.

                                                                                                  1. 0

                                                                                                    My comment was specifically about the risk of over-corrections, not tiny changes, that tends to follow from a zeal for improving other humans. Errors accumulate.

                                                                                                    1. 2

                                                                                                      Sure. When they’re errors. My point is that what you might call an over-correction someone else might call a tiny incremental change.

                                                                                                  2. 5

                                                                                                    “seeking to improve others” leads to pesky authoritarian over-corrections? Do you find this to be true when you think of anything else besides gender or racial equality?

                                                                                                    1. 1

                                                                                                      I’m guessing you expect an answer other than “yes”, or perhaps you had hoped to enlighten a lower creature.

                                                                                                      Your reply (and others in this thread) quickly turned personal. Hmm.

                                                                                                    2. 2

                                                                                                      seeking to improve others . . . leads to pesky authoritarian over-corrections

                                                                                                      Well, that’s certainly one failure mode of any kind of policy or mandate from authority, and we should be careful to guard against it, but it’s hardly inevitable. More importantly, there is an upper bound to the amount of progress that can be achieved on a large scale via individual virtue and action alone.

                                                                                                      1. 1

                                                                                                        Even for part 1 it is not clear that it is always good. Why would becoming the best programmer in the world be good, when it would take so much effort and time that could be spent on enjoying yourself, if you are already a pretty good programmer who can command a pretty good salary? Diminishing returns are a thing, and self-improvement is not without cost.

                                                                                                    3. 14

                                                                                                      If the bar is perfection, then there is no limit to the number of “fixes” that will be proposed.

                                                                                                      Indeed, that’s exactly what a benefactor of a broken system would say. For excluded folks, there’s no cost in trying to change the system; they aren’t benefitting from it regardless.

                                                                                                      1. 1

                                                                                                        that’s exactly what a benefactor of a broken system would say.

                                                                                                        Okay, but is he wrong?

                                                                                                        1. 4

                                                                                                          About what? This?

                                                                                                          If the bar is perfection, then there is no limit to the number of “fixes” that will be proposed.

                                                                                                          Yes? No? I’m unfamiliar with any documented result that answers this question, so I’m going to go with maybe here. This is such a nebulously defined problem that I’m not really sure one can answer it. Fixes to what? Perfection where? How much effort does perfection take?

                                                                                                          My point was simple: it doesn’t matter what you think about perfection or fixes or the lack thereof. Disenfranchised people, throughout history, have agitated for enfranchisement. From the Roman Empire’s discrimination against Christians to the brutality of Belgian rule in the Congo. No matter the advantages of the system, the disenfranchised have sought to topple theirs. Advance a position with known disenfranchisement at your own peril.

                                                                                                        2. 0

                                                                                                          Indeed, that’s exactly what a benefactor of a broken system would say.

                                                                                                          While this is a fun argument, it’s not an actual one. Just because a bad entity would say something, does not make whatever is being said bad.

                                                                                                          For excluded folks, there’s no cost in trying to change the system; they aren’t benefitting from it regardless.

                                                                                                          They think they are benefitting from it. But predicting the future is hard, and regrets do happen.

                                                                                                          1. -1

                                                                                                            Indeed, that’s exactly what a benefactor of a broken system would say.

                                                                                                            If you have nothing to hide then you don’t need privacy, right?

                                                                                                            This is Christianity’s “original sin” concept, a cudgel used against anyone who questions the priest class.

                                                                                                            1. 4

                                                                                                              If you have nothing to hide then you don’t need privacy, right?

                                                                                                              I’m unclear what that has to do with my comment.

                                                                                                              This is Christianity’s “original sin” concept, a cudgel used against anyone who questions the priest class.

                                                                                                              I’m not sure what you’re talking about. What cudgel is this for what or for whom? I’m just trying to tell you that if you don’t find a solution acceptable by all parties, then disenfranchised people will understandably not care about your perspectives or your ideals. Telling them to be quiet in the name of a philosophical good is sophistry at best.

                                                                                                          2. 5

                                                                                                            By enacting more laws, or how exactly?

                                                                                                            The legal industry has Diversity Lab.
                                                                                                            They conduct policy hackathons, create policies to promote diversity, and report on the results.
                                                                                                            The Mansfield Rule is an example of one of their policies.
                                                                                                            Now in its third iteration, the Mansfield Rule Certification measures whether law firms have affirmatively considered at least 30 percent women, attorneys of color, LGBTQ+ and lawyers with disabilities for leadership and governance roles, equity partner promotions, formal client pitch opportunities, and senior lateral positions.
                                                                                                            102 large law firms have agreed to this rule and their compliance is assessed by third-party audits.

                                                                                                            There’s nothing to stop the technology sector from taking similar actions.

                                                                                                            1. 1

                                                                                                              Thanks. But those efforts are part of the free market; they aren’t state mandates. So this is an example of the market voluntarily experimenting with solutions.

                                                                                                              1. 2

                                                                                                                It’s a combination of voluntary actions and state mandates.
                                                                                                                The legal industry has been working on gender equity and diversity for decades.
                                                                                                                Law schools used affirmative action to admit gender-balanced classes in the 1960s and 1970s.
                                                                                                                Class-action lawsuits in the 1970s forced law firms to change their hiring practices.
                                                                                                                Diversity is also important to clients for legal services; Diversity Lab’s initiatives are backed by Microsoft, Starbucks, Bloomberg LP, and 3M.

                                                                                                                Edited to add the following text.
                                                                                                                Law firm diversity metrics are public and widely shared.
                                                                                                                The National Law Journal’s Women in Law Scorecard, for example, was released earlier this week.

                                                                                                            2. 7

                                                                                                              The majority of the article makes clear that perfection isn’t the bar. The point of using the word “perfectly” seems to be to address a specific disingenuous claim:

                                                                                                              1. Markets are competitive
                                                                                                              2. Unjustified discrimination is a competitive disadvantage
                                                                                                              3. Market forces in a competitive market must tend toward a perfect balance that squeezes out disadvantageous behavior
                                                                                                              4. Therefore, discrimination can’t be a significant factor in the market

                                                                                                              I’d reckon laws aren’t likely the point. It’s already illegal (in the US at least) to discriminate on most of the bases talked about in the article; from a policy perspective, a change in enforcement might help. But most likely, the change is going to have to be a cultural one. The sort of thing where enough people talk about this, and enough people agree that it’s not okay, that it’s almost impossible to be a member of the computer science community and not find yourself actively checking for this bias in your decision-making.

                                                                                                              1. 0

                                                                                                                But most likely, the change is going to have to be a cultural one.

                                                                                                                Culture, social change, and blog posts are all part of the free market. These discussions are poisoned by the common assumption that anything lacking a credit card transaction is somehow not part of economics.

                                                                                                                Has anyone considered that anti-discrimination laws may contribute to the problem? Hiring a protected class carries the perception of increased liability in terms of lawsuits, whereas hiring a non-protected class may carry less liability.

                                                                                                                Maybe the laws did their job and now it’s time to enter “phase 2”: remove special protections. That was the goal, wasn’t it? Note that discrimination on the basis of liability is different in origin though not effect. The laws never cared about origin, only effect.

                                                                                                                1. 6

                                                                                                                  Conversely, I feel as if your claim is poisoned by the common assumption that anything lacking government intervention is somehow part of the free market. Maybe economics in general has broader applicability, but if we’re talking about market forces, the topics of our discussion are explicitly going to be things related to the accumulation of capital. Culture, social change, blog posts, and so on are only related insofar as they have value that can be expressed in monetary terms. Whatever the case may be, it’s empirically true that whatever value they currently have isn’t sufficient to change hiring behaviors. So either we need to change that value or we need to effect changes that cause people to act for reasons unrelated to satisfying market pressures.

                                                                                                                  Your second argument might be correct (I don’t know), but if it is, it’s a red herring. You can’t argue that discrimination doesn’t exist in the market because anti-discrimination laws are such a poor remedy that they create incentives to discriminate. The original article is addressing a single, specific claim - that it is economically impossible for discrimination to exist, because if it did companies would find competitive advantage by not discriminating until the market reached equilibrium. If you want to find a cause for why discrimination does exist, that’s a different thing.

                                                                                                                  1. 3

                                                                                                                    Culture, social change, blog posts, and so on are only related insofar as they have value that can be expressed in monetary terms. Whatever the case may be, it’s empirically true that whatever value they currently have isn’t sufficient to change hiring behaviors. So either we need to change that value or we need to effect changes that cause people to act for reasons unrelated to satisfying market pressures.

                                                                                                                    This does not follow at all. Let’s say (although almost no one is explicitly calculating as follows) one is willing to hire men rather than women up to $1M of economic damage due to held value, and resulting inefficiency causes loss of $500K economic value. Also, social shaming (“culture”) causes $300K of economic damage. In this scenario, as you say, culture is empirically insufficient to change hiring. But it is simply incorrect to say the choice is either to change such held value, or to deploy non-monetary force. If social shaming intensifies 2x, the total damage reaches $1.1M and hiring changes.
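
                                                                                                                    The comparison above is just arithmetic; here is a quick sketch in Go, using the same invented figures (all numbers are hypothetical, as in the scenario above):

                                                                                                                    ```go
                                                                                                                    package main

                                                                                                                    import "fmt"

                                                                                                                    func main() {
                                                                                                                    	heldValue := 1_000_000.0  // value the employer places on discriminating
                                                                                                                    	inefficiency := 500_000.0 // economic loss from the worse hire
                                                                                                                    	shaming := 300_000.0      // damage currently imposed by culture

                                                                                                                    	// Today: $800K of total pressure does not exceed the $1M held value,
                                                                                                                    	// so hiring does not change.
                                                                                                                    	fmt.Println(inefficiency+shaming > heldValue)

                                                                                                                    	// If social shaming doubles: $1.1M exceeds $1M, and hiring changes,
                                                                                                                    	// without the held value itself moving at all.
                                                                                                                    	fmt.Println(inefficiency+2*shaming > heldValue)
                                                                                                                    }
                                                                                                                    ```

                                                                                                                    Only the relative magnitudes matter here: intensifying the cultural term flips the inequality without deploying any non-monetary force.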

                                                                                                                    1. 1

                                                                                                                      I might have miscommunicated - your example is exactly what I meant by changing the value of culture, etc.

                                                                                                                      1. 2

                                                                                                                        That must be the case. From what I can read, you and jmk are in agreement; quotes giving me that impression include “By enacting more laws?” and “market forces (which includes culture)”.

                                                                                                                    2. 3

                                                                                                                      Culture, social change, blog posts, and so on are only related insofar as they have value that can be expressed in monetary terms

                                                                                                                      It’s really simple why these are part of the free market: These things influence the reader, the reader’s thinking, and therefore they influence the reader’s spending.

                                                                                                                      1. 1

                                                                                                                        We agree. And since the existing influence on spending isn’t enough to force changes in hiring practices, then if different hiring practices are desired, the solution is either intensifying that influence, or relying on something other than economics, e.g. community norms.

                                                                                                                        1. 4

                                                                                                                          FYI, the entire point of contention is that community norms IS economics. That’s what jmk’s “common assumption that anything lacking a credit card transaction is somehow not part of economics” phrase is about.

                                                                                                                      2. 2

                                                                                                                        The original article is addressing a single, specific claim - that it is economically impossible for discrimination to exist, because if it did companies would find competitive advantage by not discriminating until the market reached equilibrium.

                                                                                                                        Framing the question as a binary (“perfect or not perfect”) is not meaningful and thus disingenuous. “Economically impossible” (assuming anyone used those words) is obviously a theoretical upper bound, since economics is dynamic (changes with time and context).

                                                                                                                        The meaningful question is whether market forces (which includes culture) tend to improve the situation. Are we going in the right direction or not?

                                                                                                                        1. 5

                                                                                                                          The claim the article argues against is explicitly a binary one - that the economic disadvantage of discrimination proves that discrimination can’t explain observations about women and minorities in the tech industry. If you agree that the economic effects are anything less than perfect, then you agree that discrimination can explain some of the observations, and you may agree with the article.

                                                                                                                          1. 4

                                                                                                                            I think dl and jmk are talking past each other, because dl’s article is mostly about diagnosis, and jmk is mostly concerned about solution. dl says observed disparity is not wholly explained by economics and it is likely explained by discrimination, and I think jmk agrees? (I agree.)

                                                                                                                            But to jmk (and to me), the important part is what to do about it. Existence of discrimination does not support any governmental intervention, since it is possible all government interventions are harmful. The thing is, I think dl also agrees. The article merely says “We can fix this, if we stop assuming the market will fix it for us”, it does not suggest or support any specific governmental intervention, it doesn’t even suggest or support governmental intervention is necessary.

                                                                                                                            To me, the whole debate is about what “market” means, which is a boring terminological discussion. To dl, large-scale social movement (aka jmk’s “culture”) is not part of the market, but to jmk, it is. So I think both agree about the solution (mostly cultural, not legal), but disagree about whether to call it the market or not.

                                                                                                                  2. -2

                                                                                                                    By enacting more laws, or how exactly?

                                                                                                                    “muh free market” isn’t the perfect solution, but it’s a pretty good solution that’s very easy to implement. Women and minorities can get what they can get; nothing is stopping them. If the boys are bad, then just make a women-only corporation, crush the market with your 20% reduced salary expense, and counter-discriminate against the boys. That will show them….

                                                                                                                    No, it’s not perfectly workable. But it’s really good. Trying to ‘fix’ the ‘injustice’ is like balancing a ping-pong ball on a paddle while blindfolded. There’s no guarantee that if we fix the 20% gap, we aren’t subsidising women for choices that we simply didn’t know about.

                                                                                                                  1. 5

                                                                                                                    Back when architectures were designed for human assembly programmers and not just as compiler targets, having a simple and elegant instruction set was considered a selling point.

                                                                                                                    If you’re interested in learning an assembly language, you’d be hard-pressed to find a better one than m68k.

                                                                                                                    1. 5

                                                                                                                      Back when architectures were designed for human assembly programmers and not just as compiler targets. Having a simple and elegant instruction set was considered a selling point

                                                                                                                      Turns out it also makes it hard to make a fast processor. Mashey believed it wasn’t the number of instructions, but the ergonomic and symmetrical forms that led to things like memory-to-memory instructions, which make it harder to optimize. (Memory decode, dependencies, etc. when it gets broken down into µops…)

                                                                                                                      Ironically, the ugly duckling of CISC, x86, is ugly in ways that mostly don’t matter for performance. IBM System/3x0 is actually pretty clean, and arguably on the borderline of RISC, with clean, mostly fixed instruction forms, and mostly eschewing memory-to-memory instructions. (Arguably, the Model 44 comes pretty close to RISC!) I don’t think it’s a coincidence that x86 and z are around today while the more aggressively assembly-friendly architectures like VAX and 68k died.

                                                                                                                      1. 3

                                                                                                                        It isn’t at all a coincidence; x86 and the Z series are much easier to implement than the 68k or the VAX.
                                                                                                                        John Mashey discusses the difficulties with implementing a high-speed VAX here.

                                                                                                                      2. 2

                                                                                                                        If you’re interested in learning an assembly language, you’d be hard-pressed to find a better one than m68k.

                                                                                                                        Someone who wrote several assemblers thinks MSP430, MIPS, and AVR8 are the cleanest architectures:

                                                                                                                        https://github.com/mikeakohn/naken_asm/issues/60#issuecomment-471514168

                                                                                                                        1. 2

                                                                                                                          While I still love the m68k, See MIPS Run is the best processor architecture book I’ve ever read…

                                                                                                                          1. 2

                                                                                                                            I think that refers to parsing by a machine, and indeed MIPS was designed to be very easy to parse (as were other RISC ISAs), but not to ease of writing the instructions by a human.

                                                                                                                            I’ve found MIPS to be somewhat obnoxious to write, but I realize my experiences refer to privileged code intended to work on multiple machines, so aren’t the typical MIPS experience.

                                                                                                                            1. 2

                                                                                                                              Having worked on a MIPS implementation and the LLVM MIPS back end, I’d agree that MIPS is clean from the perspective of writing an assembler or instruction decoder, as long as we’re talking about MIPS IV and not the newer MIPS32 and MIPS64. That is, however, the only positive thing I can think of to say about the ISA.

                                                                                                                            2. 1

                                                                                                                              I think AVR is a good, modern-day contender that I would recommend to anyone looking to get started with Assembly.

                                                                                                                              1. 1

                                                                                                                                risc-v?

                                                                                                                                1. 4

                                                                                                                                  risc-v?

                                                                                                                                  If you want to get an understanding of a simple close-to-the-metal environment, RISC-V is fine. If you want to write assembly code, it’s painful. The lack of complex addressing modes means that you end up burning registers and doing arithmetic for simple tasks. If you want to do complex things like bitfield manipulation, you either need to write a lot of logic with shifts and masks or you need to use an extension (I think the bitmanip extension is standardised now, but the cores from ETH have their own variants). There are lots of clunky things in RISC-V.

                                                                                                                                  ARM (AArch32 or AArch64) is much nicer to use as an assembly programmer. Both are big instruction sets, but the addressing modes on ARM are really nice to work with (it’s almost as if they, unlike the RISC-V project, followed the RISC I methodology of examining the output from compilers and working out what the common sequences of operations were, before designing an instruction set).

                                                                                                                                  Note that ARM doesn’t call itself a RISC ISA anymore, it calls itself a load-store architecture. This is one of the key points of RISC (memory-register and memory-memory instructions make out-of-order execution difficult), but they’re definitely not a small ISA. They do have a much more efficient encoding than RISC-V (which, in a massive case of premature optimisation, optimised the ISA to be simple to decode in an in-order pipeline).

                                                                                                                                  1. 2

                                                                                                                                    ah, I made the mistake of assuming that RISC-V’s smaller instruction set meant that it was easier to work with.

                                                                                                                            1. 1

                                                                                                                              Can you produce, say, an IPv6 packet as a fully valid BARE message?

                                                                                                                              1. 1

                                                                                                                                The idea is that if the specification is generic enough to fit it, then it might be possible to produce a lot of tooling for existing formats: actual TLV packets, the 9P or SFTP protocols, TLS packets, BER-encoded X.509 certificates…

                                                                                                                                Then, adding another standard that describes the others would not produce 14 std + 1 std == 15 std, but rather 14 std + 1 std => 8 std (1 that describes them all + 7 spare ones that did not fit the generic one added).

                                                                                                                                1. 2

                                                                                                                                  Kaitai Struct is a DSL for describing arbitrary binary structures.
                                                                                                                                  Here is the specification for an IPv6 packet.

                                                                                                                                  1. 2

                                                                                                                                    It’s not a DSL, it’s YAML. Dear god, please write a DSL for this, specifying arbitrary binary data types in YAML is a fate I wouldn’t wish on my worst enemies.

                                                                                                                                    1. 1

                                                                                                                                      The generated code looks quite readable in comparison…

                                                                                                                                          this.srcPort = _io.readU2be();
                                                                                                                                          this.dstPort = _io.readU2be();
                                                                                                                                          this.seqNum = _io.readU4be();
                                                                                                                                          this.ackNum = _io.readU4be();
                                                                                                                                      
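                                                                                                                                      For comparison, the same four fields could also be read by hand with Go’s encoding/binary; this is a hypothetical sketch, not Kaitai output, and tcpHeaderPrefix is an invented name:

                                                                                                                                      ```go
                                                                                                                                      package main

                                                                                                                                      import (
                                                                                                                                      	"bytes"
                                                                                                                                      	"encoding/binary"
                                                                                                                                      	"fmt"
                                                                                                                                      )

                                                                                                                                      // tcpHeaderPrefix mirrors the first four fields the generated
                                                                                                                                      // code reads: all big-endian, fixed width.
                                                                                                                                      type tcpHeaderPrefix struct {
                                                                                                                                      	SrcPort uint16
                                                                                                                                      	DstPort uint16
                                                                                                                                      	SeqNum  uint32
                                                                                                                                      	AckNum  uint32
                                                                                                                                      }

                                                                                                                                      func main() {
                                                                                                                                      	// 12 bytes: ports 80 and 8080, then sequence and ack numbers.
                                                                                                                                      	raw := []byte{0x00, 0x50, 0x1f, 0x90, 0, 0, 0, 1, 0, 0, 0, 2}
                                                                                                                                      	var h tcpHeaderPrefix
                                                                                                                                      	if err := binary.Read(bytes.NewReader(raw), binary.BigEndian, &h); err != nil {
                                                                                                                                      		panic(err)
                                                                                                                                      	}
                                                                                                                                      	fmt.Println(h.SrcPort, h.DstPort, h.SeqNum, h.AckNum) // 80 8080 1 2
                                                                                                                                      }
                                                                                                                                      ```

                                                                                                                                      The hand-written version is about as readable, but the struct, endianness, and field order must be kept in sync manually, which is the bookkeeping a spec-driven generator takes over.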
                                                                                                                                    2. 2

                                                                                                                                      Name an idea, and surprise!, someone did it already. :)

                                                                                                                                    3. 1

                                                                                                                                      No, this use-case is not the intended usage of BARE. But it would be nice to have something which could universally represent any data structure, though such a task would be large indeed. I imagine that, in practice, I could come up with data structures which were unrepresentable at least as fast as you were able to come up with representations of them.

                                                                                                                                      1. 1

                                                                                                                                        I could come up with data structures which were unrepresentable

                                                                                                                                        I trust you on that! Some “throw everything from your bowels at me, and I will parse it” sounds like an over-complex metaformat. It feels better to aim for something like the 20% most common cases.

                                                                                                                                  1. 4

                                                                                                                                    I think where it ends up is right:

                                                                                                                                    In practice you would probably just compile a version that was passed a pointer to the type information, because the type information gives you size, alignment, and pointer information all in one place with only a single argument.

                                                                                                                                    But, just as a curiosity, I think you could do a copy with only a size. The only member besides size that the typedmemmove source accesses is ptrdata, which, though the name sounds super general, only says how far into the object you need to look to be sure you’ve found all the pointers. Using that instead of the object size here seems to be an optimization: if ptrdata is 1, for instance, the runtime can quit worrying about possible pointers in an object after the first word, and if it’s zero it needn’t scan at all. You could write memmove code to conservatively act as if any word of the object might be a pointer, you’re just potentially wasting some effort.

                                                                                                                                    The detailed data about which words of the allocation have pointers/need scanning comes from a GC bitmap that’s set up at allocation time. (You can just use an address to look a word up in this bitmap.) But that means that to allocate you need pointer/(no)scan information to set the bits. If allocating just to copy data you could in theory copy the GC bitmap from source to dest before you copy the data, but you’d still need the type’s alignment to get a properly aligned slot in memory and…yeah, maybe at that point we just pass a type pointer around instead.

                                                                                                                                    This all makes me wonder what choices the team will make about compilation of generics: max speed of compiled code (by compiling as many optimized versions of the code as needed) vs. a dynamic implementation that avoids hurting compile time or binary size (so the resulting machine code looks as if you’d used interfaces). I can see the case for either: maybe these are a specialized tool for max performance for sorts, collections, etc., or maybe they’re mostly there to make source better-checked and clearer. Or maybe we start with the dynamic approach (possibly quicker to implement?) and then tune the generated output over future releases. Haven’t followed the discussions super closely; if someone knows what has been said about this I’m interested.

                                                                                                                                    1. 2

                                                                                                                                      Yeah I wonder if there will be any implementation problems due to the combination of monomorphized generics and a potential explosion of GC bitmaps per type.

                                                                                                                                      I think most of the languages with monomorphized generics, like C++ and Rust, don’t have GC. Although I guess D is an exception. Not sure what they do exactly, but it probably helps that they have their own back end rather than LLVM.

                                                                                                                                      Not sure what C# does either. I think it has more VM support.

                                                                                                                                      1. 2

                                                                                                                                        .NET generics use monomorphization for value types and a shared instantiation for reference types.
                                                                                                                                        The new() constraint is handled with reflection.

                                                                                                                                        1. 1

                                                                                                                                          This might provide useful background on the topic.

                                                                                                                                        2. 2

                                                                                                                                          I believe they were intentionally careful not to specify so that they could experiment & potentially offer multiple compile-time options.

                                                                                                                                          1. 2

                                                                                                                                            Yes, the design document’s implementation section explicitly leaves the question open (and is worth reading).

                                                                                                                                            Curious what they do!

                                                                                                                                          2. 1

                                                                                                                                            Besides reducing code bloat and avoiding the need for a special intermediate representation of compiled-but-unspecialized generics, the dynamic approach has the added benefit (at least from the POV of Go’s goals) that it discourages excessively fine-grained abstractions (e.g., how Arc and Mutex have to be separately applied to get Arc<Mutex<T>> in Rust), because they would have too much runtime overhead.