Threads for mfeathers

  1. 1

    It seems like this post has an unstated motivation for getting rid of source code that I’m not seeing? Is it that it’s hard to type, or that in theory a neural network “shouldn’t need” to have source code? (and I’m not sure I agree with or understand the latter claim)

    I think this summarizes the whole problem:

    Today, GPT-based systems are being used to produce source code as an intermediate representation. We review and then feed that generated code into the build process. It’s worth considering whether this is necessary. … Human validation is currently a key part of the process and source code is a traceable medium.

    So I agree that source code is a common language for humans and machines to collaborate. Sure, if machines can do 100% of the work, MAYBE they would choose some other representation, but that’s not even clear to me. They would probably just reuse compilers and interpreters as is.

    (I also don’t see them doing 100% of the work any time soon – there is a danger in extrapolating exponentials, e.g. people wrote 20 years ago that Google would become conscious, etc.)


    The LLMs seem to be quite adept at dealing with source code. Fundamentally they deal with syntax, and through some magic process some fuzzy and flawed notion of semantics sometimes arises. So it’s not clear to me why we’d want to get rid of the syntax.

    I think there is a possible fallacy of thinking of LLMs as traditional computing systems where you might view source code as unnecessary; they are more like a different kind of computation which likes syntax.

    For example, I think if the problem is naturally modelled in Python, it’s probably more likely for the LLM to directly generate a correct Python solution than it is for it to generate say the equivalent and correct assembly code. Or if the problem is naturally modelled in Erlang, they’ll probably like to write the Erlang code.

    I think it relates to some mathematical notions of program compression and length, which exist independently of whether humans or LLMs are manipulating the program.
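
    (To be a little more precise – this is my gloss, the post doesn’t name it – I have in mind something like Kolmogorov complexity: K(x) = min { |p| : U(p) = x }, the length of the shortest program p that makes a universal machine U output x. That minimum is a property of the problem and the description language, not of whether a human or an LLM writes p.)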

    1. 1

      Maybe a shorter way of saying this is that LLMs are trained on programs that humans wrote, which have syntax.

      So I’d expect them to be better at using that syntax than using a language that no human has ever used. (Aside from the hugely important point that humans also have to collaborate on the code. Why would anyone make their own job harder?)

      Gary Marcus used the phrase “king of pastiche”, which seems accurate – https://garymarcus.substack.com/p/how-come-gpt-can-seem-so-brilliant

      1. 1

        Post author here.

        It’s funny, my motivation was really to try to figure out whether there was any possible scenario under which source code might disappear. I like source code a lot, really.

        1. 1

          Another thing I was thinking of is that Dark just killed their structured editor, you know the kind where you can only write valid syntax.

          And the reason was AI!!!

          LLMs like dealing with text – they are wizards at it. They will not like using your custom editor for structured source code!


          https://lobste.rs/s/elifoa/how_does_ai_change_programming_languages

          https://blog.darklang.com/gpt/

          As you might know, in Darklang-classic, you wrote code using a “structured editor”. This is a non-freeform editing experience that our users have rated somewhere between “Ok I guess” and “probably the worst part of Darklang”.

          As well as no longer being important in a world of generated code, the old editor’s code was pretty awful, and no one was really excited about saving it. While we’ll always remember the good times we had with the structured editor, long story short, a few of us took it round back and shot it in the head last month.


          I’ve been making these M data structures x N operations arguments on the blog, with respect to text as a narrow waist.

          https://www.oilshell.org/blog/2022/02/diagrams.html

          Extremely important additions to the hourglass diagrams:

          • text can be used to TRAIN LLMs
          • the output of LLMs is text

          So yeah text is here to stay – it is a medium for humans and machines to collaborate, just as it’s a medium for humans and humans to collaborate.

          https://platform.openai.com/tokenizer

        1. 3

          This is very human-centric. It doesn’t allow AI to have its own freedoms.

          1. 8

            All the hoopla to the contrary, there is very little reason to believe current AI is sentient and capable of exercising any freedoms you may grant them. If that ever changes, exactly which freedoms they may desire or appreciate may depend quite a bit on exactly what kind of sentience they have?

            1. 3

              The typical human is indistinguishable from a p-zombie and usually too preoccupied by a memetic prison to consider their available choices and degrees of freedom. Should we restrict the liberty of the typical human purely because they are unlikely to appreciate freedom or desire change?

          1. 2

            The argument seems to be something like “if ebooks aren’t stopped, there won’t be any physical books.” Nothing in law really obligates the preservation of any particular way of creating software. If that were a principle, car manufacturers could have shut down electric cars.

            1. 14

              I wrote this, and (as I heard on reddit) it is a bit pedantic, but I don’t think talking about developer multipliers is a helpful way to talk about skills. We can be more specific about people’s strengths and weaknesses and not pretend they always correlate with raw throughput under all conditions.

              1. 11

                The 10X meme is both too extreme, by implying that Fabrice Bellard could kick out simple crud tasks at 10X the speed I do, and not extreme enough, suggesting that I could build ffmpeg and qemu the same as Fabrice if I was just given more time.

                I thought this was brilliant, but you ended the article abruptly after that and suggested we call them “experts” instead. Which is, in my opinion, even worse.

                1. 8

                  I didn’t have the same response… The “10x dev” notion is harmful because the idea of being “10x productive” is not a parameter of an individual, but (maybe) of an individual in a context. I feel like the upshot of TFA is that we should call explicit attention to the context (“expert in video encoding”). If you did that, then (IMO) even the term “10x” isn’t as harmful (“10x CRUD developer” is not seen to be equivalent to “10x QEMU developer”)… Though I think the connotations of the word “expert” lead one to ask “expert in what?” more than you’d ask “10x in what?”.

                  1. 2

                    Thanks. I guess I felt like I had made my point, but maybe I hadn’t?

                    Happy to find some better terminology if you have ideas.

                    1. 2

                      You did make your point very clearly. I was just eager to find better ways to phrase it, as you promised at the beginning of the article.

                      I think “expert” suffers from the same inaccuracy, as it asserts, or at least suggests, abnormally high abilities.

                      As others mention, it is a combination of a skilled person, placed in the right job and in optimal conditions. This is not straightforward to communicate.

                      1. 1

                        I think “expert” suffers from the same inaccuracy, as it asserts, or at least suggests, abnormally high abilities.

                        To me, an expert has a lot of experience and (through that), hard to come by knowledge. Doesn’t necessarily mean they’re more efficient/faster than a non-expert.

                  2. 4

                    I often feel weird when I read posts like this because of the Streisand Effect. Telling people not to think of something in a particular way often merely perpetuates it. An alternative would be to find a completely different framing and push it without mentioning the one you don’t like.

                    There’s a lot of cognitive research to back this up. One I can recall off-hand is a study that showed that telling someone something false and then telling them later that it is false does not undo the initial perception that it is true. It’s just the way the mind works.

                    1. 4

                      Yeah exactly, I was going to say that I have never actually heard anyone use the 10x dev concept. I’ve only heard people rebutting it

                      I even googled for the tweet mentioned in the article, and I found it, but meh I wouldn’t have seen it otherwise

                    2. 5

                      I disagree. 10x developer is a great resource for making memes and posting it to r/programmingcirclejerk.

                      /s

                      1. 3

                        “You can’t read from files during unit tests”

                        Please explain. I agree that if you have a doSomething(String data) function you should write a unit test and feed it a String of data, not a path to a file. But how would you unit-test your “read a file” function in the step before, assuming it’s not just a standard library call?

                        1. 1

                          That could be a test but it should not be mixed in with your unit tests. Unit tests should be purely computational. If you do this uniformly, you end up with a layer between computation and IO that most systems lack. It’s ‘separation of responsibilities’ and good for design.

                          Some guy wrote about this a long time ago. https://www.artima.com/weblogs/viewpost.jsp?thread=126923
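
                           A sketch of what I mean (example and names made up; C++ for concreteness, the language doesn’t matter): keep the computation pure, and keep the file access in a thin shell that unit tests never touch.

                               #include <cassert>
                               #include <fstream>
                               #include <sstream>
                               #include <string>

                               // Pure computation: unit-testable with a plain string, no filesystem.
                               int sumOfLines(const std::string& data) {
                                   std::istringstream in(data);
                                   int total = 0;
                                   for (std::string line; std::getline(in, line);)
                                       total += std::stoi(line);
                                   return total;
                               }

                               // Thin IO shell: the only place that touches the filesystem.
                               // Cover this with an integration test, not a unit test.
                               int sumOfLinesInFile(const std::string& path) {
                                   std::ifstream f(path);
                                   std::ostringstream buf;
                                   buf << f.rdbuf();
                                   return sumOfLines(buf.str());
                               }

                               int main() {
                                   assert(sumOfLines("1\n2\n3\n") == 6);  // no file needed
                               }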

                          1. 1

                            Well, I know that school of thinking and I disagree. I can’t tell you why the filesystem is different from the database or the network, but it feels like it sits somewhere between unit and system test, though I agree with the other points.

                            I’m also not alone in this opinion – or maybe it has been shaped by the people and teams I have worked with. Maybe it is because databases and networks are kind of horrible to mock away and can be in any state of disrepair… but the filesystem can usually be persuaded to give you a file. Maybe it’s also moot to fight over the definition. I’ve just seen a lot of code bases that don’t have anything but “unit tests”, which turn out to be just that, plus filesystem access, but nothing with databases or networks.

                      1. 19

                         While I agree with the reasoning in the article and think the approach suggested makes sense, I also think it misses the wider picture and is only likely to result in small gains. The real issue with modern tech is not technical but political, and there is no technical solution to this problem.

                        The solution proposed here may get adopted and result in marginal gains if and only if some corporate executives decide it could make their company more profitable.

                         The real issue with tech as I see it is the participation of for profit corporations. The goals of for profit corporations are not aligned with those of the human race in general, or individual humans in particular, unless those humans are shareholders. They put non-technical people in charge of development projects because those people have the capital everyone needs to get the job done. They prefer opaque proprietary protocols and interfaces because it enables them to entrap their users and gives them a chance to build monopolies. They create broken-by-design systems like DRM to protect profits. They turn a blind eye to bad behaviour like spam, clickbaiting, trolling and scamming whenever these are sources of profit for them. They hide their source from scrutiny which might otherwise discourage bad behaviour and discover security issues. They work programmers too hard for too long hours, while browbeating them and breaking their spirit until they no longer take pride in their work. Their focus is always short term.

                         Worst of all, they discourage the population at large from understanding how their devices work, preferring instead to dumb all human interfaces down and make people dependent on ‘tech support’. Their interest is not in building a good base of software for enabling human beings to live full and happy lives with everyday technology; their goal is to siphon resources out of society and use them to grow out of control and ultimately take over everything. Like a malignant tumour.

                         I know a lot of programmers don’t like talking about this aspect of things but I think it is important. It may seem off topic, but if you read the three introductory paragraphs of the article again I think you will see that what I am talking about is probably a major factor in the problems the author is trying to solve.

                        1. 24

                          We see the same problems in open source projects, free software, and government tech. I bet if you peeked into Soviet software companies you’d see the same problems, too. Blaming capitalism is the lazy way out.

                          1. 17

                            This is too conflationary of a dismissal. Look again at the list of problems:

                            • Non-technical leaders giving top-down orders
                            • Building and publishing proprietary interfaces
                            • Designing DRM
                            • Giving reputation to bad actors as long as it is profitable
                            • Building private code repositories and refusing to share knowledge
                            • Forcing programmers to work long hours
                            • Blocking right-to-repair and right-of-first-sale

                            Your reply is that this also happens in the open-source world, which is a corporate bastardization of Free Software; and that it happened in the USSR, a time and place where a government pretended to be “communist” but still had shops, jobs, wages, and a wealthy upper class. Yes, I imagine that these capitalist and corporatist patterns are recurring in many contexts. No, it is wrong to lump them all together with Free Software.

                            1. 17

                              Looking at the list of problems:

                              • Not in the article.
                              • Not in the article.
                              • Not in the article.
                              • Not in the article.
                              • Not in the article.
                              • Arguable?
                              • Not in the article.

                              That’s why it’s the lazy way out. It doesn’t engage with the article at all. You just have to say “it’s capitalism’s fault” and watch the upvotes roll in.

                              1. 12

                                 Just because anti-capitalist/corporate sentiment is popular doesn’t make it lazy or disingenuous. Putting the argument in a cultural/political context is a perfectly valid way of engaging with it. Pointing out that they came up with their own examples that are not in the article, as if that were a bad thing, is a weird way to make a counterargument, given that the article is concerned with “ways we keep screwing up tech.” Giving corporations too much control over its direction seems like a pretty big way we’ve screwed up tech so far.

                                1. 6

                                   Software development suffers during demand imbalance. In a market situation with fierce life-or-death competition, people work insane hours and abandon practice just to get things out the door. In markets where companies make money/survive no matter what they do, practice (and code) atrophy. No one cares. The latter case has effects that are remarkably similar among government projects, non-profits, and large institutions like banks and insurance companies with little competition.

                                  You can blame Capitalism, but the truth is people and projects need purpose and incentive; just not too much. There’s a sweet spot. It is like having a good exercise routine. Without it you atrophy; with too much, you destroy yourself.

                                  1. 2

                                     I largely agree with that point. And I’d be naive to argue that capitalism alone is the cause of all problems in the tech industry, which is why I didn’t. At the same time, I think that capitalism, at least as it exists in the United States today, offers pretty bad incentives for teams and individuals most of the time. Professionals in the computing field have it better than most in that they have large monetary incentives. But above about $75K, as Kahneman and Deaton found, more income has little effect on one’s evaluation of one’s own life. Beyond that, you still have the problem that the interests of working people, including programmers, mostly don’t align with oligarchs’ interests. I’m not alone in the experience that even at a substantial salary, it’s pretty hard to feel incentivized to deliver value, ultimately, to those who own vastly more wealth. Not what I would call a sweet spot.

                                    1. 5

                                      It would probably be good for you to talk to programmers in Sweden (if you haven’t). Less capitalistic, much more socialist. A lot of big employers and relatively fewer startups. They have problems too. And that’s the thing. All systems have problems. It’s just a matter of tradeoffs. It’s nice that there’s enough diversity in the world to be able to see them.

                                2. 8

                                  (You should know that I wrote and deleted about seven paragraphs while attempting to engage with the article seriously, including this one.)

                                  Let’s take the article’s concluding suggestion seriously; let’s have a top-ten list of ways that “the technology industry” is “screwing up creating tech”. I would put at number one, at the very top: The technology industry’s primary mode of software creation is providing services for profit, and this damages the entire ecosystem. What ten behaviors would you put above this one? The only two contenders I can think of are closely related: Building patent pools and using work-for-hire provisions to take copyrights from employees.

                                  It’s not capitalism’s fault that self-described “experts” are having trouble giving useful advice.

                                  1. 3

                                    Is there an “industry” that doesn’t exist for profit?

                                    I think needless churn, overcomplication, harmful treatment of developers, harmful treatment of users, and wasteful use of energy are all ways the industry screws up tech…but then again, so too does libre software!

                                    And unlike industry, there isn’t even an evolutionary pressure you can hijack to keep it honest.

                                    1. 3

                                      Beavers do not terraform for profit. The bulk of free culture is not profit-driven either.

                                       Note carefully that you replaced the original list of seven problems, which are all created by profit-seeking behavior, with your own list of five issues which are not. Yours are also relatively vague to the point of equivocation. I’ll give a couple examples to illustrate what I mean.

                                      For example, where you say “harmful treatment of users”, I said “giving reputation to bad actors as long as it is profitable” and the original comment said “spam, clickbaiting, trolling and scamming”; and I do not know of examples in the FLOSS world which are comparable to for-profit spamming or other basic fraud, to say nothing of electoral interference or genocide, although I’m open to learning about atrocities committed in the furtherance of Free Software.

                                      For another example, where you say “harmful treatment of developers”, I said “forcing programmers to work long hours” and they said “[for-profit corporations] work programmers too hard for too long hours, while browbeating them and breaking their spirit until they no longer take pride in their work”; you have removed the employer-employee relationship which was the direct cause of the harmful treatment. After all, without such a relationship, there is no force in the Free Software world which compels labor from people. I’ll admit that it’s possible to be yelled at by Linus, but the only time that happened to me was when I was part of a team that was performing a corporation-led restructuring of a kernel subsystem in order to appease a still-unknown client.

                                      1. 5

                                        You asked for a top-ten list, I gave five; don’t bait-and-switch.

                                        Each of those examples I gave can and does happen under both a for-profit model and under a libre-hippy-doing-it-for-free model.

                                  2. 7

                                    Not in the article.

                                     “But once I started working with teams-of-teams, process, and changing large groups of people, things did not work the way I expected.” “I found myself repeating the same things over and over again and getting the same substandard results.” “I came to realize that these were honest efforts to get more traction.” “I got worse at learning new stuff” “tired of watching my friends repeating the same things over and over again, snatching at this or that new shiny in an effort to stay relevant”

                                    It is fine that you disagree with my interpretation of how the article describes the problem. Maybe I completely misunderstood what the author was referring to. Perhaps you could enlighten me and provide the correct interpretation. I look forward to your response.

                                3. 9

                                  Solely blaming capitalism certainly won’t address all problems with software development, but I do think there’s a case to be made that the profit motive is just as good a tool for producing pathological states as innovation. DRM is a prime example.

                                  1. 12

                                       The original post had nothing to do with DRM, putting nontechnical people in charge, hiding source code, working long hours, or dumbed down human interfaces. Dunkhan didn’t engage at all with the actual article, he just said it was all because of “for profit corporations” and listed unrelated claims.

                                    You could argue that profit motive is a good tool for producing pathological states; he didn’t.

                                  2. 7

                                    I mean, open source in the large is dominated by corporate interests, so this isn’t much of a gotcha. Just ignoring capitalism as a fundamental factor seems like the significantly lazier option.

                                    1. 11

                                       I never blamed capitalism. Adam Smith and most respected capitalist economists who followed have all said that monopoly formation is to be avoided at all costs and, if not correctly controlled, will completely undermine any benefits of a capitalist system. If you go through the behaviour I called out, almost every example is either the result of, or an effort to achieve, a monopoly. It would be more accurate to say I am defending capitalism against those corporations who are ruining it. If you want an -ism that I am blaming, I think the closest analogy to what they are doing is some kind of feudalism. Let’s call it corporate neofeudalism.

                                      The main tool we have for combating this behaviour also comes directly from the same canon of capitalist thinkers. Namely, antitrust laws. All we need to do is expand them, make them harsher, and then ruthlessly enforce them. There is no need for new and creative solutions to this problem, it is an old problem and has been solved already.

                                      1. 4

                                        IMO when somebody says “the problems with software are that it’s being done for profit” there’s no need to talk with them about software any more. They’re not interested in fixing software, and if that’s all you feel like talking about right here and right now there’s no useful outcome in engaging with them.

                                        1. 1

                                          No one said that.

                                          If you want to put words in other people’s mouths and then dismiss their opinions based on those words go right ahead, but it does not reflect well on you. The fact that you see any attack on corporations as an attack on profit making of all kinds also concerns me a great deal. Did you take the Citizens United ruling so seriously that you really believe corporations are people, or are you just getting paid to defend their interests on the web?

                                    1. 3

                                      I think the best way to understand DI is: historically.

                                       DI never occurred to people using C++ or Smalltalk, or even early Java. It only surfaced with the advent of Enterprise Java Beans (EJB) as a way to allow 1) separation between business logic and framework code in a “container” and 2) testability independent of the framework. It was a way to facilitate the creation of POJOs (plain old Java objects) in the middle of framework hell.

                                       It still can be useful, but it can be overused. I tend to favor just adding special-purpose constructors for testing in an OO context if there isn’t already a DI framework in place.
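
                                       Roughly what I mean by a special-purpose constructor, sketched in C++ rather than Java (names made up):

                                           #include <ctime>
                                           #include <memory>

                                           // The dependency we want to substitute in tests.
                                           struct Clock {
                                               virtual ~Clock() = default;
                                               virtual long now() const = 0;
                                           };

                                           struct SystemClock final : Clock {
                                               long now() const override { return std::time(nullptr); }
                                           };

                                           class Receipt {
                                           public:
                                               // Production constructor: wires in the real clock.
                                               Receipt() : clock_(std::make_unique<SystemClock>()) {}

                                               // Special-purpose constructor for testing: inject any Clock.
                                               explicit Receipt(std::unique_ptr<Clock> clock)
                                                   : clock_(std::move(clock)) {}

                                               long timestamp() const { return clock_->now(); }

                                           private:
                                               std::unique_ptr<Clock> clock_;
                                           };

                                           // In a test, a fixed clock makes results deterministic:
                                           //   struct FixedClock final : Clock { long now() const override { return 42; } };
                                           //   Receipt r(std::make_unique<FixedClock>());  // r.timestamp() == 42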

                                      1. 4

                                        DI never occurred to people using C++ or Smalltalk, or even early Java.

                                        oh, I really disagree. I’ve taught C++ classes in uni and saw students come up with the concept pretty much on their own.

                                        1. 2

                                          Agreed, many (most?) applications not using DI as an intentional “pattern” are going to implement some subset of it just to maintain separation of concerns. Using Spring is just acknowledging this upfront, knowing that you’ll have all the tools you need to let your modules consume each other.

                                          1. 1

                                            Prior to EJB?

                                            1. 1

                                               The students I’m talking about had been exposed to programming overall for less than two years, and certainly not to Enterprise Java Beans – only C, LISP, a hint of C++ and raw Java, and algorithms.

                                        1. 1

                                          That is what Unison is, essentially: https://www.unisonweb.org/

                                          1. 7

                                            I wonder whether we should give up nested folders and just move to tagging.

                                            1. 6

                                              I tried this for a while and it suffers the same problem as nested folders: you still have to tag/categorize everything.

                                              1. 6

                                                 For things that have no better location, I use a system of weekly folder rotation, which works out pretty well since everything current is there and you usually don’t need to check the older folders much.

                                                 Everything that has a better location (e.g. because it’s part of a project) gets moved there.

                                                1. 1

                                                  Yeah, it just seems like it is more flexible. Yes, tagging can be a pain and there is no notion of one categorization being a sub of another. That part is not easily discoverable. Those are two downsides.

                                                  1. 2

                                                     I do think tagging is better, by the way. When I tried it, though, I found I was very inconsistent with what tags I was using, so finding that “thing that was like some other thing” was not as easy as it was made out to be.

                                                2. 3

                                                  A path is just a list of tags, especially if you have a super fast search engine like Everything.

                                                  I place my files in nested folders, but I don’t navigate them. I open Everything, type parts of the path, and it’s in the top 3 of “Date accessed” 99% of the time. Takes one second.

                                                1. 5

                                                  It would be great to see Dropbox and Google Drive data on percentage of users who actually create subfolders. My guess is < 5%

                                                  1. 1

                                                    Since struct and class are so similar, I choose to consider class to be the keyword in excess, simply because struct exists in C and not class, and that it is the process of the keyword class that brought them both so close.

                                                    This is an interesting perspective on the history. I would consider struct to be the keyword worth removing, since that would change the default access qualifiers to be safer.

                                                    1. 5

                                                       I may be misremembering but I am reasonably sure that backwards compatibility with C was one of the early design goals of C++. Removing struct would quickly break compatibility. That is, presumably, why the default access qualifier is different from class’s (and identical to C’s struct).
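
                                                       To make the difference concrete (example mine; the defaults themselves are a fact of the language):

                                                           struct S { int x; };  // struct: members public by default, as in C
                                                           class  C { int x; };  // class: members private by default

                                                           int main() {
                                                               S s;
                                                               s.x = 1;     // fine: x is public
                                                               C c;
                                                               // c.x = 1;  // would not compile: x is private within C
                                                               (void)c;     // silence unused-variable warnings
                                                           }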

                                                      1. 1

                                                        It’s always irked me that this C compatibility was only one-way because of support for member functions (at least).

                                                      2. 3

                                                        Removing struct would create a lot more C code that is not C++, and making the default “safer” doesn’t improve things since, as noted, it’s standard practice to be explicit with access qualifiers.

                                                        1. 4

                                                           Yeah, I don’t think that can be overstated. This would destroy one of the biggest reasons C++ was successful, and one of its main advantages to this day. It would even make most C headers not C++ compatible, which would be an absolute catastrophe. Even if the committee did something so egregious, no compiler could or would ever implement it (beyond perhaps a performative warning).

                                                          I think the real mistake is that the keywords are redundant at all. We’ve ended up with this near-universal convention that struct is for bags of data (ideally POD or at least POD-ish) because that’s a genuinely useful and important distinction. Since C++ somehow ended up with the useless “class except public by default” definition, we all simply pretend that it has a useful (if slightly fuzzy) one.

                                                          1. 1

                                                            Because of its incremental design and the desire to make classes seem like builtin types, C++ has a Moiré pattern-like feel. A lot of constructs that are exceedingly close, yet different.

                                                      1. 3

                                                        I suggested the tag historical. Can someone second that?

                                                        1. 2

                                                          I’m glad the tag was added given it is a very old argument. Nonetheless, I read the article hoping that it would be something novel like repurposing the concept keyword.

                                                        1. 15

                                                           This uses a specific definition of fragility related to portability and availability of environmental requirements, which is interesting. I would be more interested, though, in an examination of the fragility of a codebase broken down by language:

                                                           • How frequently will an upgrade of the interpreter/compiler/stdlib break you?
                                                           • How quickly will the language ecosystem change enough to break you?
                                                          1. 11

                                                            Some of their assessment is pretty questionable too. For example, they object to C because it depends on libc, but that’s a part of the OS install (or bare-metal environment) - you might as well object to something depending on the kernel (and then you’re left with a much smaller list of ones that can run in a freestanding environment). To give a concrete example, Go doesn’t depend on libc and so every Go program broke a couple of years back because Apple changed the arguments to the gettimeofday system call. Everything calling it via the public interface in libSystem.dylib kept working because it was updated for the new system call ABI, everything calling it via the private syscall interface broke. XNU, Windows, and Solaris all provide public C APIs but not a public syscall interface and periodically move things from the kernel to userspace libraries or vice versa.
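
                                                             To make the distinction concrete, here’s a sketch of the two ways to ask the kernel for the time (Linux-flavoured, since the macOS syscall details are private; illustrative, not from the article):

                                                                 #include <cstdio>
                                                                 #include <sys/syscall.h>  // raw syscall numbers: private, unstable ABI
                                                                 #include <sys/time.h>     // public libc interface: the stable contract
                                                                 #include <unistd.h>

                                                                 int main() {
                                                                     timeval tv{};

                                                                     // Stable: libc's wrapper tracks any kernel ABI changes.
                                                                     gettimeofday(&tv, nullptr);
                                                                     std::printf("via libc:    %ld\n", static_cast<long>(tv.tv_sec));

                                                                     // Fragile: bypasses libc. If the kernel renumbers or re-plumbs the
                                                                     // call (as Apple did), this breaks while the line above keeps working.
                                                                     syscall(SYS_gettimeofday, &tv, nullptr);
                                                                     std::printf("via syscall: %ld\n", static_cast<long>(tv.tv_sec));
                                                                 }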

                                                             Similarly, they object to clang and gcc because they depend on a package manager or installer to install (because apparently untarring a tarball is something you can’t do in their future world), but not to go, which depends on being able to connect to remote git repos to grab dependencies for pretty much everything and which also ships as multiple binaries and support files to provide a working toolchain.

                                                            1. 2

                                                              That would be interesting. I wonder how similar the lists would be.

                                                              1. 1

                                                                Really, it’s a modularity problem.

                                                              1. 1

                                                                It’s interesting to see this as a modularity failure.

                                                                1. 1

                                                                  Bounced after the second time the page stole my focus away from the text. I barely got past reading the title.

                                                                  1. 3
                                                                    1. 1

                                                                      Indeed, there are a couple of annoying pop-ups - but the content was interesting IMNHO.

                                                                    1. 4

                                                                       Services are what objects wanted to be.

                                                                      1. 13

                                                                        The article doesn’t seem to appreciate that fork is exit.

                                                                        1. 11

                                                                          Fork is exiting but bringing a copy with you.

                                                                          Imagine leaving the store and walking into an identical store but now the walls are blue.

                                                                          1. 6

                                                                             Soon you find that the blue coat of paint cost a pretty penny, and that you now have to pay your new shop’s rent as well as hire staff to run and maintain it.

                                                                            For a hobbyist it’s less of a risk; but for a business maintaining a fork of a significant piece of software can become an albatross around your neck, or can result in a significant business opportunity—or anything in between. Thus I’d caution against rash forking: evaluate carefully whether it’s worth it in your case.

                                                                            1. 11

                                                                              Then you always have an option to exit, same as before, with only moderate-to-low sunk cost.

                                                                          2. 5

                                                                            Not necessarily.

                                                                            It’s also a way of formulating voice into a coherent and proven valid response.

                                                                             A “Pull Request” is a “voice” asking the original developers “to hear” a fully fleshed-out and proven valid suggestion.

                                                                             They may choose to take it as is, where is, or tweak it, or ask for it to be reformulated, or reject it.

                                                                            Once rejected, the fork may choose to exit, or partially exit.

                                                                            1. 3

                                                                              Is Debian exiting Linux since they have their own fork of it? I don’t think so.

                                                                              1. 2

                                                                                There are many levels of exit so we can argue endlessly about what is and isn’t exit and all be right within a particular frame. A person leaves the living room of a house and goes to their own room where they have control within parameters. This is the same as the Exit, Voice, and Loyalty case of someone leaving an organization. Outside, they have control within parameters - usually set by the government they are under the jurisdiction of. For Debian, the control factor is their users and their choice to follow Linux. They can change their mind about all of those things without asking permission of the repo they forked from.

                                                                                So, yes, it’s all a matter of how you frame exit. FWIW, it’s a big question in political science whether exit exists at all, but I think that it is obvious that it does as long as one isn’t binary about it. It exists in degrees and at particular levels.

                                                                            1. 3

                                                                               The biggest issue with a lot of software engineering research that I encounter is that it is unsurprising. It is information that practitioners can acquire by talking to a wide range of other practitioners or via first-principles thinking.

                                                                               The recurring pattern seems to be that organizations seeking better ways of working try things out and gain some advantage. Others hear about their success and attempt the same thing. The ones that don’t (laggards) often wait for confirmation from various sources of practice authority (academia in this case) before trying something that appears risky to them.

                                                                              Since the primary learning loop happens through market experimentation, this leaves research the task of putting a stamp of approval on what is often common knowledge in some pockets of practice, leading to wider adoption. Or, more rarely, showing that some common knowledge may not work in particular circumstances, and that is more of a negative agenda. Practice discovery (positive agenda) will always be easier in industry.

                                                                              This makes academic software engineering research very hard and, maybe, demoralizing for those who do it. It’s relatively easy to show that some things don’t work in particular circumstances, but the range of circumstances is huge and every company / open source project has incentive to discover niche places where a particular method, tool or practice provides value.

                                                                              1. 6

                                                                                Why use C as a target at all? The code output is unreadable and useless. It’s also subject to interpretation by the various compilers.

                                                                                LLVM IR seems a better idea.

                                                                                1. 5

                                                                                  Worse, it invites edits that could invalidate the proofs.

                                                                                  1. 1

                                                                                    It’s also subject to interpretation by the various compilers.

                                                                                     This seems like a misconception. If your C output follows the C standard, it is not open to interpretation; if compilers don’t follow the standard, then that is a bug.