Threads for dpk

    1. 2

      Trying to get dynamic-wind implemented for scheme-rs, which is the last control feature I need to implement before it’s just banging out features.

      Going on a trip tho, so not sure how far I’ll be able to get. It’s an annoying feature.

      1. 1

        For a basic ‘get it working’ implementation, the definition in the denotational semantics section of the report can be translated fairly easily into running code. See the definition in Chibi for an example.

        A clever/more efficient implementation can be tricky and depends on your implementation of procedure call and return.

        1. 1

          Thanks for the resource! I had not seen this file.

          Chez Scheme also lists an example implementation. I cannot use that implementation, as it is not properly thread safe as far as I can tell. I do not like implementations that use global variables; even thread-local ones are not robust enough, since scheme-rs functions can move between threads.

          I’m doing the same thing as I do for exception handlers - pass an extra parameter that contains the current dynamic extent. Then, when calling the escape procedure, we can compare dynamic extents and determine which functions need to be called:
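
          Very roughly, the shape of that comparison is sketched below. To be clear, this is an illustrative sketch rather than scheme-rs’s actual code: the types and names are invented, it uses Rc and plain closures where a real multi-threaded implementation would need Arc and Send/Sync bounds, and it glosses over updating the current extent while the thunks run.

          use std::rc::Rc;

          // Hypothetical representation: a dynamic extent is a parent-linked chain of
          // dynamic-wind frames.
          struct Wind {
              parent: Option<Rc<Wind>>,
              before: Box<dyn Fn()>,
              after: Box<dyn Fn()>,
          }

          type Extent = Option<Rc<Wind>>;

          fn depth(e: &Extent) -> usize {
              let mut d = 0;
              let mut cur = e.clone();
              while let Some(w) = cur {
                  d += 1;
                  cur = w.parent.clone();
              }
              d
          }

          fn same(a: &Extent, b: &Extent) -> bool {
              match (a, b) {
                  (Some(x), Some(y)) => Rc::ptr_eq(x, y),
                  (None, None) => true,
                  _ => false,
              }
          }

          // Called when an escape procedure captured in extent `to` is invoked while the
          // current extent is `from`: run the `after` thunks being exited (innermost
          // first), then the `before` thunks being re-entered (outermost first).
          fn rewind(mut from: Extent, mut to: Extent) {
              let (mut df, mut dt) = (depth(&from), depth(&to));
              let mut entering = Vec::new();
              while df > dt {
                  let w = from.unwrap();
                  (w.after)();
                  from = w.parent.clone();
                  df -= 1;
              }
              while dt > df {
                  let w = to.unwrap();
                  entering.push(w.clone());
                  to = w.parent.clone();
                  dt -= 1;
              }
              while !same(&from, &to) {
                  let w = from.unwrap();
                  (w.after)();
                  from = w.parent.clone();
                  let w = to.unwrap();
                  entering.push(w.clone());
                  to = w.parent.clone();
              }
              for w in entering.into_iter().rev() {
                  (w.before)();
              }
          }

          fn main() {
              // Tiny demo: jump from inside one wind frame back out to the empty extent.
              let a = Rc::new(Wind {
                  parent: None,
                  before: Box::new(|| println!("enter a")),
                  after: Box::new(|| println!("leave a")),
              });
              rewind(Some(a), None); // prints "leave a"
          }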

      2. 11

        I know a couple of programs which continue to use a similar trick in the 21st century. It’s quite neat.

        In the Xmonad window manager, your configuration is written in Haskell and your configuration file is actually compiled to be the effective entry point of the window manager program that runs. Since Haskell is a static language, there’s no way it could implement live-reload of configuration written in Haskell itself without restarting the process, so here’s what it does: it shells out to GHC to compile your configuration file, then it serializes all the current window state and execs the newly-compiled configuration file binary (which itself restarts the window manager with your new configuration), which then reloads all the window state from the serialized information saved by the pre-exec window manager and then continues talking to the X server over the same file descriptor as if nothing had happened. It’s comparatively easy to serialize and deserialize an entire program state in Haskell, because everything is encapsulated in a tree of immutable objects!

        A similar trick seems to be used to allow the Mac OS terminal emulator program iTerm2 to upgrade without losing any live terminal state, but I haven’t looked at the source code to confirm exactly how it works. I assume it’s very similarly implemented.
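
        The general shape of the trick is easy to sketch in any language with exec. What follows is only a toy Rust illustration, not Xmonad’s or iTerm2’s actual code (the state file path and format are invented, and it is Unix-only): the points to notice are that the serialized state outlives the process image, and that file descriptors not marked close-on-exec survive the exec call, which is how a connection to a display server can be kept open across the restart.

        use std::os::unix::process::CommandExt; // Unix-only: exec() replaces the process image
        use std::{env, fs, process::Command};

        fn main() {
            // On startup, pick up any state left behind by the previous incarnation.
            let state = fs::read_to_string("/tmp/wm-state").unwrap_or_default();
            println!("restored state: {state:?}");

            // ... the normal event loop would run here ...

            // On a "restart" request (here: only on the first run, so the demo terminates):
            // persist the state, then replace ourselves with the possibly-recompiled binary.
            if state.is_empty() {
                fs::write("/tmp/wm-state", "window list, focus, layout ...").unwrap();
                let err = Command::new(env::current_exe().unwrap()).exec();
                eprintln!("exec failed: {err}"); // exec only returns on failure
            }
        }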

        1. 10

          Relevant recent post by @fanf with Lobsters discussion: https://lobste.rs/s/esmju4/against_tmp

          1. 9

            That post also had https://systemd.io/TEMPORARY_DIRECTORIES/ merged into it which has the best advice at the top – use /tmp for small files, /var/tmp for larger files and $TMPDIR instead of either if it is set.

            The article here suggests using /tmp for downloads, which will work on some systems, but on other systems /tmp might be a tmpfs and you’re downloading into memory/swap.

          2. 14

            It’s worth noting for historical context that it seems unlikely that Norvig wrote his solver in order to show Jeffries up, or something. According to my memory, at the time there was something of a sudoku craze, not only among programmers but more popularly: a few years before, hardly anyone in the West had ever heard of sudoku, and then suddenly it burst onto the scene and every newspaper was printing sudokus, sudoku puzzle books were flying off the shelves, etc. Predictably, a lot of programmers looked at this simple pattern puzzle and thought ‘Hey, I could write a program to do that’ and did. In hindsight it feels like every programming blog I followed around 2005/2006 had a ‘look at my cool sudoku solving program!’ entry at some point. The fact that Norvig’s effortless-looking solution ended up being contrasted with Jeffries’s incompetent pushing-code-around-on-the-plate is likely just because that was the most extreme contrast in a large field of solvers of various quality. The fact that the contrast also flattered the prejudices of Lisp and Lisp-aligned hackers must also have helped.

            1. 2

              To kickstart some conversation, here are some further ideas in defense of gotos:

              • continuations are like functional gotos and permit the same spaghetti etc. (but prevent shared memory issues)
                • handling exceptions with gotos is cool: goto recover_from_error
              • gotos were simply renamed break (but limited to the current context/scope, limiting shared memory)
              • gotos lowered the barrier in Basic, helping beginners focus on logic and syntax
              • Dijkstra was attacking unstructured programming (before loops etc. were popular) not specifically the goto
              • gotos help implement state machines / looped switch statements resemble gotos (see the sketch at the end of this comment)

              Though of course we love structured programming, fitting our code structure to our data (and not wallowing in the mire of implementation details).
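
              On the state-machine point, the looped-switch shape is visible even in a language with no goto at all (Rust, here). This is only an illustrative sketch, with invented states and input, where assigning the next state plays the role of the jump:

              // Each match arm is in effect a labelled block; `state = ...` is the
              // structured stand-in for `goto label`.
              enum State { Start, Reading, RecoverFromError, Done }

              fn run(input: &str) {
                  let mut state = State::Start;
                  let mut chars = input.chars();
                  loop {
                      state = match state {
                          State::Start => State::Reading,
                          State::Reading => match chars.next() {
                              Some(c) if c.is_ascii_digit() => State::Reading,
                              Some(_) => State::RecoverFromError, // cf. `goto recover_from_error`
                              None => State::Done,
                          },
                          State::RecoverFromError => {
                              eprintln!("unexpected character, skipping the rest");
                              State::Done
                          }
                          State::Done => break,
                      };
                  }
              }

              fn main() {
                  run("123a5");
              }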

              1. 5

                I think the first two defenses are great examples of the core problem with goto: it’s too unrestricted in what it can do. Continuations and break are significantly more restricted than the original goto, so they’re not harmful!

                1. 3

                  This was, in fact, Dijkstra’s actual point:

                  The go to statement as it stands [emphasis mine –dpk] is just too primitive; it is too much an invitation to make a mess of one’s program. One can regard and appreciate the clauses considered as bridling its use. [Here Dijkstra seems to be talking about using ‘go to’ with some discipline to apply it only in particular situations; but it’s also possible to interpret as referring to alternative control flow constructs beyond the theoretical straitjacket of what the Structured Program Theorem allows, such as ‘break’. –dpk] I do not claim that the clauses mentioned are exhaustive in the sense that they will satisfy all needs, but whatever clauses are suggested (e.g. abortion clauses [‘break’ in modern terms –dpk]) they should satisfy the requirement that a programmer independent coordinate system can be maintained to describe the process in a helpful and manageable way.

                2. 1

                  re unstructured programming: this was even a time before the stack frame was a standard concept that everyone assumes will always be present.

                  Even Minecraft Redstone computers are including dedicated stack hardware because it’s just assumed that you need a stack.

                  1. 2

                    Yes, stack frames were not yet standard at the time of Dijkstra’s letter in 1968, but they were standard in new systems and languages after about that time – tho Knuth was still concerned they might not be supported in 1974. By the time of the 1987 flamewar that worry was pretty much gone.

                3. 1

                  Ho hum, another person who dislikes but misunderstands OOP.

                  1. 12

                    You also have to remember this was written in 2006. Java itself is very different now.

                    1. 5

                      Do you have a constructive comment, or are you just going to imply you know something the author isn’t telling us without letting us know what it is?

                      1. 2

                        It reminds me of stupid old me when I was moving from Applesoft DOS to IBM (Microsoft) DOS back in 1942, and I wanted to know why anyone would use directories. I mean, why bother, right? You can just store everything on the disk without directories, so why introduce yet another complexity?

                        OOP is as much about how to organize as it is about how to implement. Steve wasn’t stupid, but he sure missed this aspect of the design, and it hurts (feeling sorry for his ignorance) to read his rant as a result.

                        1. 16

                          There’s a difference between “why bother with modularity” and “Java’s approach to modularity is not very good”.

                          1. 1

                            It’s about namespacing and organization.

                            When Steve says:

                            Classes are really the only modeling tool Java provides you. So whenever a new idea occurs to you, you have to sculpt it or wrap it or smash at it until it becomes a thing, even if it began life as an action, a process, or any other non-“thing” concept.

                            I hear:

                            Directories are really the only organizing tool the file system provides you. So whenever a new file occurs to you, you have to save it into a directory or move it into a directory or copy it into a directory until it becomes a file in a directory, even if it began life as an action, a process, or any other non-“file in a directory” concept.

                            Which, for someone who has never used directories before, makes a lot of sense. And for the rest of us, it is just a lot of words to say nothing.

                            1. 8

                              Well, you’re definitely misreading it then.

                              The criticism is that a class is a thing that bundles together a bunch of more-or-less unrelated things like inheritance, polymorphism, and encapsulation and prevents you from accessing these concepts a la carte. If you just want namespacing, too bad, you’re getting a bunch of other stuff whether you like it or not.

                              1. 1

                                Sure, I agree with that, to some extent. After all, that’s the design – when you’ve only got one building block (with its purpose to do a bit of everything as needed), you can’t be too surprised to see it used for all those myriad reasons.

                                Having worked on large systems in a number of different languages, I grew to appreciate the natural organizational capabilities of languages (and conversely, to despise the lack of organization capability in some languages). Java didn’t even add a module system until quite recently, and that module system is still mostly useless for most developers (since its primary purpose was to modularize the JDK itself, not to be a general purpose module system for all Java developers). Yet even before the module system, there were Java systems with well over 100 million LOCs 🤢 … the fact that such a thing is even possible is pretty amazing. (And fortunately, I never personally had to work on such a codebase, but I can point you to a big one at a certain enterprise apps company that also sells a database.)

                                Anyhow, the natural ability to organize code hierarchically is quite useful. That’s not the primary purpose of an OOP design, but it’s something that an OOP design can support, as the Java example illustrates. And I for one really appreciate the value of that capability.

                                But back to your main point:

                                The criticism is that a class is a thing that bundles together a bunch of more-or-less unrelated things like inheritance, polymorphism, and encapsulation and prevents you from accessing these concepts a la carte.

                                Classes may have provided or leveraged inheritance, polymorphism, and encapsulation, but that wasn’t their purpose. I’m curious if you’re confusing mechanism with purpose here. Their purpose was to collect together the logic and data structures that went together, allowing operations and the data (and the data structures) the operations affect to be organizationally close. In a class-based OOPL, the class is the mechanism that provides that capability. And the benefit was that developers could pretty easily find what they were looking for, even in a large system. Autocomplete options in an IDE serve as a pretty good example of how such code organization can be leveraged.

                                And there are obviously other approaches. These are all just language design attempts to solve various aspects of the expression problem. OOP puts forward a solution that attacks it from the X axis. FP puts forward a solution that attacks it from the Y axis. Julia claims to solve everything, but in reality it has a solution that can support custom binary math operators with arbitrary custom mathematical types (and combinations thereof) really well. The expression problem is indeed an interesting (and fundamental) problem in computer science, and one that isn’t solved or even necessarily solvable, per se. Languages simply provide abstractions that let us solve large useful swathes of it, with rapidly diminishing returns (yielding complexity) as those swathes attempt to cover additional capabilities.
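
                                To make the two axes concrete, here is a tiny sketch in Rust (mine, not Steve’s, and the shapes are the usual toy example). With a closed data type plus functions over it, adding a new operation is cheap and adding a new case is invasive; one class or interface per case flips that trade-off, and that tension is the expression problem.

                                // "FP axis": one closed enum, operations as functions over it.
                                enum Shape {
                                    Circle { r: f64 },
                                    Square { side: f64 },
                                }

                                // Adding this operation required no edits to existing code ...
                                fn area(s: &Shape) -> f64 {
                                    match s {
                                        Shape::Circle { r } => std::f64::consts::PI * r * r,
                                        Shape::Square { side } => side * side,
                                    }
                                }

                                // ... but adding a Triangle case would force edits to `area` and to
                                // every other match over Shape. A class per shape ("OOP axis") makes
                                // new cases cheap and new operations invasive instead.

                                fn main() {
                                    println!("{}", area(&Shape::Circle { r: 1.0 }));
                                    println!("{}", area(&Shape::Square { side: 2.0 }));
                                }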

                                I get the sense that you’re approaching this conversation as an argument to win. I’m sorry if I pushed you in that direction; to me it’s not an argument, but rather a very interesting puzzle that smarter people than I invested lifetimes in trying to understand and improve our solutions for. In that context, I didn’t find Steve’s blog to be on point 20 years ago when he wrote it, although I laughed pretty hard when I read it. Rod who wrote the Spring framework is a good friend, but we’d tease him about the complexity and the long names. And Spring was hugely successful, which probably is what teed Steve off in the first place. Maybe Steve ended up having to fix something in that 150 million LOC app that I was referring to … and if so, I could understand the frustration 🤣

                                1. 1

                                  I get the sense that you’re approaching this conversation as an argument to win

                                  Not that I’m trying to change your mind exactly; I just saw a post that seemed to imply “this person doesn’t like this aspect of Java; it must be because they don’t understand it because if they did understand it, then they would like it” (whether that was the intent or not, that’s how it came across) and I couldn’t let such a bad take go unchallenged.

                                  Perhaps that’s a personal failing of mine and I would have more serenity in my life if I learned to leave these things alone. Something to consider for sure.

                                  1. 2

                                    It’s just as likely that I need to be more mindful of how I write things. Thanks for the feedback.

                                    We live in a stressful world at the moment, and I need to be sure to not add to the problems out there.

                              2. 6

                                I think the second quote block is quite meaningful! If directories were all I’d ever thought about, it would absolutely get me asking questions like “what would it look like to change that”, “what are the benefits and downsides to that”, and “have other mechanisms been tried in the past”. I’d get to learn about, say, WinFS, which might help me make a connection with the other mechanisms to store data that are widely used – such as databases. I could then dive into either the technical aspects of databases and the tradeoffs with respect to file systems, the history of why WinFS didn’t work out, or even both.

                                Honestly this is such a great example of why the blog post is valuable. Thank you!

                                1. 1

                                  I understand what you’re saying, and it’s a reasonable point. We often face these types of decisions when building software: Do I keep this data structure simple and uniform, and then use it to reference more complex abstractions, or do I make the core data structure itself extensible so that the complex abstractions can be included in that core data structure? There are successful examples of both routes, and examples where incorporating the complexity in the core structure seems like a mistake. UNIX is famous for always going with the overly simplistic uniform core abstraction, and Windows is infamous for going to the other extreme. I personally vacillate a bit between the two approaches, although I’d rather portray myself as a wise architect that always makes the right decision 🤣.

                                  A benefit of a hierarchical model is that you can represent the entire hierarchical system with one simple core data structure. As but one example, in some generic language:

                                  struct Node {
                                      Node parent;  // enclosing node
                                      Node child;   // first child
                                      Node next;    // next sibling
                                  }
                                  

                                  All of the operations like walking, visiting, manipulating, etc. can be implemented in a uniform manner. I’m actually using this concept at the moment to work with XML 🤮 – something a customer needs, not something that I want to work on! – and it makes so many things simple and natural.
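
                                  As a concrete (if toy) version of that uniformity, here is roughly what the walk looks like in Rust. It is a sketch of the same first-child/next-sibling idea, reading child as “first child” and next as “next sibling”, with the parent link left out since a downward traversal doesn’t need it:

                                  struct Node {
                                      label: &'static str,
                                      child: Option<Box<Node>>, // first child
                                      next: Option<Box<Node>>,  // next sibling
                                  }

                                  // One uniform routine covers walking and visiting; in-place manipulation
                                  // would take &mut Node but follow exactly the same shape.
                                  fn visit(node: &Node, depth: usize) {
                                      println!("{}{}", "  ".repeat(depth), node.label);
                                      if let Some(c) = &node.child { visit(c, depth + 1); }
                                      if let Some(n) = &node.next { visit(n, depth); }
                                  }

                                  fn main() {
                                      let b = Node { label: "b", child: None, next: None };
                                      let a = Node { label: "a", child: None, next: Some(Box::new(b)) };
                                      let root = Node { label: "root", child: Some(Box::new(a)), next: None };
                                      visit(&root, 0);
                                  }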

                                  As for WinFS, the main reasons that it didn’t work out were (like most failed things at Microsoft) internal political issues, not technical issues per se. There are a lot of brilliant engineers at Microsoft, but unfortunately most of them are trapped in roles where their efforts are regularly stymied by corporate politics. I lived in Redmond by 156th and 40th (they ripped down my condo to build a Microsoft building) and most of my friends in Belred were softies. They’d ask me why I wouldn’t come work with them, and I’d point out that every conversation about work that we ever had was them complaining about how much they hated their jobs because of the insane political b.s.

                                  1. 3

                                    Totally agreed, this is definitely a matter of judgment and taste. I think part of building that sense is studying the alternatives and their respective histories, and a lot of my own judgment has been informed by a lifelong curiosity about paths not taken.

                                    For example, I think it’s fair to say that structs (product types) alone do a pretty bad job modeling states. You really want enums carrying data (sum types) as well. From a mathematical perspective this makes sense – imagine if elementary arithmetic was only done with multiplication, never addition. But I only truly appreciated this later in life once I learned about their history and gained experience.
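
                                    A tiny sketch of that point in Rust (the names are invented), since it is easier to see than to describe: with only a product type, impossible combinations of fields are representable, whereas a sum type gives each state exactly the data that state carries.

                                    // Product type only: nothing rules out connecting == true AND error.is_some().
                                    #[allow(dead_code)]
                                    struct ConnectionFlags {
                                        connecting: bool,
                                        addr: Option<String>,
                                        error: Option<String>,
                                    }

                                    // Sum type: one variant per state, each carrying only its own data.
                                    enum Connection {
                                        Connecting,
                                        Connected { addr: String },
                                        Failed { error: String },
                                    }

                                    fn describe(c: &Connection) -> String {
                                        match c {
                                            Connection::Connecting => "still connecting".into(),
                                            Connection::Connected { addr } => format!("connected to {addr}"),
                                            Connection::Failed { error } => format!("failed: {error}"),
                                        }
                                    }

                                    fn main() {
                                        for c in [
                                            Connection::Connecting,
                                            Connection::Connected { addr: "10.0.0.1:80".into() },
                                            Connection::Failed { error: "refused".into() },
                                        ] {
                                            println!("{}", describe(&c));
                                        }
                                    }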

                                    All of the operations like walking, visiting, manipulating, etc. can be implemented in a uniform manner. I’m actually using this concept at the moment to work with XML 🤮 – something a customer needs, not something that I want to work on! – and it makes so many things simple and natural.

                                    Absolutely, a uniform hierarchical model is very valuable. I think it’s interesting to write programs in a “compiler pass” style where you have a sequence of operations transforming representations into other ones, from more generic/uniform to more domain-specific and back.

                                    1. 2

                                      Thanks for the thoughtful reply :)

                                      I love learning and seeing all the amazing, clever, sometimes bizarre, often beautiful things that people create in code. I too have my preferences, but I’m learning to find ways to appreciate even things that don’t seem familiar or natural to me.

                        2. 19

                          Exercises like Fizzbuzz ‘used to be popular’ for a reason. They’re an insult to the intelligence of actually qualified candidates, and there are far better ways to weed out the manifestly unqualified candidates who can’t even make a start on it.

                          If I were interviewing from the position of already having a good job and merely scouting for potentially greener pastures, I would feel at liberty to take the piss out of such an exercise by re-interpreting it as ‘create a term-rewriting system’ or ‘create a Scheme interpreter in Prolog’. (Well, more likely I’d just end the interview; I don’t think I have quite such a skill for dreaming up such computational perversions on-the-spot.) Starting from a place of (long-term) unemployment is different; it’s probably better to just go along with whatever exercise you get and solve it in a normal way. You can maybe see it as a sign of a company culture you won’t want to stay in too long, but getting the job would nonetheless get your foot on the ladder, and let you approach future interviews from a position of confidence and not desperation.

                          On the other hand, changing the rules post facto because the candidate is finding follow-up questions too easy is also just a shitty move. If your interview question isn’t proving sufficiently challenging because of a candidate’s choice of tool, that’s on you for allowing a free choice of tool to begin with. Why should it be considered a negative for the candidate to have chosen a tool that made the task easier than you expected?

                          It’s also not hard to see racism in the way the company treated him here. Unfortunately, stereotyping southern Europeans as lazy, stupid, incompetent etc. is common here in northern Europe.

                          1. 12

                            Well, more likely I’d just end the interview

                            Perhaps offer them the opportunity to not require the Fizzbuzz? My last company, for instance, systematically proposes a “coding game”: trivia and small programming problems to solve at home (~1h), before it is examined by the technical interviewer. I recall one experienced dev having refused to do it; no problem, I just picked a few relevant questions and asked him directly. He demonstrated in 5 minutes that he knew his stuff better than I did. We hired him.

                          2. 10

                            Oracle obtained the JavaScript brand after acquiring Sun Microsystems in 1997.

                            Uh, what? I’m sensing LLM hallucination.

                                1. 1

                                  You could argue that it was initiated in ‘09. Or completed in ‘10. ‘97 is so far from a typo relative to either of those that it could practically only be a hallucination. But I think I’ll just flag it “spam”.

                                1. 2

                                  Eh. Yours would be more accurate. But “spam” works for me. Their claim:

                                  Oracle obtained the JavaScript brand after acquiring Sun Microsystems in 1997.

                                  Is so completely outlandish that “spam” makes sense, IMO. There’s no universe I can imagine in which that’s a good-faith statement, anyway. And “spam” seems to cover that particular sort of bad faith pretty well.

                              1. 16

                                A couple of remarks on this.

                                First, I guess we shouldn’t hold our breath waiting for HtDP 3e.

                                Second, it might be my bias speaking – as a Schemer, I’m more aware of Northeastern’s work because they use and contribute back to my language of choice – but Northeastern seems to me to be one of the top centres for programming language education and research in the world, and this seems like a total betrayal of that. Its existing curriculum, as Felleisen explains, was already a very well-thought-out balance of preparation for theory and preparation for practical work, widely applauded. That curriculum has produced truly excellent students in the past: just to pick at random something from their department which I’ve seen and personally found useful recently, this paper comes to mind, whose lead author was still an undergrad at the time of its publication – it’s notable as a significant and useful contribution in the otherwise still fairly under-researched area of applications of mutation testing to functional programming languages. If I’m ever recruiting programmers again, it’s likely I’ll consider an undergrad CS degree from Northeastern to be much less of a positive, compared to other universities, after these changes – though let’s wait and see.

                                Third, the events of the 2000s do also come to mind: not only MIT’s switch from SICP to Python which @minimax noted, but handwringing about JavaSchools (major universities switching from ‘academic’ programming languages to Java for their intro courses). I think MIT dropping 6.001 was mostly lamented by Lisp fans for the sake of dropping a Lisp – what replaced it certainly doesn’t sound from the outside any less rigorous. Perhaps now, the PythonSchool is the new JavaSchool. In any case, it would be instructive to gather some kind of data from the schools which switched from SICP-based (or ML/Haskell/whatever-based) courses to Java-based courses on how the changes affected their graduates’ career prospects. While having ‘Java’ on your CV might have got you past an HR drone faster in 2005, and ‘Python’ might do that today, the actual technical interviewers at the really top, competitive tech companies of the time (like, say, early-2000s Google) can presumably (hopefully!) do a lot better at telling who’s actually a good programmer. That might give useful data on how this kind of change will affect today’s graduates’ career prospects.

                                Lastly, Northeastern’s researchers have also contributed a huge amount to the Scheme/Racket community, and although that obviously won’t dry up overnight, it’s hard to see this as anything but the Racket/PLT-oriented part of the CS department being sidelined internally; hopefully that doesn’t translate into things like funding and staffing reductions which would rob our entire community of great ideas and great people.

                                1. 6

                                  Sadly, it is happening everywhere. In the 2000s, Europe had lots of great CS schools teaching rigorous PLT and formal methods. For example, CTM came from UCLouvain & KTH, two places heavily invested in PLT education and research. Many others existed. By the mid 2000s, this niche was already fading away.

                                  In 2008, I attended a one-to-one viva voce exam for an AI course, and the professor was surprised to see all my code was written in Scheme. He was both excited and sad to go through my submission. He said to the external examiner: “We no longer teach this for political reasons”.

                                  Personally, I think teaching the language du jour instead of timeless ideas is a big mistake. Someone who deeply understands FP & OO can quickly adapt to Python, but not the other way round.

                                2. 9

                                  Sure, the syntax of the language is fully parenthesized but that doesn’t mean it’s Racket. It’s just a prefix form of algebra.

                                  Christ, seems like a great change. This sort of “we don’t teach Racket” is just dodging the question - students don’t want your “prefix form of algebra”, they want something they’ll get paid to use, like Python. And for so many good reasons. It is radically easier to find tutorials, blog posts, libraries, etc., when using Python. It is radically easier to write programs that feel useful in Python, which makes you want to program.

                                  I’m reminded of how absolutely out of touch my programming professors were and they talked very much like this.

                                  1. 35

                                    University isn’t trade school. For the purposes of academia (research, learning, etc.), it’s better to learn things that will teach you the fundamentals underpinning the field better. (I also think this is true for engineers as well.)

                                    If you just wanted to learn something purely for a job, you could go to a community college instead.

                                    1. 9

                                      Northeastern is built around the co-op program. Students there start doing 6-month, full time co-ops in their second year, completing three co-ops by the end of their five years at NEU. While it’s not a trade school, the university’s relationship to industry is built on this program and it is the cornerstone of the experience for students; the co-op program is why many students and alumni choose Northeastern. It’s a big part of why I chose Northeastern. Students often want what they are learning to be relevant to their co-op jobs. I would be curious what the rest of the students think and what the outcomes of the alumni are some years after the change.

                                      1. 7

                                        None of this tracks with my experience.

                                        Like what you’re saying on paper is sensible. Keep the Computer Science field for the people who like theory and let the people who just want to write code take trade classes. But this is not representative of reality.

                                        I went to a four-year college with one of the highest rates of doctorate attainment among schools in the US and a very theoretical computer science program (in my opinion). A significant majority of my peers in computer science are working as software engineers of some form now. My friends and I all agree that the classes we took contributed very little to our ability to do software engineering work.

                                        The problem is that there’s a mismatch between the hiring desiderata for tech companies and the True Computer Science curriculum. I wouldn’t tell a high-achieving student who wants to work in tech to go to a community college. I would tell them to go to a school with good brand name recognition and connections and make the most out of whatever caliber of classes they offer. Learn enough of the theory so you can tell an interviewer how to do QuickSelect in O(n) and then proceed to shelve that knowledge until the next time you feel the need to show off your geek level. And if you aren’t taught to program well, then get your hands dirty with some personal projects/hackathons/what-have-yous.

                                        In my view, this all comes down to the problem of Computer Science and Software Engineering degrees not being separately offered by most colleges.

                                        1. 4

                                          I responded to this elsewhere but just to parrot that - no, I’m not describing a trade school. There is plenty of room for research and learning with what I’ve said. Learning theory and science is not incompatible with making it fun, interesting, and practical. In fact, I am advocating the opposite - that if you want people to learn theory it is best to make it fun, interesting, and practical.

                                          1. 15

                                            When I worked through HtDP, I found it exactly that: fun, interesting, and practical. My school’s official intro CS courses were taught in C++ at the time, and they were just awful.

                                            But I ended up as a maths major, and I’m still no good at C++ (no motivation!) … so YMMV.

                                            1. 3

                                              I don’t think it’s wrong to teach that way, I think it’s wrong to be so dismissive of a student’s desire to learn Python. The whole straw man argument concocted just comes off really badly to me.

                                              1. 11

                                                Suppose you want to be, I don’t know, let’s say a structural engineer. You already enjoy building physical stuff, and you think it would make a good career. Regardless of how smart or motivated you are, it just would not behoove you to complain about having to learn calculus and statics and all that theory. You wouldn’t complain about having to build little toy models with materials and techniques that don’t exactly correspond to what working professionals do.

                                                You should instead trust that your professors present all the concepts that you will eventually need, in an order that helps you learn it, that it will all “click” eventually. In a mature field like structural engineering, this would be a well-supported belief. In CS, you can maybe be excused for being a little doubtful. So in that sense, I’m sympathetic… but I’ve known way too many intellectually lazy undergrads to just give you carte blanche here.

                                                If you, as an independent learner, want to go learn programming on your own, more power to you and best of luck. There are plenty of resources out there. If, on the other hand, you checked yourself into an institution of higher learning, you should be prepared to adopt a certain attitude of humility. You should also have known what you’re getting into. Heaven knows there were already plenty of Python-first CS curricula out there. This is a loss of academic diversity, if nothing else.

                                                1. 7

                                                  Things have naturally changed since then, but nowhere in my SE/CS education was there any focus on some desire for a specific language, and the only way to really ‘get’ something language-quirk specific was to enroll in summer courses (padding to have something to do) which covered language tooling and quirks.

                                                  From the top of my head, indirect language exposure went something like:

                                                  • OOP - Java, C++.
                                                  • Realtime systems - Ada, C, MIPS assembly.
                                                  • Operating Systems - C, X86 assembly.
                                                  • Compilers - whatever you liked; evaluation was based on explaining your algorithmic breakdown and choices, and some did it in Bash. The input language was a custom dialect of Simula. My submission was in brainfuck and received a heartwarming WTF?! (he was unaware that it was generated by another compiler that was 666 lines of Ruby).
                                                  • AI - parts in Prolog, parts in SPARC assembly, parts in LISP.
                                                  • ‘engineering’ projects against external clients? PHP, Scheme, C#.
                                                  • Software Security? X86 assembly, shell and Lua.

                                                  Outside of that, some smaller groups of students that enjoyed the challenge had language poetry readings in various esoteric languages, Shakespeare being a popular choice. Ook much less so. Pretty much the same cliques added ‘meta’ challenges to the courses taken. Database systems? Competition on writing a backend for dealing with the SQL in the course.

                                                  1. 6

                                                    I feel like this presents a binary situation that isn’t there. You can learn CS and Python at the same time. You can learn CS through Python. I maintain that you’ll learn CS and theory better if you do so in a way that motivates students by making it more enjoyable and feel more practical. I feel like this shouldn’t be contentious, honestly.

                                                    This is a loss of academic diversity, if nothing else.

                                                    I think that’s fair, I just don’t think it’s fair to sneer at students for wanting to enjoy learning or gain practical knowledge as part of their education.

                                                    1. 2

                                                      …structural engineer.

                                                      To be fair, a physicist could argue that structural engineering is just dumbed-down applied physics. To me, the ultimate problem with these discussions is that CS was never broken into multiple fields except at a few institutions (UIUC, I believe, has a separate Software Engineering department, for example). We wouldn’t have these weird discussions if SE, CS, and maybe even “Computing Technology” or something were completely separate degrees with a certain amount of overlap, as they probably ought to be, as basically every other field is.

                                                      1. 4

                                                        a physicist could argue that structural engineering is just dumbed-down applied physics.

                                                        Not convincingly.

                                            2. 11

                                              They are trying to teach something very specific, which is how to construct certain patterns of thought. If you’re doing it 1:1 their approach works fine in Python. When you’re teaching a class, their approach of targeted teaching languages is actually really brilliant. The students interact with a compiler that is restricted to the structures they are dealing with, so it can provide excellent error messages and feedback. As the course proceeds, the language shifts along with it.

                                              Any given language is only a vehicle for teaching a certain body of knowledge. Similarly in physics we teach Gibbs-Heaviside vector calculus in all its wartiness. Would differential forms or Clifford algebra be prettier? Sure. But the vector calculus is only a means to get to something else, notably electromagnetism.

                                              Your professors might have been out of touch, or they might be looking back from decades you don’t have and they might be trying to do something you aren’t aware of.

                                              1. 10

                                                Syntax aside, I think there’s something to be said for teaching in a pedagogy-oriented DSL that no LLM will give any help with. I’ve been a TA for classes like this and I shudder to imagine grading beginner $VERY_POPULAR_LANGUAGE nowadays.

                                                1. 8

                                                  It’s strictly true: the course teaches BSL, not Racket. Syntax is a distraction. Students don’t know enough to know whether what they think they want is what they actually want, let alone what they need; that’s why schools exist.

                                                  1. 7

                                                    Students don’t know enough to know whether what they think they want is what they actually want

                                                    That’s nonsense. Students want to learn a language that:

                                                    1. Maps to practical applications like getting a job
                                                    2. Is motivating, fun, etc.

                                                    It’s why I think JS is a good language for beginners. You can have a pretty webpage with cute features in no time. Students aren’t wrong for wanting these things.

                                                    that’s why schools exist.

                                                    I don’t agree.

                                                    1. 12

                                                      You’re describing the purpose of a vocational school, not a university.

                                                      1. 4

                                                        No I’m not. There is plenty of room for theory in what I’m saying, but that doesn’t mean it shouldn’t strive to also be motivating.

                                                        1. 10

                                                          I’m not sure how much more motivating you want than a fourth-year undergrad passionately arguing for this model in the student newspaper.

                                                      2. 6

                                                        This reminds me of a moment in the MIT intro to electronics course (which I watched on YouTube) where the professor is working through how the transistor implementation of a logic gate works, how the valid logic high/low voltages are determined, etc. He says sometimes students complain that in real life they would just plug chips together where this is already figured out. He says (paraphrasing) “perhaps so, but you are at MIT so that if you want, you can be the one designing the chips”.

                                                  2. 4

                                                        It’s not exactly clear, but this article seems to miss that ‘fast’ is supposed to refer to how long something takes to develop, not how performance-optimized the final product is (that is a factor that belongs under ‘good’ instead).

                                                    1. 3

                                                      I’ve seen both interpretations, and chose one.

                                                      1. 5

                                                        In the context of good/fast/cheap, fast definitely describes the dimension of a shorter time-to-delivery, and not a higher measure (however defined) of the performance of the ultimately-delivered artifact.

                                                    2. 0

                                                      The whole thing’s pointless and indicative of CS navel-gazing, since there are Real-World problems that can’t be handled by a Turing Machine.

                                                      For example, if you have two independent tasks and want to switch contexts, then you need to record the current tape position. Since the tape is of infinite length, that will require an infinitely-long label (i.e. an infinite number of digits etc.). But since the control unit is finite, it can’t be stored there.

                                                      The concept of a Turing Machine is fine for discussing algorithms. But Real World languages need to do far more.

                                                      1. 7

                                                        This submission is not about tape-driven Turing machines. They’re mentioned in one answer (not the top one, and they’re not the main point even of the one that mentions them) and in one or two comments; the main content is about actual, real-world programming languages, and far from being ‘CS navel-gazing’, it does mention a real-world application.

                                                        1. 1

                                                          Yeah. The obvious relevance is to compilers, though there’s the caveat that compiler targets are usually highly expressive so it’s rare to need a global transformation to compile a target to a lower level of expressiveness. But it does happen for restricted targets such as Wasm.

                                                        2. 2

                                                          For example, if you have two independent tasks and want to switch contexts, then you need to record the current tape position. Since the tape is of infinite length, that will require an infinitely-long label (i.e. an infinite number of digits etc.). But since the control unit is finite, it can’t be stored there.

                                                            Maybe I’m misunderstanding, but can’t the bookmark just be a unique symbol on the tape? With all the usual techniques you can handle this without increasing the alphabet size; though it isn’t very efficient, it doesn’t seem particularly impossible.

                                                        3. 3

                                                          Reposting because the responses to this submission strongly suggest Lobsters could do with a refresher on it (or on the original paper) ;-)

                                                          1. 4

                                                              Thanks, I needed the refresher. I’d listened to the talk and skimmed the paper not too long ago but couldn’t recall the main point without the summary in the top answer: A feature adds power if it would require a global transformation to emulate.

                                                          2. 4

                                                            I believe John Romero claimed that id Software also independently found the bug while developing Doom, so Nicely was not the only one affected for real. I’m not going to watch it again to find the exact timestamp right now but I think it was in this talk.

                                                            1. 12

                                                              It’s even worse than that. Leap seconds were only introduced in 1972. Before that, UTC was adjusted from TAI to keep it in synch with mean solar time by changing the length of a second in the UTC scale. So a TAI second was an SI second, based on the transitions of a caesium atom, but a UTC second was minutely, fractionally longer – and the amount by which it was longer changed over time to reflect changes in the Earth’s rotation.

                                                              So the epoch is actually 1 January 1972, which is Unix time 63,072,000. Take away those 63,072,000 seconds and you don’t get back to an integral number of seconds in UTC.

                                                              The proposal to abolish leap seconds is bone-headedly irresponsible, which I intend to write about at more length one day. The summary is that the abolition punishes good engineering practice (formal timekeeping in TAI) because of the widespreadness of bad engineering (formal timekeeping in UTC or even local time zones); the plan to replace them is going to be a disaster when it’s finally necessary, but the metrologists responsible likely feel – like older generations and climate change – that they won’t be around then to deal with the consequences, so it’s fine to shift this problem onto the engineers of the future to deal with the consequences of a whole leap minute, as if increasing the discontinuity would make it easier to deal with. There have been vague and disingenuous statements about why anyone needs the ±1 second from mean solar time guarantee of UTC anyway, but these ignore that the whole point of leap seconds is that they are the lesser evil.

                                                              1. 8

                                                                There were a couple of reasons for getting rid of rubber second UTC:

                                                                • UTC was disseminated using radio broadcasts, and adjusting the length of the second required retuning the transmitters, so that the number of carrier waves per second remained the same. This was a massive pain in the arse.

                                                                • Following the 1967 redefinition of the second, the atomic second was being enshrined in law in some countries, and in Germany in particular there were concerns that it would be illegal for the PTB to disseminate time with rubber seconds.

                                                                UTC was designed in a rush behind closed doors without widespread consultation. It isn’t well engineered: there are other ways they could have dealt with the difference between atomic time and earth rotation that are far less disruptive and which have better compatibility with ancient standards. A lot of programmers treat UTC as if it were an immutable law handed down from the gods, but really it was an expedient hack that should have been overhauled decades ago. Civil time was re-engineered several times in the mid-20th century, so it’s kind of weird that it just stopped evolving for 50 years.

                                                                Leap minutes aren’t going to happen. They were a vague suggestion that was included in the planning material to mollify lobbyists from countries that think precise earth angle matters for civil timekeeping. The actual plan is simply to use atomic time, and to keep studying earth rotation so we have a better idea of how it changes, and what might be a better long-term basis for civil time. UTC with leap seconds will fail thousands of years before atomic time without leap seconds gets uncomfortably out of sync with earth rotation, so at the moment abolishing leap seconds seems like the best option.

                                                                The short term variations in earth rotation are too poorly understood to be predicted with any accuracy. As has happened to calendars many times in history, it’s better to reorganize our timekeeping to decouple it from unpredictable leaps even if that’s less astronomically precise.

                                                                Regarding epochs, the 1972 introduction of leap seconds is about as relevant as the 1977 introduction of the gravitational correction between EAL and TAI. Neither of those epochs affect unix time, because unix time is defined by a function from broken down timestamp labels to a number, and that function doesn’t change in 1972 or 1977. The unix epoch is defined by the result of that function being zero.

                                                                1. 5

                                                                  UTC with leap seconds will fail thousands of years before atomic time without leap seconds gets uncomfortably out of sync with earth rotation

                                                                  Why? Not arguing with you, just wanting to get more information. While it’s not a nice system, it’s not obvious why you can’t just keep doing what we’re doing.

                                                                  1. 10

                                                                    Oops, I exaggerated, I should have said centuries, d’oh!

                                                                    In the long term earth rotation is slowing down due to tidal coupling with the moon. As the earth slows down, leap seconds need to happen more frequently. So far, leap seconds have only been inserted at the end of June or December. The specification ITU TF.460 allows leap seconds to be inserted as a second preference at the end of March or September; failing that, at the end of any month. So UTC will be unable to keep within 0.9s of UT1 at some point between 4 leap seconds per year and 12 leap seconds per year, when they would have to do careful dithering or modulation to match the average rate.

                                                                    I define uncomfortably out of sync as being worse than the effects of timezones or DST on how apparent solar noon relates to 12:00 o’clock. When UTC fails, when rotation rates differ by seconds per year, it will take hundreds of years to accumulate an hour of discrepancy between noon and 12:00.

                                                                    Steve Allen’s tables of the long-term trends suggest that leap seconds start to become impractical in about 300 years, and we might need a leap hour in about 500 years to turn a +30 minute discrepancy into a -30 minute discrepancy. His charts show that the rotation speed varies a lot from decade to decade either side of the long term trend, so these estimates might be wrong by centuries.

                                                              2. 12

                                                                This article is really weird. The author starts with a false dichotomy, even mentions WordPress at one point which ought to blast through that false dichotomy, but never explains what’s wrong with it. A potential alternative to WP, Ghost, is also dismissed for … being written in the wrong programming language? Notwithstanding hosted solutions for both of these (and more) are available which don’t require you to do the care and feeding of a VPS etc. yourself. If the author is hell bent on sticking with old school shared hosting, I bet Ghost will actually run fine on, say, NearlyFreeSpeech.net, which has Node.js support.

                                                                So yes, if you have a bunch of requirements which you don’t really explain, and reject perfectly viable solutions out of hand because of vibes (?), making a website is still hard. But I know or know of plenty of non-technical people who get on perfectly well with one of these, or with some proprietary solution.

                                                                1. 9

                                                                  Ghost, is also dismissed for … being written in the wrong programming language?

                                                                  The post says “i literally cannot make this work because all the commands from the ghost documentation just make the interpreter dump a trace and exit with an inscrutable error about not being able to create a thread or some other gibberish like that”

                                                                  So no, it isn’t about being the “…wrong programming language?” (though I do expect that is a good piece of the problem, little php things feel approachable in a way these big complicated frameworks with lots of dependencies don’t), but more because he tried it and the instructions didn’t work on his server. I also tend to give up on things that don’t work.

                                                                  1. 4

                                                                      but he also says he doesn’t want to host his own vps, so why is he trying to run it to begin with? honestly my hunch is that he didn’t; he’s just angry and making assumptions

                                                                    1. 2

                                                                      he’s trying to run it because he just wants to run a blog with a wysiwyg editor and there aren’t many options other than wordpress and ghost

                                                                      1. 2

                                                                        Apparently Ghost has “support for popular static site generators”, whatever that may mean. Perhaps you can use Ghost to write pages and have the static site generator emit HTML which you can then upload?

                                                                    2. 2

                                                                      Wow, I missed the memo that NearlyFreeSpeech.net now supports non-CGI Node.js (i.e. persistent processes). Thanks for mentioning that!

                                                                      1. 1

                                                                        i have a website, and CGI is perfect for that. it allows just enough complexity to make it easy to update your site without any external tools, and to have things like automatic indexes and search. yes, if you overreach you end up with wordpress - so don’t do that, and you’ll be fine.

                                                                        Seems like he considers Wordpress as too complex. That is vague though. Likewise a comment there says:

                                                                        wordpress is fine honestly, but it’s super bloated and. you know. gestures broadly at whatever tf the wordpress team is doing this week

                                                                        1. 3

A good chunk of this post (and its subsequent follow-up) is about customizing little things, and I sympathize with that. If there’s a wordpress plugin that does what you want (and nothing you don’t!), it isn’t so hard to click install, but it is a lot of effort to find that… learning wordpress’ ecosystem is more daunting than just making a few hacks to an old .php file. And writing your own wordpress theme or plugin is a real pain. (Well, it was when I last tried, but if it has changed much since then, that is its own problem - wordpress must update frequently for security, but then you risk breaking your hacks, and ugh, who wants to deal with that?)

So when you want something that does the job and that you can hack on with minutes or hours of effort, not days or weeks sunk into learning the ecosystem or studying the docs, with the risk of it randomly breaking later… I wouldn’t pick wordpress either.

                                                                          1. 2

Wordpress is bloated and complex, in spite of the compromises it makes to cater to the lowest common denominator. Having hosted a very large Wordpress site and then seen it run under Drupal, it was clear to me that Wordpress was trying to do too much, and this was after countless hours spent optimizing everything.

Separately from that, Wordpress has an absolutely horrible security model. Couple this with the fact that there are crap plugins for everything, that many of those plugins never get updated, and that people are afraid to update Wordpress for fear the site will break when the plugins stop working, and, well, you’ll quickly understand why it’s the number one phishing-site hosting platform on the planet: insecure instances get compromised and used to host phishing sites.

                                                                        2. 4

                                                                          Oh hey, a new Scheme implementation! Welcome to the club :-)

                                                                          Let me know when you’re nearly compatible with R6RS and I’ll add you to the list!

                                                                          1. 4

                                                                            Woo!! That’s some motivation! I’ll get right on it!

I’m slowly working through R7RS first just because it’s easier to have the spec printed out on my desk in front of me, but a superset of R6RS is the end goal (plus a built-in package manager, but that’s a ways off)

                                                                          2. 28

                                                                            i like antirez’s take on licensing here. reminder that the OSI was started by libertarians, and has always held business interests first & software freedom second.

                                                                            the OSI definition of open source is basically “as long as companies can sell your software labor for free & not contribute back in any meaningful way, it’s open source.”

                                                                            it’s time to explore alternatives. anyone holding the “open code = good” opinion should re-evaluate in the face of companies viciously exploiting open projects for money — it’s a digital land grab.

                                                                            1. 36

                                                                              OSI was started by libertarians, and has always held business interests first & software freedom second.

                                                                              Sure glad we’re moving away from that to this new wave of fair, moral, equitable licenses which instead grant users the freedom to make the author’s SaaS business profitable.

                                                                              1. 3

                                                                                I’ve heard this critique before, and I believe it’s an important one, but I’m interested to know: what do you think would be more fair, moral, and equitable? Especially in the current landscape, where it’s expected that everything valuable which you publish will be laundered by others.

                                                                                1. 17

                                                                                  I like the idea behind the Anticapitalist Software Licence – not least because a licence like this could define its terms a lot better than most so-called ‘Ethical Source’ licences – but the current version is somewhat problematic in that regard. (E.g. it should say that individuals can use the software without restriction, but as written it potentially doesn’t cover hobbyist use, only use by sole traders.)

                                                                                  If you don’t want to be perceived as quite so radical (because the free software and open source definitions are still satisfied), the AGPL, EUPL, and other licences with a network clause seem to do a good job at scaring off the parasites in practice.

                                                                                  Another interesting idea is a patent guillotine clause which applies if the licensee sues any of the licensors for patent infringement. (Note this is the inverse of the infamous Facebook PATENTS file, where they as licensor could terminate the licence at any time with a patent lawsuit.) That also scares big companies off, and could in theory be applied as a condition to otherwise ‘permissive’ i.e. non-copyleft licences. Taking a stance against software patents is also likely to be less controversial than taking a stance against capitalism, even if you’re really aiming your sights at the latter.

                                                                                  1. 1

                                                                                    I’ve said over and over in threads on this topic that I have no problem with proprietary software existing. And I think that if someone intends to build a business around their software project, proprietary licenses almost certainly make the most sense. They certainly make more sense than trying to give away your product free of charge under an open-source license and eventually realizing that won’t work, at which point you post about The Next Incredible Chapter Of Our Open Journey, which always involves relicensing to something anticompetitive and often not Free and/or Open Source (the usual exception there is AGPL wielded to try to create a commercial-exploitation monopoly, but even that doesn’t seem to work long-term as we’ve seen some adopters eventually go full proprietary).

                                                                                    So I think if you want to have an open-source project, have an open-source project. If you want to have a business, have a business. If you want to have a business that is also an open-source project, I think you will inevitably find that you cannot have both and will have to settle for one or the other.

                                                                                2. 31

                                                                                  anyone holding the “open code = good” opinion should re-evaluate in the face of companies viciously exploiting open projects for money — it’s a digital land grab.

                                                                                  When I make a thing and give it to the world for free on purpose, it doesn’t bother me when others use the thing I made and gave away on purpose.

                                                                                  It does bother me when people imply I’m wrong to enjoy making things and then giving them to the world for free.

                                                                                  1. 4

I read the comment the other way around: as insisting that others release everything free and open source “or else”.

                                                                                  2. 6

                                                                                    I don’t see a problem with this. There are many valid reasons to use an OSI license.

                                                                                    I do see a problem with the idea that people build a community around a liberally licensed project and then decide to profit off of it when the community did all the marketing, support, and maintenance.

If you believe in commercial, proprietary software, then by all means go build that. It’s possible! If your idea is any good, though, consider that there are very motivated people out there who love to challenge themselves to “rewrite it in Rust” or “rewrite it for Libre,” driven by incentives that aren’t tied to fiat currencies, and that could cost you.

                                                                                    1. 3

                                                                                      I more or less agree with everything else you said, but this looks suspect:

                                                                                      […] the community did all the marketing, support, and maintenance.

                                                                                      From the article:

                                                                                      […] I would not be what I was able to be thanks to VMware and, later, more extensively thanks to Redis Labs later: a freaking Robin Hood of open source software, where I was well compensated by a company and no, not to do the interests of the company itself, but only to make the best interests of the Redis community. […]

                                                                                      So here we have one of the product’s cofounders saying that he would not have been able to do the work he did without the involvement of corporate sponsorship. Short of uncharitably dismissing it as sweet talk from a newly interested party, I’m curious how you would reconcile these.

                                                                                      1. 5

                                                                                        So here we have one of the product’s cofounders saying that he would not have been able to do the work he did without the involvement of corporate sponsorship.

There’s corporate sponsorship, i.e., paying open source maintainers to make open source software for all, and then there are companies like Redis Labs, Elastic Search (which reversed course), and HashiCorp that built up a ton of good will, ecosystem, and community, only to change the terms of engagement in an attempt to restrict other parties from making money off of the community’s collective work.

I have no problem with corporate sponsorship and certainly no problem with maintainers getting a bag. I have problems when copyright holders extract thousands of hours of knowledge work out of a community and then say “you have fewer rights now for how you can use what you helped create, because my company sucks at making money and we have investors to feed.”

You want to make money off the backs of open source maintainers? Cool. Find a way to incorporate open source into a larger niche product and contribute bug reports/fixes/features as thanks, or spot the maintainers $5 once in a while. Or build something useful, even something proprietary, and sell it. For libraries and frameworks that aren’t core to your product’s main purpose, consider spreading good will by making them available under an appropriate OSI-approved license.

                                                                                        1. 1

There’s corporate sponsorship, i.e., paying open source maintainers to make open source software for all, and then there are companies like Redis Labs, Elastic Search (which reversed course), and HashiCorp that built up a ton of good will, ecosystem, and community, only to change the terms of engagement in an attempt to restrict other parties from making money off of the community’s collective work.

I can see how you can see it this way, but I encourage you to talk to some people on either side of that ideological divide. I personally, as someone who has built Open Source software and continues to do so, would rather work at those companies to build an Open Source product than become employed by some large megacorp and hope that they will continue to pay me forever to contribute to Open Source software that I hope stays aligned with what they want. However, that’s me, that’s not everybody. When you create something, you have a very different attachment to it and desire for what it can be than if you contribute to a community project or a shared thing.

                                                                                          Or build something useful, even something proprietary, and sell it.

I think this is where this becomes a problem. The world is not black and white. I don’t think we arrive at the right outcome when we say either something is proprietary forever or something is the most Open Source it can be. That might be okay if copyright did not last so long, but in the real world copyrights last way too long. In that world, we need to find other ways to restrict the total damage that corporate exploitation of a creation can have.

I have written about my thoughts on this many times over, but I think that single-vendor “open source ish” projects should have the right of commercial exploitation, with an escape hatch if that vendor turns out to be abusive or problematic. The FSL is the best we have come up with so far, but it’s probably not perfect. Giving up on that idea entirely, though, I think just leaves us all worse off for no reason at all.

                                                                                          1. 3

                                                                                            I can see how you can see it this way, but I encourage you to talk to some people on either one of those ideological divides.

                                                                                            I used to work at HashiCorp, I’m quite aware of both sides. :)

                                                                                            I don’t think we arrive at the right outcome when we say either something is proprietary forever or something is the most Open Source it can be.

                                                                                            I agree, but following this advice would reduce the number of blog posts about this topic immensely, and would be right for the “average project.” If your goal is to make money from selling software, we know how to do it. You put a moat in front of your competitors by making your software proprietary so it can’t be trivially replicated, and then learn a bunch of other skills like marketing and sales to sell it.

If your goal is to release software to the world, gratis and/or libre, for whatever motivation, doing it in a sustainable way, monetarily, is way more difficult. There are avenues to do this, but nothing universally applicable. It may be “go work for a company using the software you created.” It may be “become a contractor supporting this software.” It may be “acquire enough donations / sponsorship to sustain it.” It may be “sell a service that uses the software, but doesn’t sell the software directly.” Obviously, other skills are super useful here too, since you still have to make the software known, used, supported, etc…

Companies like HashiCorp try that last bit and find that other companies can do the same thing. Then they find themselves competing against others in offering a new service (which isn’t their competency) rather than improving the original software, leading to the natural conclusion of “cut off the competitors’ legs and we’ll be fine.”

                                                                                            In most cases this isn’t what’s best for anyone.

(I will say that in regard to HashiCorp Vault, I didn’t like the switch to BUSL, but it’s a much more defensible position than Terraform. The main community contribution to Vault has been evangelism, not code. The shitty part of the switch is that the evangelists became unpaid corporate shills overnight, with the change. Anyone who built their identity as a Vault evangelist on the pretense that it was an open source project was rocked. People volunteer to be unpaid corporate shills for proprietary software all the time, of course, but they opt into that relationship.)

                                                                                        2. 4

It’s worth being very clear here that the company which is now “Redis” basically bought the already-existing Redis open-source project, and that the “freeloaders” so often complained about were contributing to Redis prior to its relicensing. Amazon was paying the salary of a core developer!

                                                                                      2. 3

                                                                                        The OSI definition of open source is copied almost directly from the community norms formed at the Debian project and codified as DFSG.

                                                                                        1. 1

I look at this differently. The OSI-approved licenses solve the problem of being locked into a single provider that maintains the software. As long as it can be forked, maintained, and distributed by anybody, it is open source. If a single vendor or entity can forbid use, modification, and/or distribution, then it is not open source.

It’s funny you use the phrase “land grab” here. The way Redis and others have used open source is to accelerate adoption and try to become the primary solution in a specific area. When that is achieved, they start whining about competitors actually doing what open source allows them to do. Would they have ever achieved the same market share using the licenses they switched to? I’m skeptical.

                                                                                          I might have missed it, but I don’t see a bunch of companies launching projects under these source-available licenses. Perhaps because they realize that those licenses are inherently blockers to adoption, contribution, and advocacy. A lot fewer people are excited about promoting a project where all control resides in the hands of a single company that can do a heel turn at any time. Open source licenses are a protection against that. The company can still do the heel turn, but there’s the right to fork.

                                                                                          All that said, companies are free to publish software under source-available licenses, proprietary licenses, or open source licenses. But when a company says for years “this will always be under $license” and then does a 180, they should expect that they’re going to burn a lot of trust.

(It’s worth noting, too, that other cases where open source is better are overlooked: for example, when a company launches a project and then loses interest, or when a company is bought and a project is discontinued. Citrix basically walked away from CloudStack, for example, but because it had been turned over to the ASF under the Apache License, other companies have been able to pick it up. Otherwise, that could’ve gone badly for those who adopted the software.)

                                                                                        2. 9

                                                                                          Time and again the FSF has pulled this trickery that essentially amounts to an argument that the GPL covers an API, and that the API is by extension copyrightable. The Objective-C front end for GCC was another, nearly contemporary example.

They can’t have it both ways. Either it’s a GPL violation to require the user to link to a GPL library, because the library’s API is copyrighted and covered by the GPL, in which case Oracle also gets to sue anyone who reimplements the Java standard library; or APIs are not copyrightable, and a non-GPL program can use a GPL library’s API while requiring the user to link to the actual copyrighted, GPL’d implementation – perhaps under the assumption that someone will one day create a non-GPL’d implementation of the same API.

                                                                                          See also the Linux-syscall-note, only really needed because Linus had to stop the FSF pulling this argument on any program ever compiled for Linux.

                                                                                          1. 4

I’m not a lawyer, but it seems reasonable to me to consider the Objective-C frontend scheme subterfuge. As far as I understand it from rms’ message here, the compiler was supposed to be presented to the user as a single “program”, so breaking it up into a proprietary frontend and a libre backend and distributing both separately, when the intention is clearly to link one with the other into one program, means the gcc backend is still part of that program and is simply being distributed separately.

                                                                                            The CLISP case is different. He proposed distributing a replacement for libreadline. I’m not sure how one can infer “that the one part clearly shows the intention for incorporation of the other part”. Packaging libnoreadline.a with lisp.a while separately distributing libreadline.a as an alternative to libnoreadline.a doesn’t demonstrate that lisp.a and libreadline.a are one program.

He could just as easily stop distributing libreadline.a, even separately, and simply point to its GNU site as an alternative; hell, he could decide not to even link to it or mention it at all. His program would still work, because it’s a completely separate program, with part of it being a reimplementation of libreadline.a’s API.
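
To make the “same API, separate implementation” point concrete, here is a minimal sketch in C (purely illustrative; not CLISP’s actual libnoreadline code) of what such a stub can look like. It exports the two entry points a program typically uses from GNU readline, with a trivial fgets-based implementation, so the same lisp.a can be linked against either this stub or the real libreadline without any source changes:

    /* noreadline.c: hypothetical stub exposing the GNU readline API. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Same signature as readline(): print the prompt, read one line, and
       return it as a freshly allocated string without the trailing newline,
       or NULL on end-of-file. */
    char *readline(const char *prompt)
    {
        char buf[4096];
        char *line;
        size_t len;

        if (prompt != NULL) {
            fputs(prompt, stdout);
            fflush(stdout);
        }
        if (fgets(buf, sizeof buf, stdin) == NULL)
            return NULL;
        buf[strcspn(buf, "\n")] = '\0';   /* strip the newline, if any */

        len = strlen(buf) + 1;
        line = malloc(len);
        if (line != NULL)
            memcpy(line, buf, len);
        return line;
    }

    /* Same signature as add_history(); this stub keeps no history,
       so it is deliberately a no-op. */
    void add_history(const char *line)
    {
        (void)line;
    }

Which implementation you get is decided purely at link time (roughly, -lnoreadline versus -lreadline), which is exactly the interchangeability being relied on above.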

                                                                                            1. 4

                                                                                              Agreed. I believe that the fact that CLISP was originally developed for Atari ST with no readline in sight would have been pretty important to the judge. RMS failed to recognize the difference in the situation, in my opinion. It would have been a different case if they have built it using readline to shape the user experience, then decided to stub it out and act as if it’s up to the user, wink wink. Which they didn’t.

                                                                                            2. 4

                                                                                              The Objective-C front end for GCC was another, nearly contemporary example.

                                                                                              If you read far enough down the original linked thread, RMS discusses this:

                                                                                              I say this based on discussions I had with our lawyer long ago. The issue first arose when NeXT proposed to distribute a modified GCC in two parts and let the user link them. Jobs asked me whether this was lawful. It seemed to me at the time that it was, following reasoning like what you are using; but since the result was very undesirable for free software, I said I would have to ask the lawyer.

                                                                                              What the lawyer said surprised me; he said that judges would consider such schemes to be “subterfuges” and would be very harsh toward them. He said a judge would ask whether it is “really” one program, rather than how it is labeled.

                                                                                              So I went back to Jobs and said we believed his plan was not allowed by the GPL.

                                                                                              The direct result of this is that we now have an Objective C front end. They had wanted to distribute the Objective C parser as a separate proprietary package to link with the GCC back end, but since I didn’t agree this was allowed, they made it free.

                                                                                              I’m not a lawyer, but assuming RMS is accurately relaying the FSF’s lawyer’s opinion, it seems to be at odds with yours.

                                                                                              1. 2

                                                                                                When a lawyer makes a statement to their client, or to the public on behalf of their client, on what the law in a particular situation is, they cannot actually know what a court will eventually rule. This means only that the FSF’s lawyers, having the FSF’s interests in mind, believed it would be possible for them to argue that the ObjC frontend had to be GPL because it would be subversion of the licence’s intention to accept NeXT’s argument.

                                                                                                But if the FSF wanted to stay in the land of copyright (which they were, until much more recently, very keen on doing), they have only one option: they would have to have argued that the ObjC front end is a derivative work of the main GCC distribution. (The GPLv2 is explicit that a ‘“work based on the Program” means either the Program or any derivative work under copyright law’.) This is very difficult for exactly the same reason a program compiled for Linux – even a complex assembly language program written only for Linux, invoking Linux system calls directly with their magic syscall numbers – is not a derivative work of Linux.

                                                                                                If the FSF were more willing to accept that the GPL is a contract with wider-ranging obligations on its parties than a mere licensing of copyright under certain conditions, the ‘subterfuge’ argument might have held more water. That might have been a way it could have worked in court. But in the 1990s, in US law, that wasn’t how they were trying to sell it, as far as I know.

                                                                                                (Disclaimer: I have not seen the relevant version of the GCC ObjC front end; it’s possible it could have been found to be a derivative work. Given that rms went to this roundabout argument about subterfuge, I think even he must have found that a doubtful argument.)

                                                                                                1. 4

                                                                                                  (Disclaimer: I have not seen the relevant version of the GCC ObjC front end; it’s possible it could have been found to be a derivative work. Given that rms went to this roundabout argument about subterfuge, I think even he must have found that a doubtful argument.)

                                                                                                  IANAL but my understanding of the relevant argument, as it pops up in other contexts, is that:

                                                                                                  1. The ObjC frontend is linked against the GCC backend and they run in the same process while the Linux kernel runs as a separate process. (i.e. One has no IPC boundary and the other does)
                                                                                                  2. There is no alternative implementation exposing the GCC APIs that it could be linked against instead, while Linux tries to implement POSIX. (i.e. One cannot function without the GPLed code while the other can.)

                                                                                                  Those are the two standards I’ve seen used to draw lines like “It’s OK for Ark or File-Roller to link against libarchive, but the source to UnRAR must be built as a separate binary and exec‘d” or “It’s illegal to redistribute precompiled nVidia binary drivers, but you can run GOG.com games on Linux” when talking about derivative works.

                                                                                                  RMS’s argument makes no sense there.

                                                                                                  In the context of the e-mail, it’d be similar to how a compiled ffmpeg binary may or may not be required to be GPL depending on whether you build it with the GPLed bits enabled.

If CLISP functions without readline and the APIs are intercompatible enough to be interchangeable at link time, then I don’t see how builds which don’t link against readline could be argued to be derivative works.

                                                                                                2. 1

Yes, I read that part; it’s the part I was responding to. I’m saying I agree with the lawyer’s opinion on Objective-C, but I don’t think his reasoning would apply to CLISP. RMS is taking the lawyer’s opinion on the Objective-C situation and applying it to the CLISP situation, which I think is a misapplication.

                                                                                                  I thought this was a response to my comment at first, despite obviously not receiving a notification.

                                                                                                  1. 4

                                                                                                    I’ve mistakenly done that before too, and almost done it more than once; the rendering of an immediate sibling is similar enough to the rendering of a reply in my browser that I find it a little hard to distinguish the two.

                                                                                                3. 3

These sorts of discussions make me increasingly convinced of the value of the EUPL as a kind of non-viral, project-wide copyleft license. If you distribute an EUPL-based project, even with modifications, then you must distribute that code plus modifications under that license - there’s no way to turn it proprietary again as with MIT or something similar. However, there are no requirements on code that interacts with your project, uses its API, links to it, uses its syscalls, or extends it à la the Emacs example someone linked to in the comments.

                                                                                                  This legal trickery of forcing other projects to adopt your license is clever, but it feels like the FSF’s approach is far too zealous, and the waters have been muddied too much around what effects the GPL actually has.