Threads for jlarocco

  1. 2

    In 2001, the StarLisp emulator was ported to ANSI Common Lisp and released into the public domain. It is now available on GitHub.

    1.  

      How feasible would it be to use lparallel and sb-simd to get parallelism in the emulator? Do you think there would be any benefit? Or are the paradigms too different from the Connection Machine’s?

      1.  

        Looks difficult. *lisp seems to be based on ‘pvars’ (‘parallel variables’); realising useful parallelism from such would require non-trivial non-local analysis.

    1. 2

      Jai sounds a lot like Zig. Any reason to choose one over the other?

      1. 14

        Zig actually exists, and you can confirm that to yourself. Jai does not have a publicly available compiler.

        1. 4

          That’s a pretty uncharitable comment. Jai exists but is not ready for public release, it’s as simple as that.

          1. 11

            I didn’t mean it literally doesn’t exist, just that you can’t use it, so it might as well not exist.

            1. 6

              I think at the current time anyone who can be bothered to email Jon with a small blurb about wanting to get into the beta can get in. At one point people had invitations available to give to others.

              1. 13

                Thanks for sharing; I was unaware there were copies of the compiler out there that people are using to build Jai programs. I’ll stop spreading misinformation.

                1. 11

                  I kind of have to agree with WilhelmVonWeiner on this. That’s quite a barrier to entry compared to other languages, especially for one with a bunch of cruft from C and C++.

                  Are there any public projects written in Jai demonstrating why I’d jump through hoops to use it? The list in the blog reads like copy/pasted marketing fluff, and the majority of the discussion is dedicated to (perceived) problems with C++ and D and other languages, but is awfully light on details about Jai.

                  1. 2

                    I’m pretty sure the beta targets the devs that are actually following Jonathan Blow on twitch. If you were never interested enough in watching, then the language is probably not for you at the moment.

                    1. 2

                      I think it’s probably still a little too soon for that. The closed beta is definitely a sign that Jai’s definition is mostly solid, but I have no doubt that there will be changes to the compiler (maybe even the language?) in response to the feedback they get. Given Jonathan Blow’s habit of not committing to deadlines until something is really ready, I would continue to consider Jai unavailable until a publicly available version is announced.

                2. 2

                  You are confusing languages with implementations. Just because somebody has written a compiler, and claims that the compiler takes the Jai language as input, does not mean that the Jai language exists. We must not let private projects appear to be community projects.

                  You could make this same complaint of Zig, for what it’s worth. Zig isn’t specified yet, and it’s not clear whether a given hunk of code is valid Zig. (Although recently this attempt at specifying Zig seems to be quite good.)

                  1. 1

                    I disagree strongly. If I took your definition to heart, then technically you’d be correct, because the syntax is not definitive and the data model will never get what you might call a specification. But I’m pretty sure that the Jai compiler will forever have only one implementation, the one released by Jonathan Blow. Even if he ever puts the source code under a permissive license, I doubt he will have the patience to deal with pull requests from contributors.

                    Despite all of that, the Jai programming language can be used now, and it produces binaries that run on multiple OSes, so in this specific case I would say that your statement makes a distinction without a difference. Adding the bold claim that a language that is not a community project is somehow not a real language takes the pedantry even further over the top. :)

                    1. 4

                      So the question remains why he made videos and presentations announcing Jai at all, starting nearly ten years ago. If you want public feedback on a language, the first thing you publish is a specification or “language report” (as e.g. Wirth calls it), a document that specifies the purpose, syntax and semantics of the language. Examples: https://people.inf.ethz.ch/wirth/Oberon/Oberon07.Report.pdf, https://github.com/oberon-lang/specification/blob/master/The_Programming_Language_Oberon%2B.adoc, http://software.rochus-keller.ch/busy_spec.html

                      If there is no specification, people have to make assumptions about the language based on a few examples, and the feedback is close to useless.

                3. 4

                  While I absolutely agree with the sentiment here, I do think it’s worth pointing out that the difference is not as stark as one might initially think. Access to Jai is limited because Jonathan Blow wants the language to be “right the first time”. Thus he discusses its features freely, but people can’t actually use it. You may believe this approach is smart or stupid, but it’s worth noting that while Zig is freely available, it is still at version 0.x. I’ve been able to use Zig to write small programs, but most of those programs no longer work because the current compiler (and/or standard library) has changed since then.

                  In other words, the distinction between the two is less qualitative and more just one of management.

                  1. 2

                    The approach is demonstrably wrong. Nothing is correct the first time except for formal proofs, and there is no evidence that Blow is formally proving Jai’s behavior in either the implementation or specification. Jai will probably have minor revisions.

                    1. 3

                      The way you’re commenting here makes me believe that you’ve had no real contact with the work being done on Jai, and that you only speak in generalities about programming language design, from the perspective of someone who enjoys the theory more than the practice. Jonathan Blow is probably the antithesis of that, and from what I’ve seen he abhors putting theory above pragmatism.

                      This is a language that’s been in the works for about 7-8 years now; there’s nothing “first time” about the compiler at this moment, and it will be even less so at release time, whenever that will be.

                      1. 2

                        Relax. It is not your fault that Blow has not released their code.

              1. 46

                I would happily make a long-term wager that small, bespoke languages will not replace high-level languages in any meaningful way. I have three reasons for believing this:

                1. The relative time/effort savings of “using the right language for the job” are not as great as they seem.
                2. The cost of working with multiple languages is greater than it seems.
                3. Political forces within organizations, and the individual force of laziness, favor high-level languages.
                1. 9

                  Very good points. It might very well be the case that “little languages” will take their place next to Lisp as one of those ideas that showed impressive results but failed to make a meaningful dent in the industry.

                  1. 8

                    As someone who has done a lot of research on program semantics, static analysis and verification, I have gone from thinking little DSLs are ugly to thinking they are the future (as you can see from my other comments on Lobsters).

                    The key idea is that verifying programs written in Turing-complete languages is hard. We should either try to automate as much as possible (and here static analysis might be the most practical approach, but it is somehow less popular than other techniques) or switch to DSLs, where verification is much easier.

                    I think what could make DSLs take off is some tooling to construct them (like Racket) paired with some tooling for building verifiers.

                    1. 4

                      An interesting related example to this big languages vs DSLs debate is the design of Julia vs Python in the context of ML.

                      Julia has taken the big Turing-complete language route, whereas Python is just a framework to build little DSLs (or not so little, JAX has around 200 operators). Julia is undoubtedly more elegant, but it’s a lot trickier to implement differentiable programming on a huge language vs a tiny DSL.

                      So, the end result is that nobody uses Julia to build large ML models because autodiff is not robust enough. You can find a very long and illuminating discussion here: https://discourse.julialang.org/t/state-of-machine-learning-in-julia/74385/2

                    2. 8

                      What’s interesting about “little languages” in a Lisp system is that they are no different than libraries. If you use that same argument in non-Lispy languages, “libraries are little languages,” then “little languages” are already incredibly successful and are absolutely the future. The API is a “little language.”

                      Personally, I want these “library languages” to be able to provide more to the higher-level languages they are designed for, but I seem to be on the losing side of this, too. I recently wrote a small wrapper on top of Go’s database/sql to make it easy to do transactions across our database abstraction, and, of course, the only way to make this work reasonably was to take a func() error as a sort of “block” and, on error, roll back, else commit. But of course, most Go programmers would cringe at this, and even I cringe at it in some ways.

                      1. 3

                        Lisp is a funny comparison to make.

                        One thing I like about Lisp (Common Lisp, specifically) is that the language lets you embed “little languages” and DSLs, so this trade-off doesn’t need to exist there - if I need a new “language” to describe something, I can create it and use it easily without messing around with new tooling, editor support, etc. It’s basically no different than using a library.

                      2. 2

                        I think I mostly agree here… but a couple points:

                        1. Your usage of “high-level” seems off here. Do you mean “general purpose” or just “big”, maybe, instead? I’d argue that many/most of the little languages in question are also high-level by any common definition.
                        2. I think one thing this misses is another route to success for little languages, which is to bypass “traditional software development” altogether and allow end users and domain experts to put together things that are “good enough.” There’s a long history of things like Access and Excel, up to IFTTT and the glut of modern “low-code/no-code” projects. I’d argue some of these absolutely have (and will continue to have) that sort of success.
                        1. 1

                          I might argue with #1, but #2 and #3 are definitely huge factors. If a ton of work is put into reducing the cost of working with multiple languages industry-wide, I can see the strategy becoming more common.

                        1. 5

                          This gross underestimate was due to the fact that I thought parsing and making sense of C is simple. You probably think the same.

                          The section of the C standard that covers the language description has 11 sub-sections and is over 130 pages long. Doesn’t sound simple to me.

                            1. 2

                              It’s a C function and often C++ avoids adding extra qualifications to these.

                              There’s another variant of this that I’ve used. If you have a template class where the template parameter is the size of a char array, then you can return the length via a static method or constexpr field. You can use a deduction guide to generate the right instantiations given a string literal. This is probably overkill for this use, but you need such a thing if you want to use strings as template arguments in C++20 (string literals are pointers and so can’t be used, but a class containing a char array can), and if you have one anyway (I wish the standard library provided one; it’s under 10 lines of code, it’s just annoying to have to add it to every project) then you can use it in this context. This version has the benefit that you get a compile failure if the length can’t be deduced at compile time, whereas the constexpr call will fall back to run-time evaluation.
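
                              Roughly, a sketch of that idiom (fixed_string and tagged are my own hypothetical names; the standard library still doesn’t provide this as of C++20):

                              #include <cstddef>

                              // A char-array wrapper whose length is part of the type, so the
                              // length is always available at compile time.
                              template <std::size_t N>
                              struct fixed_string {
                                  char data[N] = {};
                                  constexpr fixed_string(const char (&s)[N]) {
                                      for (std::size_t i = 0; i < N; ++i) data[i] = s[i];
                                  }
                                  static constexpr std::size_t length() { return N - 1; } // drop the NUL
                              };

                              // Deduction guide: generate the right instantiation from a string literal.
                              template <std::size_t N>
                              fixed_string(const char (&)[N]) -> fixed_string<N>;

                              // C++20: a class containing a char array can be a template argument,
                              // even though a string literal itself cannot.
                              template <fixed_string S>
                              struct tagged {
                                  static constexpr std::size_t name_length = S.length();
                              };

                              static_assert(tagged<"hello">::name_length == 5);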

                              1. 1

                                Because strlen is from the C library, and C doesn’t have constexpr.

                                1. 1

                                  This is not an answer. __cplusplus exists for a reason.

                                  1. 2

                                    I’m not sure what you’re getting at. “__cplusplus” doesn’t exist in C, and so it can’t help at all.

                                    It’s clunky, but that’s how it is.

                                    1. 1

                                      There is such a thing as #ifdef. __cplusplus is not defined in C and is defined in C++, so you can conditionally declare strlen to be constexpr only in C++ and not in C.
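
                                      For example, a minimal sketch of the idea, using a hypothetical my_strlen rather than the real declaration from any standard header:

                                      #include <stddef.h>

                                      #ifdef __cplusplus
                                      // C++ translation units get a compile-time-capable definition.
                                      constexpr size_t my_strlen(const char *s) {
                                          size_t n = 0;
                                          while (s[n] != '\0') ++n;
                                          return n;
                                      }
                                      static_assert(my_strlen("hello") == 5, "usable in constant expressions");
                                      #else
                                      /* C translation units see an ordinary run-time declaration. */
                                      size_t my_strlen(const char *s);
                                      #endif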

                                      1. 1

                                        __cplusplus is not defined in C

                                        But “strlen” is defined in C, and that’s why it can’t be changed. The C++ standards body can’t change C any more than it can change Rust, Posix, or glibc.

                                        1. 2

                                          Sure, but strlen can be unchanged in C and can be constexpr in C++. That doesn’t involve any change to C standard.

                                          1. 1

                                            They can change std::strlen, though, and this kind of difference isn’t unprecedented: std::isinf (C++) is a set of overloaded functions, whereas isinf (C) is a macro.

                                1. 1

                                  @work I’m still changing how our front-end app exports process plans and other workspace attributes to our backend service. I’m just about done, but I’m fixing up tests and handling some corner cases.

                                  @home I’m reading Ivan Yefremov’s Andromeda.

                                  I’d also like to change up my daily schedule to start waking up earlier.

                                  1. 2

                                    As a non-Haskell user, I’m amazed at how poorly it handles JSON parsing. It has the ugliest and most complicated JSON parsing of any language.

                                    Why do all of the libraries insist on de-serializing into user-defined Haskell objects? IME that’s almost never worth the hassle, and it’s 100x easier to just treat the incoming JSON as its own data structure and objects.

                                    1. 9

                                      You absolutely can just parse a json object into a map of keys to json values. That’s how a lot of parsers start out.

                                      The problem is that you need to write functions that work on some of the values inside of that json object. So what do you do? You can pass every function the entire json blob and have it try to extract the elements it needs, and possibly return an error if something was missing or invalid. That leads to a lot of duplicated effort though, since a lot of your functions will be looking at the same data. It’s also a pain to write a bunch of functions that might fail.

                                      An alternative would be to write one function that takes the json blob and tries to get all of the fields that it needs, and if one of them is missing then it fails. If everything exists, then it can call all of the functions you need to call. That would work great as long as you know ahead of time what fields you need and what functions you want to call, but it’s also a bit messy.

                                      It would be ideal if you could just say “here’s a set of common fields that I want to work with, and I’ll check these fields once ahead of time. If they are all present and valid, then I’m good to go; otherwise I’ll report an error”. That’s exactly what these json libraries are doing. You write a type that says “here’s everything I expect to get and how I expect it to look”. Then you do the check once to make sure your json object has everything you need. Once you’ve done that, the rest of your code can happily assume that it’s getting valid data and you don’t have to worry about errors.
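
                                      The idea isn’t Haskell-specific, either. Here’s a rough sketch of the same pattern in C++, assuming the nlohmann/json library and a made-up User payload:

                                      #include <nlohmann/json.hpp>
                                      #include <cstdint>
                                      #include <string>

                                      // The type spells out everything the rest of the program expects.
                                      struct User {
                                          std::string name;
                                          std::int64_t age;
                                      };

                                      // Validate once at the boundary: at() and get<>() throw if a field
                                      // is missing or mistyped. Downstream code never re-checks the JSON.
                                      User parse_user(const std::string &body) {
                                          const auto j = nlohmann::json::parse(body);
                                          return User{
                                              j.at("name").get<std::string>(),
                                              j.at("age").get<std::int64_t>(),
                                          };
                                      }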

                                      1. 1

                                        I guess my point is that a runtime error for a missing key is just as useful (and probably more readable) than a runtime error from typing, so why bother with all of the typing and conversion boilerplate?

                                        1. 3

                                          There’s no reason for readability to be impacted either way, and in general the existence of types isn’t de facto boilerplate. Handling the errors up front simplifies all of the code that comes afterwards, because you are assured that you have good inputs. Ad hoc error handling spread throughout your application causes a lot more boilerplate, because you end up having to handle errors in many more distinct places. It’s also a lot more error prone (you can forget to handle them) and makes testing harder (you have to test every function to make sure it can deal with errors).

                                          1. 1

                                            I’m not arguing against up-front verification or for dispersing error handling throughout the code, though.

                                            What I’m saying is that it’s a little clunky and awkward to use the type system for that purpose. IMO, of course.

                                            1. 2

                                              If you don’t like types for that then you don’t really like types. That’s fine I guess, but I think it’s wrong to say the approach is clunky. It’s not, it’s just not aligned with the way you like to write code.

                                      2. 4

                                        Nothing forces you to work with your own data types, but not doing so is, frankly, insane. We could pass around the Value type everywhere, but anywhere I use it, I have to revalidate all my assumptions about the data: is foo.bar[3].baz a number? And is that number an Integer without any fractional part? Is there a string at home.page.next that is a valid URL?

                                        Most Haskell developers believe strongly in the parse, don’t validate mantra - if I parse my data and force it into only valid shapes, then I know statically that I never need to re-check anything. If I have a Value, I know absolutely nothing other than that I received a bytestring which parsed as valid JSON. If I have a CreatePayment, I know I have a field called createPaymentIdempotencyKey which is a valid UUID, I have a field called createPaymentAmount which contains a valid MoneyAmount, etc. I never need to check those assumptions again - I can’t have got a CreatePayment unless I had valid data.

                                        This also makes applications faster: I don’t have to do error handling all throughout the app. To look something up by its idempotency key, I already know I have a valid UUID, so I just serialise that in my DB interface; I don’t need to first extract it, hope it exists, deserialise it, ensure it’s the right format of string, then convert it; that was all handled by the parser.

                                        Dealing with JSON is fine for toy projects, but you need to get rid of it as soon as possible in anything doing real work. Applications become much simpler when you build a core application which only operates on valid data, and then wrap it in the code that protects it from the incorrect data, like an onion; bytes -> utf-8 -> json -> my data -> business logic data validation -> business logic.

                                        1. 1

                                          None of that makes the code not ugly, clunky, and awkward, though.

                                          And at the end of the day, if “foo.bar[3].baz” isn’t a number and the code expected it to be, it’s going to be discovered at runtime, regardless of the programming language.

                                          The difference is how much extra code has to be written to detect and handle that condition, and that’s where Haskell falls down compared to other languages, IMO.

                                          1. 5

                                            Here’s the valid Haskell version of that:

                                            key "foo" . key "bar" . ix 3 . key "baz" . _Number

                                            I write stuff like this all the time. Really not awkward!

                                            1. 3

                                              On the contrary, absolutely no extra code is written, but in your preferred style, it’s scattered all throughout the codebase - I can never fully know if all my assumptions have been checked everywhere they need to be. But if I need to add a new constraint to my Haskell code, I know exactly where I need to go: to the parser.

                                              I’m guessing you don’t write a whole lot of commercial or production software?

                                              1. 2

                                                On the contrary, absolutely no extra code is written, but in your preferred style, it’s scattered all throughout the codebase - I can never fully know if all my assumptions have been checked everywhere they need to be. But if I need to add a new constraint to my Haskell code, I know exactly where I need to go: to the parser.

                                                I’m not sure you know what my “preferred style” is, just that I don’t like the burdensome, type heavy Haskell way of doing it.

                                                I’m guessing you don’t write a whole lot of commercial or production software?

                                                There’s no need to be condescending. I’ve written enough commercial and production software to realize Haskell adds extra up front development cost but doesn’t eliminate the most expensive bugs.

                                          2. 3

                                            Why?

                                          1. 2

                                            @home I’m reading Stanislaw Lem’s book “Eden”.

                                            And I’m writing a Lisp package to represent and manipulate mazes, with the main goal being to visualize them as OpenGL textures using Blend2D.

                                            And I’m researching upgrades I want to make to my bikes over the winter. I’m thinking about writing a Lisp library to model the bikes and help keep track of different builds.

                                            @work I’m mucking with some of the “data connector” code that’s used to associate attributes and external data with CAD models in our system. First I had to add hyperlinks to our XML format (to the schema, and then the parser), and now I’m adding the ability to use tables from the XML format as “Process Plans,” which we use to describe processes associated with the CAD model.

                                            The existing system reads plain text from Excel and CSV, but going forward we’ll allow more limited rich-text formatting, like hyperlinks. This is turning out to be a bit of a challenge to integrate, due to some assumptions we make about everything being simple strings.

                                            After that, there’s one more change in this ticket, related to how we publish to our back-end server, but I’m not expecting much trouble there.

                                            1. 40

                                              Everyone uses an editor? No no no no… 1000 times no. I hate WYSIWYG editors and what they represent. Putting formatting ahead of content was a horrible idea that tends to survive in the heads of many, even though it has already been proven counterproductive.

                                              Markdown is human-writable, and could be adopted by the masses, for example on messaging apps, social media, etc., if people were introduced to it or required to use it at work or school.

                                              BBCode was very popular in the 2000s, and web forums broke through in popularity well beyond techies.

                                              What if students had to write their school assignments in Markdown? Is it such a complicated thing to ask of them? In what way is MS Word any simpler? It’s not!

                                              1. 9

                                                If you need to include tabular data, markdown is hard, IMO. The original markdown required you to just write HTML for tables, which was no picnic. None of the dialects that have evolved since then are anywhere near as easy as editing a table in Word. And I say this as someone who intensely dislikes Word.

                                                I like writing in markdown, using a plain old text editor. But when I need to insert a table, I use visidata to edit and export github-flavored markdown. I don’t mind it, because I appreciate the other benefits of markdown. I could not claim, with a straight face, that it’s as easy as a WYSIWYG editor would be for creating the document.

                                                (Also, FWIW, markdown has been adopted on discord, and I think most matrix clients do the right thing with it too.)

                                                1. 16

                                                  another nice option is pandoc:

                                                  $ pandoc -f csv -t gfm <<-EOF
                                                          foo,bar,baz
                                                          1,2,3
                                                          4,5,6
                                                  EOF
                                                  | foo | bar | baz |
                                                  |-----|-----|-----|
                                                  | 1   | 2   | 3   |
                                                  | 4   | 5   | 6   |
                                                  
                                                  1. 4

                                                    FWIW, Emacs’ markdown-mode has a few functions that make writing tables easy.

                                                    There’s markdown-insert-table which prompts for the size and alignment and inserts a pre-built table, and even allows tabbing between cells.

                                                    And then there’s a number of markdown-table-* functions for editing them - moving rows, adding columns, etc.

                                                    1. 2

                                                      I wrote my own Markdown/Org-mode-style markup language for my blog. The one thing I do not do is store the posts in my markup language; I store the final HTML render, so I’m not stuck with whatever syntax I use forever (and I’ve changed some of the syntax since I initially developed it). Also, for tables, I use simple tab-separated values:

                                                      #+table Some data goes here
                                                      *foo        bar     baz
                                                      **foo       bar     baz
                                                      3   14      15
                                                      92  62      82
                                                      8   -1      4
                                                      #-table
                                                      

                                                      Whitespace is tabs; the line starting with a single asterisk is the header line; the double asterisk marks the footer. This handles 95% of the tables I generate, and because I store posts in HTML format, it doesn’t matter much that it looks a bit messy here.

                                                      I think most people don’t get what John Gruber was trying to do—make it easier to write blog posts.

                                                    2. 2

                                                      “Putting formating ahead of content was an horrible idea that tends to survive in the heads of many”

                                                      I use Emacs and Org-mode but I have never understood the insistence that those who use anything from LaTeX to Docbook to Markdown are separating content and structure.

                                                      Oh, how I tried to learn LaTeX, until it smacked me in the forehead that I had to compile a document!

                                                      Anyone who types #header ##subheading * bullet while typing (or using autocomplete) is thinking about format and structure while producing content.

                                                      I loathe word processors, but creating a template makes it just as easy to separate content and structure. Even back in the 90s on USENET and other pure-plaintext forums, or RFCs for that matter, it was commonplace to insert ASCII tables and /emphasis/, like I am now with * and /s.

                                                      Nothing has ever stopped anyone from treating a screen like a typewriter or pad of paper and just writing and writing and writing and come back later to add structure and formatting.

                                                      Writing is writing. Editing is editing. Typesetting is typesetting. The only difference now is we all have to do all three, but nothing but our minds prevents us from doing them separately.

                                                      1. 1

                                                        Agreed. The only WYSIWYG editor I’ve ever enjoyed using is TeXmacs, despite its strange window/buffer approach and bugs. I wish every WYSIWYG editor learned from it. The vast majority of them are a complete nightmare. I want to throw my computer every time Slack’s new WYSIWYG message box screws up my formatting.

                                                      1. 2

                                                        Not really a “tweak”, but a couple weeks ago I discovered Alt-PageUp and Alt-PageDown to scroll the other window. So obvious now that I know about it, but I never thought to look for it.

                                                        1. 2

                                                          How noticeable is this in “real life” usage? Are there modes or use-cases that run into GC problems?

                                                          I can believe it, I’m just curious what they are. I don’t recall it ever being a problem for me.

                                                          1. 5

                                                            It was very noticeable to me because I have a lot of minor modes running that send requests and receive large responses, so they create lots of temporary objects, and the slowdowns went from noticeably distracting to my never noticing a GC pass. In practice, the real GCs are probably even faster than the benchmark implies, because most regular GCs would fit inside a single block, and the experiment used a lot more conses than would fit in a single block.

                                                            Edit: I should add that I have gc-cons-threshold set a lot higher than the default as well, so that trades longer pauses for fewer GC passes. I like having as little friction as possible when typing, so I generally don’t want the GC running while in Evil insert mode.

                                                          1. 1

                                                            First weekend in a while that I don’t have social engagements, so I’m resting, reading, getting chores done and going for a long bike ride.

                                                            1. 3

                                                              This is all rather outdated. Not only is an LSP client implementation being added to Emacs, tree-sitter integration will be as well. I note that this isn’t about the features, though; rather, it is about rms’s stewardship of the whole GNU project.

                                                              I think it would be more interesting to chronicle how things went with adding SQLite support to Emacs. rms was opposed, because SQLite allows proprietary extensions, which could be depended on by packages, etc. Now, SQLite support is coming in 29.0, but I missed the part where the maintainers agreed to add it. I think it was settled off-list by Eli, Lars and rms.

                                                              1. 26

                                                                No, it’s not, as it’s a post about how RMS is causing harm by not accepting that the battle behind the GPL-or-nothing mentality is over, not about one specific episode.

                                                                The problem is that RMS refuses to acknowledge that the environment has changed and is intentionally limiting free software at all cost, to try to protect against a threat that no longer exists - seriously, name any company that would want to touch gcc specifically, or more generally anything GPL3, with a 10 foot pole when llvm+clang exist. For any proprietary shop the equation is super simple: llvm+clang are BSD-licensed, so there is no need to even touch the GPL3’d GCC, and llvm+clang are designed from scratch to be usable and embeddable in different ways. The idea that such a shop would willingly risk having one group of engineers make an “export the AST” plugin that was GPL3 as required, and then a different group build a program that read that output, is laughable.

                                                                The addition of LSP to emacs is happening long after other editors, and for C and C++, to my knowledge, all of the servers are built on clang and llvm, which is what the original emacs project that RMS blocked was doing. RMS blocked it because clang+llvm was not sufficiently free. The result is that clang now defines what code completion is for the majority of C/C++ code editors - gcc isn’t involved. Because the fear of people reusing gcc’s amazing codegen led to bad technical choices, most new languages now simply use llvm.

                                                                The longer RMS refuses to accept that the battle is at least different from when gcc was the only free game in town, and so continues to require poor technical decisions and block things that would actually help gcc, the longer gcc continues its slide towards irrelevance. The original thread referenced in the article (where RMS was making policy decisions about a subject he demonstrably did not understand) was ~7 years ago now, which means a person could have started at uni and by now finished a PhD in CS, and through their entire academic life gcc, or any of the FSF compilers or languages, was not remotely relevant or useful to them beyond compiling assignments.

                                                                That’s kind of the point of the article: RMS is fighting a battle that is lost, and because of that the decisions that he unilaterally enforces are in totality harmful. The specific episode the article is referring to is just used as a very clear example of how his approach is harming free software.

                                                                1. 5

                                                                  The goals of the FSF and RMS aren’t necessarily to win a popularity contest and dominate the competition. Beating LLVM and LSP but losing GPL protection would be a loss by their rules.

                                                                  In a lot of ways their concerns are more relevant than ever. Just the other day there was an article here (or HN?) about corporations exploiting non-GPL licenses and developers.

                                                                  1. 4

                                                                    Just the other day there was an article here (or HN?) about corporations exploiting non-GPL licenses and developers.

                                                                    There are many such articles. I generally don’t understand them.

                                                                    All my personal open-source work is BSD licensed. I BSD my stuff because I want it to be useful to the widest possible audience of developers in the widest possible set of situations. If they then exercise the freedom that I explicitly chose to give them, I don’t see how that’s “exploiting” either the software, or me. I also have a day job to pay the bills and I make heavy use of permissive-licensed software at that day job. Sometimes it’s even software that I wrote years ago – am I “exploiting” myself when I use it?

                                                                    The only way it could be “exploitation” is if I expected companies to pay me or to do their own maintenance. But I don’t expect them to, and there’s no major copyleft license I’m aware of that would force them to – even if I AGPL something, that won’t force them to pay me or do maintenance.

                                                                    And I know the real thing people are getting at is a claim that it’s “exploiting” because the companies can modify the software without being forced by the license to release their modifications to the world, but if I wanted to force that I’d pick a copyleft license and I very explicitly did not pick a copyleft license. I’m not obligated to feel “exploited” because someone else says I should – I licensed my software with no requirements for recipients to release their modifications, and I knew exactly what I was doing when I did so. So please do not get angry on my behalf, or declare that I am being “exploited” or whatever. I’m part of an ecosystem that works just fine for me, even if it’s not what you think I should want.

                                                                  2. 2

                                                                    seriously, name any company that would want to touch gcc specifically, or more generally anything GPL3, with a 10 foot pole when llvm+clang exist.

                                                                    And Clang exists specifically because Apple wanted to integrate a C/C++ language model into their Xcode IDE (which used to use GCC) but couldn’t do it by hooking into GCC internals due to the GPL restrictions.

                                                                    I find copyleft licenses sort of self-defeating for this reason: they create enough limitations for companies to use the software, that at some point a company (they are, after all, the ones with more engineering resources) will find it makes business sense to write their own replacement. The silver lining is that, in a lot of cases, the companies have made those replacements open source rather than proprietary.

                                                                    1. 3

                                                                      And that stirred great competition: clang and gcc pushed each other to make better error messages for c++, and the c++ committee pushed themselves to make it harder to make good error messages for c++ :D

                                                                  3. 2

                                                                    Well, yes. The thing is that good stuff in Emacs tends to be done over RMS’s objections, and those objections are rarely more than tangentially related to user freedom.

                                                                    1. 1

                                                                      The article explicitly mentions Eglot being added with rms’ approval.

                                                                      1. 1

                                                                        The linked approval is a post from 2017, and isn’t really an approval of any kind. Eglot did not exist then. The details have changed a lot since then.

                                                                        1. 1

                                                                          I took that to be saying it’s coming soon, while showing that rms had approved it at some point. But I can understand your side now, thanks.

                                                                          (Given the text of the rms post, it didn’t really seem to me that it could make sense otherwise. It’s not saying soon in 2017.)

                                                                    1. 2

                                                                      Visiting a friend near Portland, OR.

                                                                      Reading Solaris on the plane and before bed.

                                                                      1. 7

                                                                        A decade ago I spent a lot of time learning Haskell and trying to apply that knowledge to practical problems. Maybe my brain was already wired in a different direction, but I failed to finish every non-trivial project I started. During this time I showed the same amount of zeal and evangelism in praising Haskell, whenever I got the chance, that the Rust community is giving us nowadays.

                                                                        Then I tried Scala, and things were better, but I was cheating by writing Scala the Java way.

                                                                        To this day I believe that the masses don’t need functional programming in its fullest, but maybe a few functional features here and there that can make the code more compact. Or maybe I am a failed and bitter functional developer :).

                                                                        1. 3

                                                                          Both Scala and Haskell are very complex languages. If you want to learn statically typed functional programming, Elm can be a cool first experience. If you prefer dynamic, Elixir is good too.

                                                                          1. 3

                                                                            Then you’re in luck, because OCaml is not purely functional :-). It offers you a blend of imperative and functional that allows you to pick the best flavor for a particular task (a bit like Java-style Scala, I’d say). You can have actual pure values where it simplifies your life (caches, messages in concurrent programming, etc.) but you also don’t need to embed in a monad to perform IO.

                                                                            1. 2

                                                                              Ditto. I learned a lot of cool and important concepts from Haskell, and I try to apply those ideas in other languages, but overall I spent too much time fighting with the language to make it worthwhile for me to use Haskell itself.

                                                                              1. 4

                                                                                Haskell, Smalltalk, and Erlang are all languages that I think everyone should learn, but very few people should use. They all take a useful idea to its extreme and learning to think in the ways that they encourage gives you a useful set of tools that can be applied almost anywhere.

                                                                            1. 6

                                                                              As a full-time C++ dev who actually likes working with C++, I really don’t like language changes like this.

                                                                              On one hand I agree this syntax is convenient, and subjectively “better” than the existing syntax, and I understand the language needs to grow to stay relevant, but I feel like big syntax changes aren’t productive.

                                                                              It’s becoming a little ridiculous.

                                                                              C++ is already big and complicated in the worst way: multiple, incompatible ways to do things, each with nuances and “gotchas” that make them potentially dangerous in certain situations, and there’s usually no clear or obvious way to choose between them, making the language hard to use and teach.

                                                                              Deprecation doesn’t help, because outside of the big tech companies nobody can afford to go back and update debugged, working code, so most commercial systems just compile in C++11 mode or whatever, and in practice the language only grows.

                                                                              And as an outside observer of the standards process, I see no clear direction or design goals for most changes, except that notable people and “experts” in the community proposed them, or somebody found a feature convenient and had the political savvy to get it adopted. So-and-so at a FAANG read a book about feature “foo” in language Bar, so now there’s a proposal to cram it into C++.

                                                                              Meanwhile, learning arbitrary new C++ changes takes away energy from learning new, better-designed languages without all of the baggage. C and C++ were designed for an obsolete time in computer history. There are old languages that were forward thinking, with modern features, that would be great for new development (cough, Common Lisp :-), but C++ isn’t one of them. By all means, learn new techniques, and apply them in C++, but not every little thing needs to be added to the language.

                                                                              That said, nobody’s forcing me to use C++, and there’s a lot of new languages to move to, so I guess it’s my own problem…

                                                                              1. 3

                                                                                C++ is already big and complicated in the worst way: multiple, incompatible ways to do things, each with nuances and “gotchas” that make them potentially dangerous in certain situations, and there’s usually no clear or obvious way to choose between them, making the language hard to use and teach

                                                                                this is exactly why something like cppfront is needed, to make bold syntactic and semantic changes that can attempt to regularize the language without being overly shackled to the existing state of affairs. it provides a clean upgrade path for people who are able to use it; for everyone else there’s the more conservative evolution of c++.

                                                                                1. 3

                                                                                  Languages grow and adapt. C++, Rust, C#, Java, Python, Go, OCaml, Javascript. With research and advances in computing, we are always going to find new, potentially better ways of expressing our programs. And many would argue that C++ isn’t evolving fast enough; I’m generally in that camp. Having to wait for some of the stuff that’s coming in C++23 is a bit frustrating.

                                                                                  A lot of “modern” codebases won’t work with the original versions of a lot of those languages mentioned. Sure, C++ is probably one of the hardest ones to cope with in terms of change I think, but engineering is hard.

                                                                                  1. 2

                                                                                    I only hear about C++ language changes as a cautionary tale.

                                                                                    I don’t hear Java devs complaining it has jumped the shark, and Java is super old by now. C# also managed to survive pretty long and remain coherent. JavaScript got only minor complaints about the dense ES6 syntax, but once everyone got used to it, it’s doing very well. PHP managed to bury a lot of its early mistakes, despite having huge install base and backwards compatibility liability. Rust users welcome its changes with “omg, finally!”. Python3 screwed up, but even they’re getting back on track now.

                                                                                    There’s something unique about C++ that makes it keep adding partial fixes that get more partial fixes every 6 years.

                                                                                    1. 2

                                                                                      I was thinking about that a bit after I posted last night.

                                                                                      So far the history of languages has been to throw them away and create new ones, but maybe the future is to adapt the existing language to the current needs. I still feel like C++ isn’t the best language for that, but it doesn’t hurt to try.

                                                                                      Ironically, Lisp was designed with that kind of growth and evolution in mind, but it never really panned out for other reasons.

                                                                                    2. 3

                                                                                      multiple, incompatible ways to do things, each with nuances and “gotchas” that make them potentially dangerous in certain situations, and there’s usually no clear or obvious way to choose between them, making the language hard to use and teach.

                                                                                      This. Plus, the problem is not just an artifact of C compatibility; it’s an ongoing issue with the recent additions to the C++ standard.

                                                                                      I was very annoyed by the C++11 “universal and uniform initialization” syntax, precisely because it is not universal and uniform. It looks like one faction of the language committee wanted to use the brace initialization syntax for this, and another faction wanted to use the same syntax for aggregate initialization, so they compromised and overloaded the syntax to mean “universal” initialization for some types, and aggregate initialization for other types. So it’s not universal: there’s a gotcha that you need to understand before you can safely use this syntax in generic code.
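
                                                                                      A minimal example of the gotcha (std::vector has an initializer_list constructor, so braces change meaning with the type):

                                                                                      #include <vector>

                                                                                      int main() {
                                                                                          std::vector<int> a(3); // three zero-valued elements (plain constructor)
                                                                                          std::vector<int> b{3}; // ONE element with value 3: braces prefer the
                                                                                                                 // initializer_list constructor when one exists
                                                                                          struct Point { int x, y; };
                                                                                          Point p{1, 2};         // same braces, but aggregate initialization
                                                                                          // So in generic code, T t{args...}; can pick an initializer_list
                                                                                          // constructor, another constructor, or aggregate initialization,
                                                                                          // depending on T.
                                                                                          (void)a; (void)b; (void)p;
                                                                                      }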

                                                                                      Ad hoc overloading, where the same syntax means semantically incompatible different things depending on argument types, can be found throughout the language. It makes the language hard to use by creating “gotchas”, and it works against generic programming.

                                                                                      My suggestions for designers of future programming languages: support generic programming.

                                                                                      1. Do not use ad-hoc overloading anywhere in the language, because it breaks generic programming.
                                                                                      2. However, do use “principled overloading”, where all of the overloaded meanings are semantically compatible and are different implementations of the same algebraic structure, satisfying a common set of axioms. This is important; it’s what makes generic programming possible (see the sketch below).
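
                                                                                      A small sketch of that distinction in C++ terms (process and count_of are invented names for illustration):

                                                                                      #include <cstddef>
                                                                                      #include <string>
                                                                                      #include <vector>

                                                                                      // Ad-hoc overloading: same name, semantically unrelated meanings.
                                                                                      // Generic code calling process(x) cannot rely on any shared contract.
                                                                                      void process(int /*priority*/) { /* "run at this priority" */ }
                                                                                      void process(const std::string & /*path*/) { /* "parse this file" */ }

                                                                                      // Principled overloading: every overload satisfies the same axiom
                                                                                      // ("return the number of elements"), so generic code can rely on it.
                                                                                      template <typename T>
                                                                                      std::size_t count_of(const std::vector<T> &v) { return v.size(); }
                                                                                      std::size_t count_of(const std::string &s) { return s.size(); }

                                                                                      template <typename C>
                                                                                      bool is_empty(const C &c) { return count_of(c) == 0; } // works for both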

                                                                                      Herb Sutter appears to get this, when he says “generic code demands that consistency” with respect to his proposal, which is intended to be a universal and uniform syntax for a variety of pattern matching. Well, he gets half of it anyway. In his video, he says “do not needlessly use divergent syntax”, because it breaks generic programming.

                                                                                      But, Sutter’s proposal nevertheless introduces ad-hoc overloading. For one, the “is” operator is overloaded for two incommensurate cases:

                                                                                      • T1 is T2 means “the value set of type T1 is a subset of or equal to the value set of T2”.
                                                                                      • V is T means “the value V is contained within the value set of type T”.

                                                                                      If you accept my “value set” metaphor of types, then these two operations correspond to T1 ⊆ T2 and V ∈ T in set theory. Different operator symbols are used because they aren’t the same thing. Or in the Julia language, which is designed from the ground up for generic programming, these two operations are T1 <: T2 and isa(V,T).

                                                                                    1. 1

                                                                                      Visiting my mom and sister, riding motorcycles, and eating out too much.

                                                                                      1. 5

                                                                                        Interesting project. I await the eventual showdown between C++2 and Carbon.

                                                                                        What I’d really like to see is a new language that is well designed, and also highly interoperable with both C++ and Rust. Because I’d like to write apps using best of breed libraries chosen from both the C++ and Rust worlds, without creating a shim layer around a library before I can use it. I don’t have high hopes that this will exist, because of the difficulty, and also because it’s probably heresy to language purists in the Rust and C++ camps.

                                                                                        1. 1

                                                                                          I’m familiar with the difficulty of using C++ from other languages, but what’s the issue using Rust? I’m guessing the borrow checker is the biggest hurdle?

                                                                                          1. 8

                                                                                            Rust explicitly doesn’t conform to any external ABI. This means no binary compatibility with anything that isn’t built with the same exact version of the compiler; in practice it’s not quite as strict as that, but generally if you want to link a Rust library into a Rust program you must start by building the library from source.

                                                                                            You CAN write Rust code that explicitly uses the C ABI, so you can create static or dynamic libs that are callable from other languages. But it’s basically as much work as writing an FFI wrapper layer, and there are many Rust features you can’t use in it, such as generics, traits/trait objects, etc.
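
                                                                                            A minimal sketch of that escape hatch (the function is mine, purely illustrative). Built as a cdylib, this is callable from C or C++, but note that only concrete, C-representable types can appear in the signature; no generics, no traits:

                                                                                            // Exported with an unmangled name and the platform C calling
                                                                                            // convention, so C and C++ can link against it directly.
                                                                                            #[no_mangle]
                                                                                            pub extern "C" fn add_i64(a: i64, b: i64) -> i64 {
                                                                                                a + b
                                                                                            }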

                                                                                            1. 3

                                                                                              For a concrete example, I’d like to use the egg library in a C++ program. It has interfaces like this:

                                                                                              pub struct EGraph<L: Language, N: Analysis<L>> {
                                                                                                  pub analysis: N,
                                                                                                  pub clean: bool,
                                                                                                  /* private fields */
                                                                                              }
                                                                                              

                                                                                              where Language and Analysis are traits. To use this library directly from another language without creating a wrapper, the language needs to understand Rust traits. (Also, what @icefox said.)
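
                                                                                              For a taste of what the wrapper layer looks like, here’s a hedged sketch (the egg types are real; the exported functions are mine): you fix one concrete instantiation of the generic EGraph and hide it behind an opaque pointer, losing the generality in the process.

                                                                                              use egg::{EGraph, SymbolLang};
                                                                                              
                                                                                              // Fix the generic parameters up front; L and N cannot cross a C ABI.
                                                                                              type CEGraph = EGraph<SymbolLang, ()>;
                                                                                              
                                                                                              #[no_mangle]
                                                                                              pub extern "C" fn egraph_new() -> *mut CEGraph {
                                                                                                  // The caller only ever sees an opaque handle.
                                                                                                  Box::into_raw(Box::new(CEGraph::default()))
                                                                                              }
                                                                                              
                                                                                              #[no_mangle]
                                                                                              pub unsafe extern "C" fn egraph_free(ptr: *mut CEGraph) {
                                                                                                  if !ptr.is_null() {
                                                                                                      drop(Box::from_raw(ptr));
                                                                                                  }
                                                                                              }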

                                                                                            2. 1

                                                                                              To get some idea of feasibility, let’s consider how Rust would need to change to allow C++ interop without a shim layer. (Rust is highly interoperable with itself.)

                                                                                              To be able to use any C++ library, you’d need to be able to use overloads and instantiate templates. For completeness, that requires matching C++’s notion of what the known-size integers are aliases of (long etc.) and the distinction between e.g. char16_t and uint16_t. Can a language simultaneously be well-designed and fit C++ overload resolution shimlessly?
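
                                                                                              Rust’s existing C interop already shows the integer-alias half of the problem. A small illustration (labs really is libc’s long labs(long); the rest is mine):

                                                                                              use std::os::raw::c_long;
                                                                                              
                                                                                              extern "C" {
                                                                                                  fn labs(x: c_long) -> c_long;
                                                                                              }
                                                                                              
                                                                                              fn main() {
                                                                                                  // c_long is 32 bits on 64-bit Windows but 64 bits on 64-bit
                                                                                                  // Linux, so hard-coding i64 here would be wrong on one of them.
                                                                                                  let magnitude = unsafe { labs(-42) };
                                                                                                  println!("{}", magnitude);
                                                                                              }

                                                                                              A shimless C++ layer would need this alias mapping, plus the char16_t-versus-uint16_t distinction, plus full overload resolution on top of it.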

                                                                                              1. 1

                                                                                                Herb Sutter says that C++2 will be

                                                                                                10x simpler than C++, type-safe and memory-safe by construction. You still have seamless interoperability with all C++ code via module import, but not via #include.

                                                                                                Assuming this is possible, this is the kind of language that I’d like to extend with seamless Rust interoperability via module import.

                                                                                                In my version of such a language, the core language would be significantly simpler than either C++ or Rust. However, some additional C++ specific complexity would be exposed when referencing members of a C++ module, and likewise for Rust.

                                                                                                I am not a fan of C++ overload semantics, but these would need to be used when compiling a call to a function M::f, where M is a C++ module. So maybe you would pay the cost of dealing with C++ overload semantics only if you import a C++ module.

                                                                                            1. 12

                                                                                              What a crappy writeup… the amount of time given to Fossil does it no justice. I’ve been using it locally for a few months now, hosting multiple repos, and I’ve had such a good experience. It’s so easy to run from a jail on my TrueNAS, and even to add offsite backups, because all the projects sit in the same directory with a single file each. For open source I think you could easily do some sort of “Fossil localdev to public Git” flow. The focus on Windows/WSL is also annoying, but I suppose it allows the whole post to be dismissed by folks who use neither. Hopefully the mention of all the different projects sparks folks’ interest. I think it’s fun to tinker with different VCS tools.

                                                                                              1. 12

                                                                                                The focus on Windows/WSL is also annoying, but I suppose it allows the whole post to be dismissed by folks who use neither.

                                                                                                Windows compatibility is really interesting: it’s important for a lot of users, but it’s not something a lot of developers have the machines, interest, or even awareness to address. Anything that wants to seriously compete with Git would need to run natively on Windows without WSL.

                                                                                                1. 4

                                                                                                  But Fossil does (it’s a single drop-in executable via either winget or scoop), and Git not only runs fine on Windows, it’s the default SCM Microsoft uses internally these days. This would be like evaluating Subversion on Linux by running TortoiseSVN on WINE.

                                                                                                  1. 6

                                                                                                    Git runs on Windows but it’s not “fine” - it’s painfully slow and a little finicky to set up.

                                                                                                    Personally it doesn’t bother me enough to run through WSL, but I’ve heard people suggest it.

                                                                                                    It’s slow enough that for big operations I’ll occasionally switch over to the shell and manually run git commands instead of using Magit, because Magit will often run multiple git commands to get all of the information it needs, and it just slows down too much.

                                                                                                    1. 5

                                                                                                      Context for this response: I did Windows dev professionally from 2005 to 2014, and as a hobbyist since then. Half the professional time was on an SCM that supported both Mercurial and Git on Windows.

                                                                                                      Git runs on Windows but it’s not “fine” - it’s painfully slow and a little finicky to set up.

                                                                                                      Setup is literally winget install git (or scoop install git, if you’re weird like me), aaaand you’re done–or at least in the same git config --global hell everyone on Linux is in. Performance has been solid for roughly four years if you’re on an up-to-date Git. It’s not up to Linux standards, because Git relies on certain key aspects of Unix file and directory semantics, but there are official Microsoft tools to handle even that (largely through the Windows equivalent of FUSE).

                                                                                                      Running anything through WSL on Windows files will perform atrociously: WSL1 is slower than native but mostly okay, while WSL2 serves Windows files over the 9P protocol, so you’re doing loopback network requests for every file op. I do run Git in WSL2, but only when working on Linux software, where it’s running natively on Btrfs. You’re trying to lick a live propeller if you use WSL2 Git on Windows files.

                                                                                                      I have zero experience with Magit on Windows because Emacs on Windows is, in my opinion, way too painful to deal with. I love Emacs on *nix systems! Keep it! It’s awesome! This is just about Windows. And in that context, things like Emacs assuming it’s cheap to fork a new process–which it is on Linux, but not on Windows–can make a lot of Emacs stuff slow that doesn’t need to be. That said: if you’re using native Git, and not e.g. Cygwin or WSL1 Git, it should perform well out of the box.

                                                                                                      1. 4

                                                                                                        To clarify, most of the finicky setup on Windows was related to SSH keys, because Windows doesn’t support SSH well. Eventually I ended up getting it working with PuTTY, IIRC.

                                                                                                        I have the opposite experience with Emacs on Windows. It more or less “just works” for me, and it’s really the only way I can tolerate using Windows for development. Some things are slower (basically anything that uses a lot of fork/exec, like find-grep), but for the most part it’s the same as on Linux and OSX, just a version or two behind :-/

                                                                                                        I suspect we just have different expectations as far as Git performance goes, though. I’m using the latest version (as of a couple months ago) from https://gitforwindows.org/, have “core.fscache” turned on plus other “tricks” I found via StackOverflow (a lot of people think it’s slow on Windows) to speed things up, and it’s still noticeably slower than on Linux - especially for large commits with big diffs.

                                                                                                        1. 5

                                                                                                          As a reminder, this article was about Git versus other SCMs. That said:

                                                                                                          To clarify, most of the finicky setup on Windows was related to SSH keys, because Windows doesn’t support SSH well

                                                                                                          SSH and ssh-agent are built in since at least Windows 10. It’s directly from Microsoft, requires no third-party dependencies, and integrates directly with the Windows crypt store and Windows Services manager.

                                                                                                          I suspect we just have different expectations as far as Git performance, though

                                                                                                          Git on Windows does perform meaningfully worse than on Linux, for two main reasons: one is generic (sort of) to Windows, the other is partially Git-specific. On the Windows front, the virus scanner (Windows Defender) slows things down by a factor of 4 to 10, so I would disable it on your source directories. The second issue is that NTFS stores the file list directly in the directory files; this hurts any SCM-style edit operation, but it’s particularly bad with Git, which assumes those operations are cheap. That’s the partially Git-specific one.

                                                                                                          In the context of this article, though, Git should be performing on par with the other SCMs for local ops. That’s a separate issue from the (legitimate!) issues Git has on Windows.

                                                                                                          1. 6

                                                                                                            SSH and ssh-agent are built in since at least Windows 10.

                                                                                                            That’s a tiny bit misleading. Windows 10 now includes them but it certainly did not include them when it shipped in 2015.

                                                                                                            1. 4

                                                                                                              Eh, that’s fair; Microsoft’s decision to call over half a decade of Windows updates “Windows 10” leads to a lot of confusion. But in this case, the SSH bits I’m talking about were added in 2018—five years ago. That’s before React Hooks were public, or three Ubuntu LTS versions ago, if you want a fencepost.

                                                                                                              1. 3

                                                                                                                That’s definitely a bit older than I thought. If I had to answer when it shipped in mainstream builds without looking it up, I’d have said it was a 20H1 feature. At any rate, I wasn’t calling it new so much as saying that “at least Windows 10” reads as “2015” to me.

                                                                                                          2. 3

                                                                                                            I have the opposite experience with Emacs on Windows. It more or less “just works” for me, and it’s really the only way I can tolerate using Windows for development. Some things are slower (basically anything that uses a lot of fork/exec, like find-grep), but for the most part it’s the same as on Linux and OSX, just a version or two behind :-/

                                                                                                            Same. Emacs on Windows is good. I use it for most of my C# development. If you want to not be a version or two behind, let me point out these unofficial trunk builds: https://github.com/kiennq/emacs-build/releases

                                                                                                        2. 2

                                                                                                          Git runs on Windows but it’s not “fine” - it’s painfully slow and a little finicky to set up.

                                                                                                          I’d say that setup is not finicky if you use the official installer — these days it sets up the options you absolutely need. It’s still painfully slow, though, even with the recently-ish-improved file watcher. It’s generally OK to use from the command line, but it’s not fast enough to make Magit pleasant, and it’s too slow to have git status in your PowerShell prompt.

                                                                                                        3. 2

                                                                                                          Thanks for pointing out the package managers for Windows. I saw brew is supposed to work as well, but I have no context other than a cursory search.

                                                                                                      2. 5

                                                                                                        Her use of WSL1 would be curious even in 2021; I just don’t get why one would do that.

                                                                                                      1. 8

                                                                                                        I have a hard time imagining a potential Git replacement actually reaching critical mass and replacing Git, at least in the next 5-10 years. It’s not perfect by a long shot but it seems to be good enough, and the associated costs of replacing it are enormous. At this point I think it’d be like trying to replace Imperial units in the U.S. or convincing the countries that drive on the wrong side of the road to drive on the other side… are there good arguments to do things differently? Yes. Can they overcome inertia, habit, and the costs of switching? No.

                                                                                                        1. 6

                                                                                                          I agree; the reason git got so popular so fast is that any VCS is going to be miles and miles better than no VCS.

                                                                                                          1. 12

                                                                                                            But git didn’t replace no VCS. It mostly replaced CVS and SVN, and then displaced Mercurial.

                                                                                                            1. 7

                                                                                                              I disagree. While git did replace those tools for some people, git (and GitHub) mostly replaced a complete lack of version control.

                                                                                                              1. 5

                                                                                                                I guess the timing of Git simply matched the wider introduction of doing VCS at all. But before the popularity of Git, there was GForge (and of course SourceForge, its main user), which was initially CVS-only. For example, Ruby gems were typically developed on RubyForge, there was gnu.org for GNU projects, etc. I think when The Pragmatic Programmer book came out, that was a big influence on people to start using version control.

                                                                                                                People really were using version control before Git became popular, and as it became popular, doing VCS became popular as well. Then, Ruby on Rails’ massive hype train also boosted Git (and GitHub) usage. But that might just be my perspective, as I was doing Ruby development at the time (although GitHub being one of the first big commercial applications written in Rails may have had something to do with its rise to fame).

                                                                                                                1. 1

                                                                                                                  Yeah. It definitely popularized the idea of version control in a way that I hadn’t observed before then.

                                                                                                            2. 2

                                                                                                              It doesn’t seem that far-fetched to me, but then again I watched Git mostly replace Subversion, Mercurial, CVS, and the rest.

                                                                                                              On the other hand, it didn’t take much to convince people Git was better than the other systems because they all had big limitations and weren’t nearly as flexible. Mercurial was closest, but it doesn’t handle things like branching and forking as well as Git (IMO, I guess).

                                                                                                              All that said, articles like this are kind of silly. It’s not all or nothing. If the author doesn’t like Git, they don’t have to use it. But everybody else can choose for themselves, too, and nowadays they mostly choose Git.

                                                                                                              1. 3

                                                                                                                but then again I watched Git mostly replace Subversion, Mercurial, CVS, and the rest.

                                                                                                                Me too! Which is why I don’t see anything supplanting Git anytime soon. People largely unified behind a single solution and it’s reached a point where it’s ubiquitous and just part of the landscape. It’s hard for me to picture something that’s either Git++ as a drop-in replacement with additional goodness (why not just add to Git?), or that’s so good that projects would tolerate the friction of adopting something that’s not part of the standard toolset. Git has a few holdouts, sure, but the friction for startup efforts would be enormous.