1. 18

    I actually wound up switching off i3 (well, sway, but they’re basically the same) because I kept getting things into weird situations where I didn’t understand how the tiling works. Containers with only one child, that sort of thing.

    river, my current wm, has an interesting model: the layout management is done in an entirely separate process that communicates over an IPC mechanism. river sends it a list of windows, and the layout daemon responds with where to put them.
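
    To make that concrete, here’s a toy sketch in Common Lisp of what such a layout callback boils down to (this is a hypothetical illustration, not river’s actual wire protocol): given a window count and the usable area, return one rectangle per window, here as a main-window-plus-stack arrangement.

        ;; Hypothetical sketch of a layout generator: windows in, geometry out.
        ;; Rectangles are (x y width height); window 0 gets the left half,
        ;; the rest are stacked on the right.
        (defun layout (view-count width height)
          (if (<= view-count 1)
              (list (list 0 0 width height))
              (let ((main-w (floor width 2))
                    (stack-h (floor height (1- view-count))))
                (cons (list 0 0 main-w height)
                      (loop for i from 0 below (1- view-count)
                            collect (list main-w (* i stack-h)
                                          (- width main-w) stack-h))))))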

    Also, since you brought it up: sway is almost entirely compatible with i3. The biggest missing feature is layout save/restore. But it can do one thing i3 can’t do, and that’s rearranging windows by dragging them.

    1. 26

      That’s pretty much why I wrote river. I was using sway beforehand as well but grew increasingly frustrated with how much implicit state i3-style window management required me to keep in my head and how unpredictable that state makes managing windows if your mental model/memory of the tree isn’t accurate.

      1. 19

        link to the project: https://github.com/ifreund/river

        Looks interesting!

      2. 6

        I’m in the same boat (pre-switch). I use sway but, after many years, still don’t really understand how I sometimes end up with single-child (sometimes multi-generational) containers.

        My personal ideal was spectrwm, which simply had a single primary window and then, to the right, an infinitely subdividing tower of smaller windows which could be swapped in. I briefly toyed with the idea of writing a wayland spectrwm clone.

        1. 7

          That sounds exactly like the default layout of dwm, awesomewm, xmonad, and river. If you’re looking for that kind of dynamic tiling on wayland feel free to give river a try!

          1. 4

            I will! I had some trouble compiling it last time I tried. But I will return to it.

            1. 4

              Feel free to stop by #river on irc.libera.chat if you run into issues compiling again!

          2. 1

            Your reason for liking spectrwm (and xmonad’s model, etc.) is exactly the reason I use tiling window managers like i3, exwm and StumpWM: I don’t like that dynamic at all ;-)

            No accounting for different tastes.

            Is there a name for those two different tiling models?

            1. 1

              automatic vs manual?

              1. 1

                I’ve seen the terms static (for when the containers have to be created by the user) vs dynamic used.

                ArchLinux seems to call them dynamic vs manual. See the management style column https://wiki.archlinux.org/title/Comparison_of_tiling_window_managers

            2. 1

              I was also quite lost about the way tiling works at the beginning. There aren’t many resources on this subject. It seems people just get used to it and learn to avoid creating these useless containers. Luckily, that was my case.

            1. 1

              Just say no to cleaning up git history, people! You don’t look at it often enough for it to pay off.

              Unfortunately, I don’t have data to back up this claim. :/

              1. 13

                I don’t have the data either, but I do have the experience. Bad commit messages and dirty history are the bane of my life. The advice in this post is excellent.

                1. 5

                  I look at it easily more than 10 times a day. I do think you are on the right track as to why a large population of developers don’t take the time to write useful commit messages: they think of it as a write-only medium. If I used the git command line or the GitHub web UI to navigate history I wouldn’t check the VC history so often.

                  1. 4

                    If people looked more often, perhaps they would care more about their commits.

                    I have a gutter with commit messages for each line/chunk in my editor for much of the day since it gives me some context about why a line/function looks as it does.

                    1. 1

                      Right, I am totally for writing good commit messages! They should contain a description of the changes and a link to the ticket. That way you get requirement + architecture idea. But I never look at the graph structure.

                    2. 3

                      I use our git history constantly (we have a clean, well-organized one). I work at a Very Large Enterprise, too. Could your experience be related to not working within a space where the history is clean enough to be reliably usable, rather than it being worthless?

                    1. 4

                      So this prevents you from being able to confidently answer questions such as: [..] “what was the state of main as of a given date?”.

                      Although it is not without caveats, when talking about a refname git accepts a date. Quoting man 7 gitrevisions:

                          [<refname>]@{<date>}, e.g. master@{yesterday}, HEAD@{5 minutes ago}
                                 A ref followed by the suffix @ with a date specification enclosed in a brace pair (e.g.  {yesterday}, {1 month 2
                                 weeks 3 days 1 hour 1 second ago} or {1979-02-26 18:30:00}) specifies the value of the ref at a prior point in time.
                                 This suffix may only be used immediately following a ref name and the ref must have an existing log
                                 ($GIT_DIR/logs/<ref>). Note that this looks up the state of your local ref at a given time; e.g., what was in your
                                 local master branch last week. If you want to look at commits made during certain times, see --since and --until.
                      

                      This doesn’t detract from the point of the post, but I thought I’d share.
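
                      For example (dates hypothetical; both commands are plain git):

                          # state of your local master as of a date (requires reflog entries that old)
                          git show 'master@{2021-01-01}'
                          # vs. commits made before a date, which needs no reflog
                          git log -1 --until='2021-01-01' master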

                      1. 7

                        One wrinkle that you may not realize from reading this article is that different human languages have different sort orders for the same characters* — for example, IIRC the character “å” has a different position in the alphabet in Swedish and Norwegian — so proper string sorting requires knowing what language the strings are in.

                        …which is not necessarily the same as the system’s current locale. If I’m bilingual I may have my OS configured for English but still work with a lot of German text.

                        * Oh, and this doesn’t just happen for those weird foreign characters, it can apply to ASCII too. Spanish has special sort rules for “ch” and “ll” — basically they get treated as though they were a single letter that comes after “c” and “l” respectively.

                        (Again IIRC. I worry I’m getting these examples wrong from memory. The ICU documentation, which is where I learned this, has the straight dope.)

                        1. 2

                          Is there a similar sorting issue for the Dutch “ij” which is I think the ascii-indication of ÿ?

                          1. 2

                            I found a big list of language collation rules, and it says “ij” isn’t treated specially any more … except in phone books.

                          2. 2

                            This doesn’t detract from the main point of your post.

                            Spanish has special sort rules for “ch” and “ll” — basically they get treated as though they were a single letter that comes after “c” and “l” respectively.

                            ch and ll have not been considered their own letters in Spanish for around ~25 years. I was in elementary school when they stopped being their own letters (I want to say ’94). Also (this I’m less sure of, as I was only starting to use dictionaries back then) they didn’t affect the sorting: in the dictionary a word with ‘ce’ in the middle would be found before a word with ‘ch’ in the middle (e.g. ‘hace’ before ‘hacha’). The only difference is that a word starting with ch would be in a separate section of the dictionary.

                          1. 2

                            First-class packages are the most underrated feature of lisp. AFAIK only perl offers them fully, but with very bad syntax (globs). Most macros merely suppress evaluation, and this can be done using first-class functions. Here is my question for lispers: if you can use lex/yacc and can write a full-fledged interpreter, do you really need macros?

                            1. 7

                              Most macros merely suppress evaluation, and this can be done using first-class functions.

                              I strongly disagree with this. Macros are not there to “merely suppress evaluation.” As you point out, they’re not needed for that, and in my opinion they’re often not even the best tool for that job.

                              “Good” macros extend the language in unusual or innovative ways that would be very clunky, ugly, and/or impractical to do in other ways. It’s in the same vein as asking if people really need all these control flow statements when there’s ‘if’ and ‘goto’.

                              To give some idea, cl-autowrap uses macros to generate Common Lisp bindings to C and C++ libraries using (cl-autowrap:c-include "some-header.h"). Other libraries, like “iterate” add entirely new constructs or idioms to the language that behave as if they’re built-in.
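
                              A toy illustration of that “new construct” idea (not from those libraries): standard CL has no while loop, but a three-line macro adds one that reads as if it were built in.

                                  (defmacro while (test &body body)
                                    "Repeat BODY as long as TEST evaluates to true."
                                    `(loop (unless ,test (return))
                                           ,@body))

                                  (let ((i 0))
                                    (while (< i 3)   ; used like any built-in construct
                                      (print i)
                                      (incf i)))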

                              Here is my question for lispers: if you can use lex/yacc and can write a full-fledged interpreter, do you really need macros?

                              Lex/Yacc and CL macros do very different things. Lex/Yacc generate parsers for new languages that parse their input at runtime. CL macros emit CL code at compile time which in turn gets compiled into your program.

                              In some sense your question is getting DSLs backwards. The idea isn’t to create a new language for a special domain, but to extend the existing language with new capabilities and operations for the new domain.

                              1. 1

                                Here are examples of using lex/yacc to extend a language

                                1. Ragel compiles state machines to multiple languages
                                2. Swig which does something like autowrap
                                3. The babel compiler uses parsing to add features on top of older JavaScript, like async/await.

                                I am guessing all these use lex/yacc internally. Rails uses scaffolding and provides helpers to generate JS code at compile time. Something like parenscript.

                                The basic property of a macro is to generate code at compile time. Granted, most of these are not built into the compiler, but nothing is stopping you from adding a new pre-compile step with the help of a makefile.

                                Code walking is difficult in lisp as well. How would I know if an expression is a function or a macro? If I wanted to write a code highlighter in vim that highlights all macros differently, I would have a difficult time doing this by parsing alone, even though lisp is an easy language to parse.

                                1. 5

                                  Code walking is difficult in lisp as well. How would I know if an expression is a function or a macro?

                                  CL-USER> (describe #'plus-macro)
                                  #<CLOSURE (:MACRO PLUS-MACRO) {1002F8AB1B}>
                                    [compiled closure]
                                  
                                  
                                  Lambda-list: (&REST SB-IMPL::ARGS)
                                  Derived type: (FUNCTION (&REST T) NIL)
                                  Documentation:
                                    T
                                  Source file: SYS:SRC;CODE;SIMPLE-FUN.LISP
                                  ; No value
                                  CL-USER> (describe #'plus-fn)
                                  #<FUNCTION PLUS-FN>
                                    [compiled function]
                                  
                                  
                                  Lambda-list: (A B)
                                  Derived type: (FUNCTION (T T) (VALUES NUMBER &OPTIONAL))
                                  Source form:
                                    (LAMBDA (A B) (BLOCK PLUS-FN (+ A B)))
                                  ; No value
                                  

                                  You underestimate the power of the dark side Common Lisp ;)

                                  In other words … macros aren’t an isolated textual tool like they are in other, less powerful, languages. They’re a part of the entire dynamic, reflective, homoiconic programming environment.
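
                                  And for that specific function-or-macro question there is even a portable, standard answer, no implementation-specific describe output required:

                                      (macro-function 'plus-macro) ; => a function object: the symbol names a macro
                                      (macro-function 'plus-fn)    ; => NIL: not a macro
                                      (fboundp 'plus-fn)           ; => T: fbound, here as an ordinary function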

                                  1. 2

                                    I know that, but can you do the same without using the lisp runtime, by parsing alone?

                                    1. 3

                                      I’m not sure where you’re going with this.

                                      In the Lisp case, a tool (like an editor) only has to ask the Lisp environment about a bit of syntax to check if it’s a macro, function, variable, or whatever.

                                      In the non-Lisp case, there’s no single source of information, and every tool has to know about every new language extension and parser that anybody may write.

                                      1. 1

                                        I believe their claim is that code walkers can provide programmers with more power than Lisp macros. That’s some claim, but the possibility of it being true definitely makes reading the article they linked ( https://mkgnu.net/code-walkers ) worthwhile.

                                      2. 2

                                        Yes. You’d start by building a Lisp interpreter.

                                        1. 1

                                          … a common lisp interpreter, which you are better off writing in lex/yacc. Even if you do that, each macro defines new ways of parsing code, so you can’t write a generic highlighter for loop-like macros. If you are going to write a language interpreter and parser anyway, why not go the most generic route of lex/yacc and support any conceivable syntax?

                                          1. 5

                                            I really don’t understand your point, here.

                                            Writing a CL implementation in lex/yacc … I can’t begin to imagine that. I’m not an expert in either, but it seems like it’d be a lot of very hard work for nothing, even if it were possible, and I’m not sure it would be.

                                            So, assuming it were possible … why would you? Why not just use the existing tooling as it is intended to be used???

                                            1. 2

                                              That’s too small of a problem to demonstrate why code walking is difficult. How about this then,

                                              1. Count the number of s-expressions used in the program
                                              2. Show the number of macros used
                                              3. Show the number of lines generated by each macro and measure line savings
                                              4. Write a linter which enforces stylistic choices
                                              5. Suggest places where macros could be used to minimise code
                                              6. Measure code complexity and do coupling analysis
                                              7. Write a lisp minifier or obfuscator
                                              8. Find all places where garbage collection can be improved and memory leaks can be detected
                                              9. Insert automatic profiling code for every s-expression and list out where the bottlenecks are
                                              10. Write code refactoring tools
                                              11. List the most used functions at runtime to suggest which of them can be optimised for speed

                                              Ironically, all of the above is much easier to do with assembly.

                                              My point is simply this: lisp is only easy to parse superficially, and writing the above will still be challenging. Lexers and parsers are better at code generation, and hence at macros in the most general sense. If you are looking for power then code walking beats macros, and that’s also doable in C.

                                              1. 1

                                                While intriguing, it would be nice if the article spelled out the changes made with code walkers. Hearing that a program ballooned 9x isn’t impressive by itself. Without knowing about the nature of the change it just sounds bloated. (Which isn’t to say that it wasn’t valid, it’s just hard to judge without more information.)

                                                Regarding your original point, unless I’m misunderstanding the scope of code walkers, I don’t see why it needs to be an either/or situation. Macros are a language-supported feature that do localized code changes. It seems like code walkers are not language-supported in most cases (all?), but they can do stateful transformations globally across the program. It sounds like they both have their use cases. Like lispers talk about using macros only if functions won’t cut it, maybe you only use code walkers if macros won’t cut it.

                                                BTW, it looks like there is some prior art on code walkers in Common Lisp!

                                                1. 1

                                                  Okay, I understand your argument now.

                                                  I’ll read that article soon.

                                                  1. 6

                                                    “That’s two open problems: code walkers are hard to program and compilers to reprogram.”

                                                    The linked article also ends with something like that. It supports your argument, given that macros are both already there in some languages and much easier to use. That there are lots of working macros out there in many languages supports it empirically.

                                                    There’s also nothing stopping experts from adding code walkers on top of that. Use the easy route when it works. Take the hard route when it works better.

                                                    1. 6

                                                      Welcome back Nick, haven’t seen you here in a while.

                                                      1. 4

                                                        Thank you! I missed you all!

                                                        I’m still busy (see profile). That will probably increase. I figure I can squeeze a little time in here and there to show some love for folks and share some stuff on my favorite, tech site. :)

                                          2. 1

                                            That kind of is the point. Lisp demonstrates that there is no real boundary between the language as given and the “language” its user creates by extending it with new functions and macros. That being said, good lisp usually follows conventions so that you may recognize whether something is a macro (eg. with-*) or not.

                                        2. 1

                                          Here are examples of using lex/yacc to extend a language

                                          Those are making new languages, as they use new tooling, which doesn’t come with existing tooling for the language. If someone writes Babel code, it’s not JavaScript code anymore - it can’t be parsed by a normal JavaScript compiler.

                                          Meanwhile, Common Lisp macros extend the language itself - if I write a Common Lisp macro, anyone with a vanilla, unmodified Common Lisp implementation can use them, without any additional tooling.

                                          Granted, most of these are not built into the compiler, but nothing is stopping you from adding a new pre-compile step with the help of a makefile.

                                          …at which point you have to modify the build processes of everybody that wants to use this new language, as well as breaking a lot of tooling - for instance, if you don’t modify your debugger, then it no longer shows an accurate translation from your source file to the code under debugging.

                                          If I wanted to write a code highlighter in vim that highlights all macros differently, I would have a difficult time doing this by parsing alone, even though lisp is an easy language to parse.

                                          Similarly, if you wanted to write a code highlighter that highlights defined functions differently without querying a compiler/implementation, you couldn’t do it for any language that allows a function to be bound at runtime, like Python. This isn’t a special property of Common Lisp, it’s just a natural implication of the fact that CL allows you to create macros at runtime.

                                          Meanwhile, you could capture 99.9%+ of macro definitions in CL (and function definitions in Python) using static analysis - parse code files into s-expression trees, look for defmacro followed by a name, add that to the list of macro names (modulo packages/namespacing).
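
                                          A naive version of that scan is only a few lines of CL itself (a hypothetical helper; it misses macro-defining macros, reader tricks and unknown package prefixes, but catches the common case):

                                              (defun top-level-macro-names (path)
                                                "Collect the names of top-level DEFMACRO forms in PATH (naive static scan)."
                                                (with-open-file (in path)
                                                  (loop for form = (read in nil 'eof)
                                                        until (eq form 'eof)
                                                        when (and (consp form) (eq (first form) 'defmacro))
                                                          collect (second form))))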

                                          tl;dr “I can’t determine 100% of source code properties using static analysis without querying a compiler/implementation” is not an interesting property, as all commonly used programming languages have it to some extent.

                                          1. 1

                                            If you can use lex/yacc and can write a full-fledged interpreter, do you really need macros?

                                            I don’t know why you’d think they are comparable. The amount of effort to write a macro is way less than the amount of effort required to write a lexer + parser. The fact that macros are written in lisp itself also reduces the effort needed. But most importantly, one is an in-process mechanism for code generation and the other one involves writing the generated code to a file. The first mechanism makes it easy to iterate on and modify the generated code. Given that most of the time you are maintaining, hence modifying, code, I’d say that is a pretty big difference.

                                            The babel compiler uses parsing to add features on top of older JavaScript, like async/await.

                                            Babel is an example of how awful things can be when macros happen out of process. The core of babel is a macro system + a pluggable reader.

                                            I am guessing all these use lex/yacc internally.

                                            Babel certainly doesn’t. When it started it used estools which used acorn iirc. I think nowadays it uses its own parser.

                                            Rails uses scaffolding and provides helpers to generate JS code at compile time. Something like parenscript.

                                            I have no idea why you think scaffolding is like parenscript. The common use case for parenscript is to do the expansion on the fly, not to generate initial boilerplate.

                                            Code walking is difficult in lisp as well.

                                            And impossible to write in portable code, which is why most (all?) implementations come with a code-walker you can use.

                                            1. 1

                                              If syntax is irrelevant, why even bother with Lisp? If I just stick to using arrays in the native language I can also define functions like this and extend the array language to support new control flow structures:

                                              ["begin",
                                                  ["define", "fib",
                                                      ["lambda", ["n"],
                                                          ["cond", [["eq", "n", 0], 0],
                                                                   [["eq", "n", 1], 1],
                                                                   ["T", ["+", ["fib", ["-", "n", 1]], ["fib", ["-", "n", 2]]]] ]]],
                                                  ["fib", 6]]
                                              
                                            2. 1

                                              Well, if your question is “Would you prefer a consistent, built-in way of extending the language, or a hacked together kludge of pre-processors?” then I’ll take the macros… ;-)

                                              Code walking is difficult in lisp as well. How would I know if an expression is a function or a macro? If I wanted to write a code highlighter in vim that highlights all macros differently, I would have a difficult time with pure code walking alone, even though lisp is an easy language to parse.

                                              My first question would be whether or not it makes sense to highlight macros differently. The whole idea is that they extend the language transparently, and a lot of “built-in” constructs defined in the CL standard are macros.

                                              Assuming you really wanted to do this, though, I’d suggest looking at Emacs’ Slime mode. It basically lets the CL compiler do the work. It may not be ideal, but it works, and it’s better than what you’d get using Ragel, Swig, or Babel.

                                              FWIW, Emacs, as far as I know (and as I have it configured), only highlights symbols defined by the CL standard and keywords (i.e. :foo, :bar), and adjusts indentation based on cues like “&body” arguments.

                                              1. 1

                                                Btw there is already a syntax highlighter that uses a code walker and treats macros differently. The code walker may not be easy to write, but it can hardly be said that it is hard to use.

                                                https://github.com/scymtym/sbcl/blob/wip-walk-forms-new-marco-stuff/examples/code-walking-example-syntax-highlighting.lisp

                                          2. 5

                                            Yes, you absolutely want macros even if you have Lex/Yacc and interpreters.

                                            Lex/Yacc (and parsers more generally), interpreters (and “full language compilers”), and macros all have different jobs at different stages of a language pipeline. They are complementary, orthogonal systems.

                                            Lex/Yacc are for building parsers (and aren’t necessarily the best tools for that job), which turn the textual representation of a program into a data structure (a tree). Every Lisp has a parser, for historical reasons usually called a “reader”. Lisps always have s-expression parsers, of course, but often they are extensible so you can make new concrete textual notations and specify how they are turned into a tree. This is the kind of job Lex and Yacc do, though extended s-expression parsers and lex/yacc parsers generally have some different capabilities in terms of what notations they can parse, how easy it is to build the parser, and how easy it is to extend or compose any parsers you create.

                                            Macros are tree transformers. Well, M4 and the C preprocessor are textual macro systems that transform text before parsing, but that’s not what we’re talking about. Lisp macros transform the tree data structure you get from parsing. While parsing is all about syntax, macros can be a lot more about semantics. This depends a lot on the macro system – some macro systems don’t allow much more introspection on the tree than just what symbols there are and the structure, while other macro systems (like Racket’s) provide rich introspection capabilities to compare binding information, allow macros to communicate by annotating parts of the tree with extra properties, or by accessing other compile-time data from bindings (see Racket’s syntax-local-value for more details), etc. Racket has the most advanced macro system, and it can be used for things like building custom DSL type systems, creating extensible pattern matching systems, etc.

                                            But importantly, macros can be written one at a time as composable micro-compilers. Rather than writing an entire compiler or interpreter for a DSL up front, with all its complexity, you can get most of it “for free” and just write a minor extension to your general-purpose language to help with some small (maybe domain-specific) pain point. And let me reiterate – macros compose! You can write several extensions that are each oblivious to the others, but use them together! You can’t do that with stand-alone languages built with lex/yacc and stand-alone interpreters. Let me emphatically express my disagreement that “most macros merely suppress evaluation”!

                                            Interpreters or “full” compilers then work after any macro expansion has happened, and again do a different, complementary job. (And this post is already so verbose that I’ll skip further discussion of it…)

                                            If you want to build languages with Lex/Yacc and interpreters, you clearly care about how languages allow programmers to express their programs. Macros provide a lot of power for custom languages and language extensions to be written more easily, more completely, and more compositionally than they otherwise can be. Macros are an awesome tool that programmers absolutely need! Without using macros, you have to put all kinds of complex stuff into your language compiler/interpreter or do without it. Eg. how will your language deal with name binding and scoping, how will your language order evaluation, how do errors and error handling work, what data structures does it have, how can it manipulate them, etc. Every new little language interpreter needs to make these decisions! Often a DSL author cares about only some of those decisions, and ends up making poor decisions or half-baked features for the other parts. Additionally, stand-alone interpreters don’t compose, and don’t allow their languages to compose. Eg. if you want to use 2+ independent languages together, you need to shuttle bits of code around as strings, convert data between different formats at every boundary, maybe serialize it between OS processes, etc. With DSL compilers that compile down to another language for the purpose of embedding (eg. Lex/Yacc are DSLs that output C code to integrate into a larger program), you don’t have the data shuffling problems. But you still have issues if you want to eg. write a function that mixes multiple such DSLs. In other words, stand-alone compilers that inject code into your main language are only suitable for problems that are sufficiently large and separated from other problems you might build a DSL for.

                                            With macro-based embedded languages, you can sidestep all of those problems. Macro-based embedded languages can simply use the features of the host language, maybe substituting the one feature they want to change. You mention delaying code – i.e. changing the host language’s evaluation order. This is only one aspect of the host language out of many you might change with macros. Macro extensions can be easily embedded within each other and used together. The only data wrangling at boundaries you need to do is if your embedded language uses different, custom data structures. But this is just the difference between two libraries in the same language, not like the low-level serialization data wrangling you need to do if you have separate interpreters. And macros can tackle problems as large as “I need a DSL for parsing”, like Yacc, down to “I want a convenience form so I don’t have to write this repeating pattern inside my parser”. And you can use one macro inside another with no problem. (That last sentence has a bit of ambiguity – I mean that users can nest arbitrary macro calls in their program. But also you can use one macro in the implementation of another, so… multiple interpretations of that sentence are correct.)

                                            To end, I want to comment that macro systems vary a lot in expressive power and complexity – different macro systems provide different capabilities. The OP is discussing Common Lisp, which inhabits a very different place in the “expressive power vs complexity” space than the macro system I use most (Racket’s). Not to disparage the Common Lisp macro system (they both have their place!), but I would encourage anyone not to come to conclusions about what macros can be useful for or whether they are worthwhile without serious investigation of Racket’s macro system. It is more complicated, to be certain, but it provides so much expressive power.

                                            1. 4

                                              I mean, strictly, no - but that’s like saying “if you can write machine code, do you really need Java?”

                                              (Edited to add: see also Greenspun’s tenth rule … if you were to build a macro system out of such tooling, I’d bet at least a few pints of beer that you’d basically wind up back at Common Lisp again).

                                              1. 2

                                                First-class packages are the most underrated feature of lisp. AFAIK only perl offers them fully

                                                OCaml has first-class modules: https://ocaml.org/releases/4.11/htmlman/firstclassmodules.html

                                                I’m a lot more familiar with them than I am with CL packages though, so they may not be 100% equivalent.

                                                1. 2

                                                  I’m not claiming to speak for all lispers, but the question

                                                  Here is my question for lispers: if you can use lex/yacc and can write a full-fledged interpreter, do you really need macros?

                                                  might be misleading. Obviously you don’t need macros, and everything could be done some other way, but macros are easy to use while also powerful, and they can be dynamically created or restricted to a lexical scope. I’ve never bothered to learn lex/yacc, so I might be missing something.

                                                1. 2

                                                  TIL Common Lisp has aliases for car and cdr: first and rest, respectively. I guess this was made as a minor change to be more approachable? Seems like kind of a pointless feature, to be honest.

                                                  1. 5

                                                    I’ve often heard that first (second, third, …) and rest should be used by default and that c[ad]*r are just part of CL’s heritage, and are provided as legacy functions.

                                                    1. 8

                                                      After seeing how much my TLA+ students struggled with using /\ instead of &&, I’ve come to the opinion that any unnecessary naming differences should be ruthlessly purged.

                                                      1. 3

                                                        That struggle only lasts for a fraction of a generation — later generations may be confused why there’s & and && instead of and.

                                                        1. 1

                                                          As someone who somewhat-recently was a TLA+ student myself: the difference between those two is not unnecessary, because it avoids the classic student problem of “these symbols look similar, therefore the ideas must be similar” - /\ in TLA+ is different than && in JavaScript and C. It was very helpful for me to use the former instead of the latter.

                                                        2. 8

                                                          Not really. first, rest, etc. should be used when dealing with lists. cons cells have more uses than constructing lists (deques, trees, alists, etc.). In those scenarios it is preferred to use car/cdr.
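
                                                          For example, in an alist each cons is a key/value pair rather than the head and tail of a list, so car/cdr say what is meant while first/rest would read oddly:

                                                              (defparameter *ages* '((alice . 30) (bob . 41)))
                                                              (car (assoc 'bob *ages*)) ; => BOB  (the key)
                                                              (cdr (assoc 'bob *ages*)) ; => 41   (the value, not a "rest of the list")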

                                                      1. 1

                                                        I’ve written shell scripts in node in the past for one-off tasks when the project already uses node, e.g. git pre-commit hooks to run ESLint or CSSLint on the changed files.

                                                        I say if you don’t know bash/sh and already know node, you do you: continue using node for scripts. It doesn’t have a slow start-up and it is easy to read/write from/to STDOUT/STDIN.

                                                        1. 2

                                                          Well, I don’t know if I should, but I sure hope that someone writes a Wayland compositor in Common Lisp. Not Lisp bindings, but Lisp all the way down, or roughly as far down as I get today with StumpWM & X11. I like being able to dynamically extend my window manager in Lisp, and I would prefer to avoid C & C++. It is amazingly productive to be able to delve deeply into the call stack of a running Lisp system; being stuck using a bunch of bindings for a static library would be a real regression.

                                                          1. 2

                                                            So would I, and I’d like to do it someday (I gave it a couple of Sundays last year): Lisp all the way down et al. Except an approach like the one CLX took is not possible in Wayland. For one, because a compositor takes on some responsibilities that the X server had, you need to allocate memory buffers for the client to draw in, which means using FFI bindings for EGL and DRM. It appears that you also need to use libwayland for allocation [0]. Other than that you could write the protocol implementation in CL.

                                                            But more importantly, what I’d really like is a desktop in CL.

                                                          1. 10

                                                            Nice overview of how the email flow works, though I don’t agree with some things.

                                                            The only reason that merge button was used on Github was because Github can’t seem to mark the pull request as merged if it isn’t done by the button.

                                                            No, it does. I merge locally all the time, and GitHub instantly marks a PR as merged when I push. In fact, I primarily host repos at GitLab and keep a mirror on GitHub. I accept PRs on GitHub (the mirror) as well to keep things easy for contributors. I manually merge these locally, and push the updated branch to GitLab. GitLab in turn syncs the GitHub mirror, and the PR on GitHub is marked as merged in a matter of seconds.

                                                            …we have to mess with the commits before merging so we’re always force-pushing to the git fork of the author of the pull request (which they have to enable on the merge request, if they don’t we first have to tell them to enable that checkbox)

                                                            Yes, of course you have to mess with them. But after doing that, don’t even bother pushing to the contributor’s branch. Just merge it into the target branch yourself and push. Both GitLab and GitHub will instantly mark the PR as merged. It is the contributor’s job to keep their branch up to date, and they don’t even have to for you to be able to do your job.
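
                                                            i.e., something like this sketch (PR number hypothetical): as long as the PR’s commits reach the target branch unrewritten, GitHub matches them up on push.

                                                                # GitHub exposes each PR head under refs/pull/<id>/head
                                                                git fetch origin pull/123/head:pr-123
                                                                git checkout main
                                                                git merge --no-ff pr-123   # keep the PR commits as-is so GitHub can match them
                                                                git push origin main       # the PR flips to "merged" on push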

                                                            I understand that you like the email workflow, which is great. But I don’t agree with some arguments for it that are made here.

                                                            Thanks for sharing though!

                                                            1. 7

                                                              No, it does. I merge locally all the time, and GitHub instantly marks a PR as merged when I push.

                                                              In the article they talk about wanting to rebase first. If you do that locally, GitHub has no way to know that the rebased commits you pushed originally came from the PR, so it can’t close them automatically. It does work when you push outside GitHub without rebasing tho.

                                                              1. 2

                                                                IIRC, can’t you rebase, (force) push to the PR branch, then merge and push and it’ll close? More work in that case but not impossible. Just if you rebase locally then push to ma(ster|in) then github has no easy way to know the pr is merged without doing fuzzy matching of commit/pr contents which would be a crazy thing to implement in my opinion.

                                                                1. 3

                                                                  Typically the branch is on someone else’s fork, not yours.

                                                                  1. 2

                                                                    In GitHub, you can push to another’s branch if they have made it a PR in your project. Not sure if force push works, never tried. But I still feel it’s a hassle: you need to set up a new remote in git.

                                                                    In Gitlab, apparently, you have to ask the branch owner to set some checkbox in Gitlab so that you can push to the branch.

                                                                    1. 3

                                                                      In Gitlab, apparently, you have to ask the branch owner to set some checkbox in Gitlab so that you can push to the branch.

                                                                      That is the case in GitHub as well. (Allow edit from maintainers). It is enabled by default so I’ve never had to ask someone to enable it. Maybe it is not enabled by default on GitLab?

                                                                      1. 1

                                                                        I can confirm that it is disabled by default on GitLab.

                                                            1. 4

                                                              It is a good guide. I do have some suggestions:

                                                              I fully agree with what @ane says about line-numbers. Line numbers are for machines not humans. In every editor I know you can jump to the path+line number in one mouse key or keystroke (in vim it is gf f/e).

                                                              Besides that, adding /usr/local/bin/ to the exec-path in your Emacs init file is a half measure. A better, more comprehensive solution would be to add it to the system-wide $PATH by adding it to /etc/paths.d/.

                                                              Finally, rainbow delimiters are not worth it. I used them for some years and they look nice, but besides highlighting the matching paren, which the built-in show-paren-mode already does, coloring each paren differently only adds noise.

                                                              1. 2

                                                                A better, more comprehensive solution would be to add it to the system-wide $PATH by adding it to /etc/paths.d/

                                                                Doesn’t that not work inside Emacs anyway? IIRC, you need to jump through an extra hoop to make Emacs use the environment PATH.

                                                                1. 2

                                                                  IIRC, you need to jump through an extra hoop to make Emacs use the environment PATH.

                                                                  You don’t need to jump through any extra hoops. Emacs is a normal process.

                                                                  The issue people often run into on macOS is the following. The place where developers tend to modify their path is ~/.bashrc or ~/.profile. The problem is that those files are only run when you open a terminal. So everything works OK when you are running programs inside the terminal, including emacs (you can, for example, start the GUI version of Emacs from the terminal in the background with $ emacs & and it will pick up your augmented $PATH).

                                                                  However, when you start a program from Spotlight, Finder or the Dock, your path won’t be augmented by any code you wrote in ~/.bashrc or ~/.profile. This is not specific to emacs. By adding the path to /etc/paths.d, all the programs in your system will have /usr/local/bin in their $PATH.
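
                                                                  On macOS that is a file under /etc/paths.d with one directory per line; path_helper merges it into $PATH for login sessions. The file name is arbitrary, e.g.:

                                                                      echo /usr/local/bin | sudo tee /etc/paths.d/local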

                                                                  1. 2

                                                                    @ragnese is right. Adding /usr/local/bin to /etc/paths.d does not help Emacs locate SBCL. In fact, /usr/local/bin is already in /etc/paths on macOS anyway, so adding it again to /etc/paths.d is redundant. Despite /usr/local/bin being present in both /etc/paths and /etc/paths.d, Emacs’ exec-path still does not contain /usr/local/bin.

                                                                    1. 1

                                                                      I haven’t used macOS for a couple of years, but I’ve never had to add anything to the exec-path, especially when it is already in $PATH. /usr/local/bin was not in /etc/paths when I did use it.

                                                                      Emacs exec-path still does not contain /usr/local/bin.

                                                                      Given that the default value of exec-path is initialized from $PATH, I would find this surprising. Just to double check: if you run M-: (getenv "PATH"), do you see /usr/local/bin there?

                                                              1. 9

                                                                For those who believe that past decisions are an indicator for future decision making quality, this response to the issue …

                                                                Chrome is not interested in this. The XML parts of our pipeline are in maintenance mode and we would love to eventually deprecate and remove them, or at least replace them with something that generates less security bugs. Increasing the capabilities of XML in the browser runs counter to that goal.

                                                                … was written by the person of fantasy-land promises and Array.includes fame.

                                                                1. 3

                                                                  What’s wrong with Array#includes?

                                                                  1. 2

                                                                    I found the Array.includes proposal: https://github.com/tc39/Array.prototype.includes/

                                                                    I’m curious what the fantasy-land promises referred to are!

                                                                    1. 2

                                                                      “Fantasy-land promises” must be https://github.com/fantasyland/fantasy-promises, though I don’t know why they’re notable.

                                                                      1. 3

                                                                         They are probably referring to the origin of Fantasy Land. It started as a response to this comment, iirc: https://github.com/promises-aplus/promises-spec/issues/94#issuecomment-16176966

                                                                        1. 3

                                                                          The last few replies are great for people who don’t want to read the whole thread: https://github.com/promises-aplus/promises-spec/issues/94#issuecomment-415222385

                                                                           The things done wrong here are fundamental to programming/computation itself, so there will never be a time when it is not wrong. And there will probably never be a time when it doesn’t adversely affect programming in JavaScript/TypeScript.

                                                                  1. 8

                                                                    I see some constant trend to compare stuff in Emacs with external packages (e.g. Magit vs vc, Flycheck vs Flymake, Projectile vs project.el), which I find slightly bizarre given the trend in Emacs to move as many built-in packages as possible to GNU ELPA,

                                                                    Flymake and project.el are both ELPA packages, actually. vc has been around for three decades and it predates package.el.

                                                                    • drop the contributor agreement

                                                                    The FSF legal counsel has declared that they cannot enforce the GPL unless the copyright belongs entirely to the FSF. This is why the contributor agreement is needed, and anyway, it’s just one email to the FSF, sign a paper, email a scan of the paper (you can get away with using your smartphone).

                                                                    • discuss ideas in an (modern) issue tracker, instead of on a mailing list

                                                                    What’s wrong with mailing lists? Issue trackers like Github are quite bad for discussion, which is why you always need something like Gitter or IRC to provide a discussion forum. A mailing list can be used for both development (patches) and discussion.

                                                                    • apply less political activism and more pragmatism in the conversation around new ideas/features

                                                                    Being free software and adhering to the free software principles is an important aspect of Emacs. It is a good idea to consider the impact of ideas on user freedoms. It’s not done whimsically, freedom is important.

                                                                    1. 4

                                                                      What’s wrong with mailing lists?

                                                                        I think my main problem with mailing lists is the lack of traceability in the history. It’s not a logical consequence of mailing lists (you can certainly do it), but it doesn’t seem to happen.

                                                                        I spend a fair amount of time digging through GCC’s history, and when I find a patch touching code that matters to what I’m looking at, the commit message is not enlightening. Why was the change made? Is this associated with a bug? What was the design discussion around it? (But ooo – the ChangeLog was updated! “Ditto!”) Rarely are those answered, and even rarer is an identifier or link to a bug report or a mailing list discussion. I have to go digging through mailing list posts (with a really substandard search mechanism) to find some background. Even then, there might not be any.

                                                                      Say what you want about issue trackers, but having every commit associated with some background knowledge is really, really useful in the long run.

                                                                      1. 2

                                                                          Oh, issue trackers are absolutely essential. You definitely need one; a mailing list isn’t enough. The kind of discussion I was talking about, and the place where mailing lists make the most sense, is informal, meta discussion. Places like github don’t have anything for that, and its issue tracker is not good as a forum.

                                                                        1. 2

                                                                          Places like github don’t have anything for that

                                                                            They do, it’s called Discussions.

                                                                          https://docs.github.com/en/free-pro-team@latest/github/building-a-strong-community/about-team-discussions

                                                                          But I agree that the flat comment hierarchy is not ideal for discussions that tend to branch out.

                                                                      2. 3

                                                                        Flymake and project.el are both ELPA packages, actually. vc has been around for three decades and it predates package.el.

                                                                          I’m not sure if you’re trying to contradict me here, because you’re just repeating what I said. I’m well aware they are not ELPA packages, even if this wasn’t always the case. That’s part of the trend I mentioned, which you actually quoted.

                                                                        The FSF legal counsel has declared that they cannot enforce the GPL unless the copyright belongs entirely to the FSF. This is why the contributor agreement is needed, and anyway, it’s just one email to the FSF, sign a paper, email a scan of the paper (you can get away with using your smartphone).

                                                                          Well, I’m not a legal expert, but I do wonder what that means for every other GPL project that doesn’t have an explicit copyright assignment agreement. I did sign the agreement a long time ago, and I recall I waited so long for the confirmation that by the end I didn’t remember I had done it. Very few people are going to spend so much time to contribute to a project unless they are really invested in it. At the very least the FSF should switch to instant digital signatures or something like that.

                                                                        What’s wrong with mailing lists? Issue trackers like Github are quite bad for discussion, which is why you always need something like Gitter or IRC to provide a discussion forum. A mailing list can be used for both development (patches) and discussion.

                                                                          Mailing lists provide no structure, and people who branch different threads off the main one create a total mess. You don’t need a chat to have a conversation – you can just use an issue’s comments (as one example). A lot of huge projects are doing this and it works fine for them. I’m not saying “drop email completely”, I’m saying “don’t use it for things it is not optimal for”.

                                                                        Being free software and adhering to the free software principles is an important aspect of Emacs. It is a good idea to consider the impact of ideas on user freedoms. It’s not done whimsically, freedom is important.

                                                                        True. But at the end of the day you also have to remember that you’re building software that is meant to solve certain real-world problems. Your political agenda will mean nothing if it drives an excellent piece of software into the ground. I used to be a big believer in RMS and the FSF 20 years ago, but seeing how his leadership style and hardline vision negatively affected a lot of GNU projects, I’m no longer 100% sure that this is the way to go. Great software needs users, it needs mindshare and it needs traction. It cannot exist on top of a political agenda alone.

                                                                        1. 3

                                                                          I’m not sure if you’re trying to contradict me here, because you’re just repeating what I said. I’m well aware they are not ELPA packages, even if this wasn’t always the case. That’s part of the trend I mentioned, which you actually quoted.

                                                                          You mean they are ELPA packages? I see now what you meant, but given the previous sentence I was under the assumption that you didn’t think they were ELPA packages.

                                                                          At the very least the FSF should switch to instant digital signatures or something like that.

                                                                          I think you can get away with just using PDF digital signatures. No need to print anything.

                                                                          Mailing lists provide no structure and people who branch out different threads from the main one create a total mess. You don’t need a chat to have a conversation - you can just use an issue’s comments (as one example). A lot of huge projects are doing this and it works fine for them. I’m not saying “drop email completely”, I’m saying “don’t use them for things they are not optimal for”.

                                                                          Well, that depends on your email client. Email supports threading just fine. A nice alternative is to use NNTP (news) via Gmane, because then you can just fetch the entire thread of a single message. You can do this in Emacs right now: hit M-x gnus RET B nntp RET news.gmane.io and pick emacs.devel from the list.

                                                                          You can also use sourcehut’s lists which provide an excellent web user interface for mailing lists, with support for threads, and replying on the web. It’s still just email underneath.

                                                                          True. But at the end of the day you also have to remember that you’re building software that is meant to solve certain real-world problems. Your political agenda will mean nothing if it drives an excellent piece of software into the ground. I used to be a big believer in RMS and the FSF 20 years ago, but seeing how his leadership style and hardline vision negatively affected a lot of GNU projects, I’m no longer 100% sure that this is the way to go. Great software needs users, it needs mindshare and it needs traction. It cannot exist on top of a political agenda alone.

                                                                          I would argue that GNU Emacs has survived close to 40 years (and Emacs itself close to 50 years) because it is free software. In the grand scheme of things adding support for LSP or native JSONRPC—although right now they are very welcome additions—will be yet another changelog entry and a historical footnote.

                                                                          Free software matters — a large chunk of the world relies on GNU technology (GCC, coreutils, autotools, etc.) to do their computing. They did not survive on technical merit alone, they survived because they were free software first, and good software second.

                                                                          I predict in 10 years things like Atom or Sublime Text will have fallen into obscurity, but Emacs, Vim, other free software will live on, and prosper.

                                                                          1. 2

                                                                            I’m sure that Emacs will be around for a very long time, but it’s already a niche editor (unlike vim), and with the way the project is stewarded that’s not going to change any time soon. There’s a big difference between surviving and thriving, and my preference for Emacs would be the latter. As noted here, VS Code is winning the hearts of most developers these days, and Emacs has been stuck at 5% for ages. That’s not going to get any better unless something fundamentally changes. Microsoft might have been an evil enterprise in the past, but today they are stewarding their open-source projects better than the FSF, hands down. It’s really not all about money and resources; it’s about having the right mindset and the right attitude.

                                                                            1. 2

                                                                              I generally agree, though vscode is not my favorite thing.

                                                                              I’m reminded more of things like RMS wanting not to pull lldb support into gud because (I’m paraphrasing only slightly) “llvm was trying to undermine the gpl”. The FSF can do whatever it wants, and I am generally ok with things, but the past 20 years haven’t impressed me much with their ability to reflect upon their processes and attempt to bring more people into the fold.

                                                                              I predict that the GPL and even the FSF will take a back seat to more MIT/BSD/related-licensed things, due to their having more implicit freedom and less overall ideological drama directed at users of the code (programmers). Think embedding llvm+clang into lldb, that kind of stuff; things that gcc/gpl/fsf literally won’t allow technically, due to ideals. Imagine emacs with a builtin compiler for c/c++/rust/etc….

                                                                              Prosper? I can’t see it happening given the current mentality of the FSF. It’s too extremist in its ideals. They may even be “right”, for whatever definition of right that is. But it seems to me to be a case of winning the battle but losing the war with their current path.

                                                                              It used to be that GPL/GNU software wasn’t good software “second”; it used to connote an air of quality on its own. Now, however… it just seems like being read into some ecclesiastical cult, with some of the rituals involved.

                                                                              1. 1

                                                                                Emacs is not a popular editor by any means, but the goal of Emacs, as the flagship editor of the GNU project, is to be the best free software editor. That is its first goal; everything else is secondary. Once you understand that Emacs is not going to sacrifice respect for user freedoms in pursuit of popularity or technical superiority, it makes sense how the project is governed. When it comes to good features, Emacs does not usually reject them if they do not conflict with its free software ideals; e.g. see the recent development of native compilation of Elisp, native JSON-RPC, etc.

                                                                                Emacs has been forked in the past in pursuit of technical or practical superiority; see XEmacs, SXEmacs, etc. Those projects are no longer alive.

                                                                                To me, Emacs should stay true to its pursuit of being free software first. I don’t think it’s particularly beneficial for Emacs to try to be the most popular editor if it means letting go of its founding principle: respecting user freedom.

                                                                        1. 3

                                                                          Note that AGPL can still be exploited by cloud providers, and unfortunately it doesn’t always play well with other open-source projects that have more permissive licenses.

                                                                          1. 9

                                                                            what do you mean that it “can still be exploited by cloud providers”?

                                                                            1. 2

                                                                              If you’re really curious, read up on why mongodb, confluent and redis (among others) changed their licenses to ones that aren’t approved by the OSI.

                                                                              1. 3

                                                                                so by “exploit” you mean they can benefit from it for free, while sharing any modifications

                                                                                1. 3

                                                                                  No, I mean they can drive the authoring entity out of business, without ever having to make any modification in the first place.

                                                                                  1. 1

                                                                                    Like re-implementing the project and offering it as a service with a compatible interface?

                                                                                    1. 3

                                                                                      Sounds like Google vs Oracle =)

                                                                              2. 1

                                                                                I think they are talking about one of the scenarios the article explicitly wants to avoid.

                                                                                We want to prevent corporations from offering Plausible as a service without contributing to the open source project

                                                                                With the AGPL, they are only forced to share the code; they don’t have to actually improve it. After all, Plausible is selling a support service, not software. A bigger company can offer the same service.

                                                                                1. 4

                                                                                  plausible doesn’t seem to be worried about that, since they are moving to the AGPL. i would think the developer of a product would have an edge in the support market over other companies offering support for a product they don’t develop.

                                                                                  but i see how any use of a product could be considered “exploitation.” it’s just the nature of free software that anyone can use it and modify it as they wish.

                                                                                  1. 4

                                                                                    There doesn’t seem to be a license that’s accepted in the open source world and that prevents the cloud companies from offering the product as a service. What MongoDB and others did doesn’t seem to have been well received, even though I do understand their concerns and think that there’s a need for a license like that.

                                                                                    AGPL at least makes the playing field a bit more even and fair, as a large corporation cannot just take from us but has to be clear about the relationship, give us credit and open any of their modifications. Then it’s up to us to make sure we communicate well, so people are aware of what’s happening and can take that into consideration when they’re making a choice of whom to use.

                                                                                    1. 1

                                                                                      Nothing stops someone from hosting a managed service and also releasing all of their changes. If the win is the actual hosting, then that is the actual value. The only thing actually stopping them is not wanting to release the code, which is kinda ironic.

                                                                                      1. 1

                                                                                        With the AGPL they have to allow any user to get a copy of the source. This isn’t quite the same as “contributing” (upstreaming changes), but since any of those users could send the changes upstream, many consider it “good enough for rock ’n roll” to say it requires contributing.

                                                                                    2. 4

                                                                                      Do you have an example of other permissive licenses that conflict?

                                                                                      1. 3

                                                                                        To the best of my understanding, a project that’s MIT or Apache 2.0 cannot use a GPL or AGPL project, because xGPL licenses are copyleft and effectively turn any project that uses them into xGPL as well.

                                                                                        If the goal is mainly to prevent exploitation by the big players, then it’s a bit like burning your home to get rid of the ants. There have been attempts to produce licenses that are better suited for this purpose, however most of them end up doing it by “discriminating between fields of endeavor” (e.g. cloud hosting), and so the OSI deems them “not open-source”, but rather “source available”.

                                                                                        1. 4

                                                                                          An MIT-licensed project may have an AGPL dependency, but the distributed combination (or binary, when linking; the exact artifacts depend on the stack) will be effectively AGPL. Some projects even have optional dependencies based on the license you want for your artifacts.

                                                                                          Having an artifact be AGPL is only an issue if you plan to distribute it as “closed source”.

                                                                                          1. 1

                                                                                            Yes, it means every project that depends on you must be open-source as well, including small start-ups that try to remain competitive using their unique technology. Perhaps that’s what you want, but it’s not necessarily the best scenario for the world of open-source, or the world in general.

                                                                                            1. 5

                                                                                              Really not sure how preventing a startup from taking our freely given work and using it to produce something that is not open source is bad for anyone? That seems like the goal. They can release their code, or spend the money to write their own and not steal from the public commons.

                                                                                              1. 4

                                                                                                i think the mindset is that anything that could prevent an entrepreneur from bringing a product to market could be bad because the product might end up helping people. some people have that mindset.

                                                                                                1. 1

                                                                                                  I could try to argue the point, but instead let me ask you: Why do the MIT and Apache licenses exist in the first place, and why are they so popular? And why have they been gaining popularity every year in the last decade? (see: https://resources.whitesourcesoftware.com/blog-whitesource/open-source-licenses-trends-and-predictions)

                                                                                                  According to your logic, most open-source code should choose to be GPL, no?

                                                                                                  1. 1

                                                                                                    because more and more open source projects are funded by tech companies that would like to use them in their proprietary projects

                                                                                                    1. 1

                                                                                                      So 70% of open source is funded by commercial tech companies?

                                                                                                      1. 1

                                                                                                        i would think less

                                                                                            2. 2

                                                                                              Ah, yes, that sounds right. I was worried that there was maybe something I didn’t know about in case the licenses are combined the other way around. I.e. an (A)GPL project using an MIT/Apache 2.0 library should be fine, I think?

                                                                                              I understand the concern about using AGPL for libraries, frameworks, etc, but it doesn’t look like a bad pick for application-type stuff, like OP’s product. The only type of derivative would be a fork/branch.

                                                                                        1. 5

                                                                                          I wonder whether they got permission from all their open source contributors to re-license the code? Or maybe they use a CLA like Shopify and co. do, where you waive all your rights to the code you own once it’s merged to the main tree?

                                                                                          1. 12

                                                                                            It sounds like it was previously MIT, and if I understand the law correctly you can make modifications to MIT software and release the modified version under the GPL without issue (so long as you preserve the original MIT license text).

                                                                                            1. 3

                                                                                              Hmm… but relicensing code requires the permission of the code’s author, no? For the company’s own code that’s probably fine, but what about any outside contributors that might not agree with the license change? They might have the right to rescind their code.

                                                                                              1. 22

                                                                                                They gave that permission by releasing it under the MIT License. It is when you go in the ‘other’ direction that you need to ask for everyone’s consent/permission. E.g. Racket had a huge multiyear thread asking everyone if they were OK with changing from LGPL to MIT.

                                                                                                Btw, I remember in the 00’s some BSD developers complained that Linux developers would take their driver code, use it and license it under the GPL, making it impossible to merge any improvements back upstream.

                                                                                                https://opensource.stackexchange.com/a/5833

                                                                                                1. 8

                                                                                                  Btw, I remember in the 00’s some BSD developers complained that Linux developers would take their driver code, use it and license it under the GPL, making it impossible to merge any improvements back upstream.

                                                                                                  I mean, isn’t that exactly the purpose of MIT? “Here’s some code, do whatever you want with it, you don’t have to contribute improvements back”.

                                                                                                2. 12

                                                                                                  Technically the old code would still be MIT and the new code would be AGPL. However, since AGPL has more strict requirements the whole project is effectively AGPL. They’d still need to preserve the original MIT license text though.

                                                                                                  1. 7

                                                                                                    The code’s authors licensed their code under the MIT license, which allows that code to be relicensed by anyone else under new terms (such as the AGPL).

                                                                                                    1. 1

                                                                                                      No, re-licensing is not permitted. If I write file A of project X under MIT, and someone else writes file B under the AGPL, then another user who gets A and B would get both under the AGPL; however, they could still (in general) use A according to MIT.

                                                                                                      Whether this makes a difference or not will depend a lot on the project as a whole, and on the content of A.

                                                                                                      A could be a self-contained C allocator, or a clever implementation of a useful ADT. Or it could be a small part of what B provides, like an implementation of a print macro/trait for Canadian post codes.

                                                                                                      1. 2

                                                                                                        Sure. But say you write some file and license it publicly under the MIT license. I can then take that same file and, in accordance with the terms of the former license, license it to someone else under the terms of the AGPL license. They will then not be able to use it under the terms of the MIT license.

                                                                                                        In practice, this is not such a big deal, since the original version is likely still available and indistinguishable from the version I provide. However if I change something small—like, say, the wording—then my changed version is distinct from your original, and if I license it as AGPL it won’t be possible to use it under the terms of the MIT license.

                                                                                                        1. 2

                                                                                                          No, as far as I understand this is not correct: a BSD or MIT license is connected to copyright, and you need to make substantial changes in order to claim copyright. Without copyright you cannot re-license.

                                                                                                          Remember: in most jurisdictions, the default is copyright. If I write a poem here, you could quote me, but not publish my poem; you have no license to redistribute it. If I explicitly give you a license, you cannot change that license.

                                                                                                          This does get a bit muddy with the various viral licenses you point out, but as far as I understand, mixing file A under MIT with file B under the GPL (or AGPL) does not really allow you (the distributor of A and B) or the recipient of A to re-license A.

                                                                                                          Your downstream users would/should still get A with its MIT copyright notice, and will be free to distribute/use A (and only A) under MIT.

                                                                                                          Doing so would not make the GPL license for A and B invalid.

                                                                                                          I.e.: you include an MIT malloc in your “ls” utility. A user who gets the source from you could go in and see that, OK, this malloc bit (assume it’s not modified), I can use that as MIT.

                                                                                                          This is because you, as the distributor, do not have copyright over the upstream MIT bit.

                                                                                                          People will claim differently, and I don’t think it’s been tested in court, but AFAIK this is how the legal bits land.

                                                                                                          1. 8

                                                                                                            You don’t need to claim copyright over something to relicense it. You can grant a license to a copyrighted work if your own license to that work permits it, which MIT explicitly does.

                                                                                                            including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software

                                                                                                            1. 3

                                                                                                              Ah, thank you. I wasn’t aware of this difference between MIT and BSD. I suppose I’ll have to check Apache too.

                                                                                                              Some more on mit vs bsd: https://opensource.stackexchange.com/questions/217/what-are-the-essential-differences-between-the-bsd-and-mit-licences

                                                                                                          2. 1

                                                                                                            Note that this is different from an explicit grant of re-licensing, like the “or any later version” provision that GPL v2 (I think) has.

                                                                                                            So if I get a gpl2 file I can choose to distribute it as gplv3.

                                                                                                1. 5

                                                                                                  This is what happens when somebody thinks that t-shirt cannons are fun.

                                                                                                  I’m now reflecting on the fact that the incentive structure is not meaningfully different from GSoC, which also sees lots of spam applications and low-quality contributions. GSoC is bad enough that some organizations, like X.org, run their own programs; X.org has their Endless Vacation of Code precisely to work around failures in GSoC.

                                                                                                  I wonder whether better incentive structures exist. For corporations, these outreach programs are meant to improve optics and increase the number of prospective job applicants. Even for X.org and other community groups, the code artifacts are secondary to the goal of promoting neophyte students into seasoned regular contributors. Perhaps we do not need to focus on production of code, then, as long as we encourage other aspects of being skilled at working with code. Skills like reading, debugging, formal (symbolic) analysis, knowing abstract algorithms and data structures, etc. could be promoted instead. Learning to write code and documentation would be part and parcel of a more holistic training regime.

                                                                                                  1. 2

                                                                                                    GSoC is bad enough that some organizations, like X.org, run their own programs; X.org has their Endless Vacation of Code precisely to work around failures in GSoC.

                                                                                                    AFAIU that is not the case; in fact, it is the opposite. The Endless Vacation of Code was started because of the success of GSoC. At least that is what I understood from this year’s XDC; there was a talk about GSoC/EVoC specifically. https://youtu.be/b2mnbyRgXkY?t=16753

                                                                                                    Besides that, I’ve seen great work come out of GSoC, like a new register allocator for SBCL, or improved Unicode support (including different normalization algorithms).

                                                                                                    The structure is completely different from Hacktoberfest. First, projects have to apply to GSoC, which requires consent. Second, the interaction happens over a period of 3 months, with the help of a mentor who volunteers.

                                                                                                  1. 13

                                                                                                    IRC’s lack of federation and agreed-upon extensibility is what drove me to XMPP over a decade ago. Never looked back.

                                                                                                    1. 12

                                                                                                      Too bad XMPP was effectively embraced/extended/extinguished by Google, in no small part thanks to the lack of message acknowledgement in the protocol, which translated into lost messages and zombie presence. This was especially bad across servers, so it paid to be on the same server as the other endpoint (which typically meant Google).

                                                                                                      I did resist that, but unfortunately most of my contacts were on the Google server, and I got isolated from them when Google cut the cord. Ultimately, I never adopted Google Talk (out of principle), but XMPP has never been the same since.

                                                                                                      End-to-end encryption is also optional and not the default, which makes XMPP not much of an improvement over IRC. My hopes are on Matrix taking off, or on a truly better (read: fully distributed) replacement like Tox gaining traction.

                                                                                                      1. 5

                                                                                                        Showerthought: decentralised protocols need to have some kind of anti-network effect baked into them somehow, where there’s some kind of reward for staying out of the monoculture. I dunno what this actually looks like, though. Feels like the sort of thing some of the blockchain people might have a good answer for.

                                                                                                        1. 7

                                                                                                          That’s a fascinating idea and I disagree. :D Network effects are powerful for good reason: centralization and economies of scale are efficient, both in resources like computing power, and in mental resources like “which the heck IRC network do I start a new channel on anyway?”. What you do need are ways to avoid lock-in. If big popular network X starts abusing its power, then the reasonable response is to pick up your stakes and go somewhere else. So, that response needs to be as easy as possible: low barriers to entry for creating new servers, low barriers to moving servers, low barriers to leaving servers.

                                                                                                          I expect for any human system you’re going to end up with something like Zipf’s law governing the distribution of who goes where; I don’t have a good reason for saying so, it’s just so damn common. Look at the population of Mastodon servers, for example (I once saw a really good graphic of the sizes of servers and the connections between them, drawn as a graph of interconnected bubbles; I wish I could find it again). In my mind a healthy distributed community will probably have a handful of major servers/networks/instances, dozens or hundreds of medium-but-still-significant ones, and innumerable tiny ones.
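
                                                                                                          A toy sketch of why I’d expect that, assuming only that newcomers mostly join a server in proportion to its current size (the 1% self-hosting rate and all other numbers are made up):

                                                                                                              import random
                                                                                                              from collections import Counter

                                                                                                              # Toy preferential-attachment model: each newcomer either
                                                                                                              # self-hosts (rarely) or joins the server of a randomly chosen
                                                                                                              # existing user, which is the same as picking a server with
                                                                                                              # probability proportional to its size.
                                                                                                              random.seed(1)
                                                                                                              owners = [0]        # server id of each user; one founding user
                                                                                                              n_servers = 1
                                                                                                              for _ in range(100_000):
                                                                                                                  if random.random() < 0.01:   # a few people self-host
                                                                                                                      owners.append(n_servers)
                                                                                                                      n_servers += 1
                                                                                                                  else:
                                                                                                                      owners.append(random.choice(owners))

                                                                                                              sizes = sorted(Counter(owners).values(), reverse=True)
                                                                                                              print(sizes[:5], len(sizes))
                                                                                                              # A handful of giants, some mid-sized servers, and a long
                                                                                                              # tail of tiny ones: a Zipf-ish distribution.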

                                                                                                          1. 3

                                                                                                            More and more these days I feel like “efficiency” at a large enough scale is just another way to say “homogeneity”. BBSes and their store-and-forward message networks like FidoNet and RelayNet were certainly less efficient than the present internet, but they were a lot more interesting. Personal webpages at some-isp.com/~whoever might have been less efficient (by whatever metric you choose) than everyone posting on Facebook and Twitter, but at least they actually felt personal. Of course, I realize that to some degree I’m over-romanticizing the past: culturally, BBSes and FidoNet especially, as well as the pre-social-media internet, were a lot more white, male, and cishet than the internet is today; and technologically, I’d gnaw my own arm off to not have to go back to dialup speeds. And having lowered the bar to publish content on the internet has arguably broadened the spectrum of viewpoints that can be expressed. But part of me wonders whether the internet monoculture we’ve ended up with, where the likes of Facebook basically IS the entire internet to the “average” person, was really necessary to get there.

                                                                                                          2. 3

                                                                                                            I think in a capitalist system this is never going to be enough. What we really need is antitrust enforcement to prevent giant corporations from existing / gobbling up 98% of any kind of user.

                                                                                                        2. 3

                                                                                                          This! Too bad XMPP never really caught on after the explosion of social media; it’s a (near) perfect protocol for real-time text-based communication, and then some.

                                                                                                          1. 21

                                                                                                            It didn’t simply “not catch on”, it was deliberately starved by Facebook and Google, who disabled federation between their networks and everyone else. There was a brief moment around 2010 when I could talk to all my friends on gTalk and Facebook via an XMPP client, so it did actually work.

                                                                                                            (This was my personal moment when I stopped considering Google to be “not evil”.)

                                                                                                            1. 3

                                                                                                              It was neat to have federation with gTalk, but when that died I finally got a bunch of my contacts off Google’s weak XMPP server and onto a better one, and onto better clients, etc. It was a net win for me.

                                                                                                              1. 5

                                                                                                                What are “better clients” these days for XMPP? I love the IDEA of XMPP, but I loathe the implementations.

                                                                                                                1. 6

                                                                                                                  Dino, Gajim, Conversations. You may want to select a suitable server from (or check your server via) https://compliance.conversations.im/ for the best UX.

                                                                                                                2. 5

                                                                                                                  I don’t have that much influence over my contacts :-)

                                                                                                                  1. 6

                                                                                                                    This.

                                                                                                                    Network effects win out over the network itself, every time.

                                                                                                                    1. 1

                                                                                                                      I guess neither do I? That’s why it took Google turning off the server to make them switch

                                                                                                                  2. 3

                                                                                                                    IIRC it was Facebook that was the bad actor: they started letting the communication go only one way, to siphon users from gTalk, and forced Google’s hand.

                                                                                                                    1. 5

                                                                                                                      Google was playing with Google+ at that moment and wanted to build a walled garden, which included a chat app (or apps). They even invented some “technical” reasons why XMPP wasn’t at all workable (after it had been working for them for years).

                                                                                                                      1. 2

                                                                                                                        It was weird ever since Android was released. The server could federate with other servers just fine, but Google Talk for Android spoke a proprietary C2S protocol, because the regular XMPP C2S involves keeping a TCP connection perpetually open, and that can’t be done on a smartphone without unacceptable power consumption.

                                                                                                                        I’m not sure that truly counts as a “good” technical reason to abandon S2S XMPP, but it meant that the Google Talk server was now privileged above all other XMPP servers in hard-to-resolve ways. It made S2S federation less relevant, because servers were no longer interchangeable.

                                                                                                                        1. 1

                                                                                                                          I’m not sure the way GTalk clients talked to their server had anything to do with how the server talked to others. Even if it did, they could’ve treated it as a technical problem that needed solving rather than an excuse to drop the whole thing.

                                                                                                                          1. 2

                                                                                                                            Dropping federation was claimed at the time (fully plausibly, imo) to be about spam mitigation. There was certainly a lot of XMPP spam around that time.

                                                                                                                          2. 1

                                                                                                                            I have been using regular XMPP C2S on my phones over mobile data continuously since 2009, when I got my first smartphone. Battery life has never been an issue. I think the battery-life concern can be true if you have tonnes of TCP connections open, but for one XMPP session the battery impact is a myth.

                                                                                                                        2. 3

                                                                                                                          AFAIK Facebook never had federated XMPP, just a barely working C2S bridge.

                                                                                                                          1. 1

                                                                                                                            To make sure my memory wasn’t playing any tricks on me, I did a quick google search. It checked out:

                                                                                                                            To make Facebook Chat available everywhere, we are using the technology Jabber (XMPP), an open messaging protocol supported by most instant messaging software,

                                                                                                                            From: https://www.facebook.com/notes/facebook-app/facebook-chat-now-available-everywhere/297991732130/

                                                                                                                            I don’t remember the exact move they pulled on Google to siphon users, though, but I remember thinking it was a scummy one.

                                                                                                                            1. 2

                                                                                                                              That link is talking about their c2s bridge. You still needed a Facebook account to use it. It was not federated.

                                                                                                                        3. 2

                                                                                                                          That might be your experience but I’m not sure it’s true for the majority.

                                                                                                                          From my contact list of around 30 people, 20 weren’t using gTalk in the first place (and no one used FB for this; a completely separate type of folks), and they all stopped using XMPP independently, not because of anything Google did. And yes, there were interop problems with those 5, but overall I see XMPP’s decline in popularity as kinda orthogonal to Google, not related.

                                                                                                                          1. 3

                                                                                                                            There’s definitely some truth to that, but still, my experience differs greatly. The majority of my contacts used Gtalk back in the day, and once that was off, they simply migrated to more popular, walled-garden messaging services. That was the point in time when maintaining my own self-hosted XMPP VPS instance became unjustifiable in terms of monthly cost and time, simply because there was no one left I could talk to.

                                                                                                                        4. 4

                                                                                                                          I often hear this, but I’ve been doing most of my communicating over XMPP continuously for almost 20 years, and it just keeps getting better; the community continues to expand and get work done.

                                                                                                                          When I first got a JabberID the best I could do was use an MSN gateway to chat with some highschool pals from Gaim and have them complain that my text wasn’t in fun colours.

                                                                                                                          Now I can chat with most of my friends and family directly to their JabberIDs because it’s “just one more chat app” to them on their Android phone. I can send and receive text and picture messages with the phone network over XMPP, and just this month started receiving all voice calls to my phone number over XMPP. There are decent clients for every non-Apple platform and lots of exciting ecosystem stuff happening.

                                                                                                                          I think good protocols and free movements move slower because there is so much less money and attention, but there’s also less flash-in-the-pan fad adoption, and less being left high and dry by corporate M&A. Over time, when the apps you used to compete with are long gone, you stand as what is left and still working.

                                                                                                                          1. 4

                                                                                                                            My experience tells me that the biggest obstacle to introducing open and battle-tested protocols to the masses is the insane friction of installing yet another app and opening yet another account. Most people simply can’t be bothered with it.

                                                                                                                            I used to do a lot of fun stuff with XMPP back in the day, just like you did, but nowadays it’s extremely hard to get the non-geek people around me to join the bandwagon of pretty much anything outside the usual FAANG mainstream. Open protocols, federation, etc. are a very foreign concept to many ordinary people, for reasons I could never fully grasp.

                                                                                                                            Apparently, no one has ever solved that problem, despite many of them trying so hard.

                                                                                                                            1. 2

                                                                                                                              I don’t really use XMPP, but I know that “just one more chat app” never works for almost anyone in my circle of friends. Unfortunately I still have to use Facebook Messenger to communicate with some people.

                                                                                                                            2. 3

                                                                                                                              When I was building stuff with XMPP, I found it a little difficult to grasp. At its core it was a very good idea, and it continues to drive how federation works in the modern world. I’m not sure if this has to do with the fact that it used XML and couldn’t be transmitted as JSON, protobuf, or any other lightweight transport medium, or whether it had to do with the extensive list of proposals/extensions in various states of completion that made the topology of the protocol almost impossible to visualize. But in my opinion, it’s not a “perfect” protocol by any means. There’s a good (technical) reason why most IM service operators moved away from XMPP after a while.

                                                                                                                              I do wish something would take its place, though.

                                                                                                                              1. 5

                                                                                                                                Meanwhile it takes about a page or two of code to make an IRC bot.
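
                                                                                                                                Something like this minimal sketch, using only the standard library; the server, nick, and channel are made-up placeholders:

                                                                                                                                    import socket

                                                                                                                                    HOST, PORT = "irc.example.net", 6667   # hypothetical network
                                                                                                                                    NICK, CHANNEL = "demobot", "#demo"

                                                                                                                                    sock = socket.create_connection((HOST, PORT))

                                                                                                                                    def send(line):
                                                                                                                                        sock.sendall((line + "\r\n").encode("utf-8"))

                                                                                                                                    send("NICK " + NICK)
                                                                                                                                    send("USER {0} 0 * :{0}".format(NICK))

                                                                                                                                    buf = b""
                                                                                                                                    while True:
                                                                                                                                        data = sock.recv(4096)
                                                                                                                                        if not data:
                                                                                                                                            break
                                                                                                                                        buf += data
                                                                                                                                        while b"\r\n" in buf:
                                                                                                                                            raw, buf = buf.split(b"\r\n", 1)
                                                                                                                                            line = raw.decode("utf-8", "replace")
                                                                                                                                            if line.startswith("PING"):   # answer keepalives or get dropped
                                                                                                                                                send("PONG" + line[4:])
                                                                                                                                            elif " 001 " in line:         # 001 = welcome; safe to join now
                                                                                                                                                send("JOIN " + CHANNEL)
                                                                                                                                            elif "PRIVMSG" in line and "!hello" in line:
                                                                                                                                                send("PRIVMSG {} :hello yourself".format(CHANNEL))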

                                                                                                                                1. 4

                                                                                                                                  XMPP has gotten a lot better, to be fair – a few years ago, the situation really was dire in terms of having a set of extensions that enabled halfway decent mobile support.

                                                                                                                                  It isn’t a perfect protocol (XML is a bit outdated nowadays, for one) – but crucially, the thing it has shown itself to be really good at is the extensibility aspect: the core is standardized as a set of IETF RFCs, and there are established ways to extend the core that protocols like IRC and Matrix really lack.

                                                                                                                                  IRC has IRCv3 Capability Negotiation, sure, but that’s still geared toward client-server extensibility — XMPP lets you send blobs of XML to other users (or servers) and have the server just forward them, and provides a set of mechanisms to discover what anything you can talk to supports (XEP-0030 Service Discovery). This means, for example, you can develop A/V calls as a client-to-client feature without the server ever having to care about how they work, since you’re building on top of the standard core features that all servers support.
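
                                                                                                                                  To make that concrete, here’s roughly what a XEP-0030 service discovery query looks like on the wire; a small sketch that only builds the stanza with Python’s standard library (the JIDs are made up, and a real client would send this over its XMPP stream):

                                                                                                                                      import xml.etree.ElementTree as ET

                                                                                                                                      # Build a XEP-0030 disco#info IQ stanza; the reply's <feature/>
                                                                                                                                      # elements tell you what the target supports.
                                                                                                                                      iq = ET.Element("iq", {
                                                                                                                                          "type": "get",
                                                                                                                                          "from": "alice@example.org/laptop",   # made-up JIDs
                                                                                                                                          "to": "bob@example.net/phone",
                                                                                                                                          "id": "disco1",
                                                                                                                                      })
                                                                                                                                      ET.SubElement(iq, "query",
                                                                                                                                                    {"xmlns": "http://jabber.org/protocol/disco#info"})
                                                                                                                                      print(ET.tostring(iq, encoding="unicode"))

                                                                                                                                  The reply carries a list of feature elements, which is how a client finds out, say, whether the other side can do A/V calls before the server ever gets involved in the details.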

                                                                                                                                  Matrix seems to be denying the idea that extensibility is required, and thinks it can get away with having One True Protocol. I don’t necessarily think this is a good long-term solution, but we’ll see…

                                                                                                                                  1. 4

                                                                                                                                    Matrix has the Spec Proposal process for moving the core spec forward. And it has namespacing (with “m.” reserved as the core prefix; the rest should use a reverse domain like “rs.lobste.*”) for extension. What do you think is missing?

                                                                                                                                    1. 1

                                                                                                                                      Okay, this may have improved since I last checked; it looks like they at least have the basics of some kind of dynamic feature / capability discovery stuff down.

                                                                                                                                    2. 2

                                                                                                                                    IRCv3 has client-to-client tags, which can carry up to 4096 bytes of arbitrary data per message; they can be attached to any message, or sent as a standalone TAGMSG.

                                                                                                                                    This is actually how emoji reactions, thread replies, and things like read/delivery notifications are implemented, and some clients have already built a prototype using it for handshaking WebRTC calls.
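
                                                                                                                                    As a rough sketch, sending an emoji reaction is a single line on the wire (the tag names come from the draft IRCv3 specs; the channel and msgid here are made up):

                                                                                                                                        # Client-only tags start with "+" and are relayed verbatim by
                                                                                                                                        # servers that support the message-tags capability.
                                                                                                                                        def react(sock, channel, msgid, emoji):
                                                                                                                                            line = "@+draft/reply={};+draft/react={} TAGMSG {}\r\n".format(
                                                                                                                                                msgid, emoji, channel)
                                                                                                                                            sock.sendall(line.encode("utf-8"))

                                                                                                                                        # e.g. react(sock, "#demo", "abc123", "👍")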

                                                                                                                                      1. 4

                                                                                                                                        Sure. However, message tags are nowhere near ubiquitous; some IRC netadmins / developers even reject the idea that arbitrary client-to-client communication is a good thing (ref).

                                                                                                                                      You can get arbitrary client-to-client communication with IRCv3 in some configurations. My point is that XMPP allows it in every configuration; in fact, that’s one of the things that lets you call your implementation XMPP :p

                                                                                                                                      2. 1

                                                                                                                                        I have been using XMPP on mobile without issue since at least 2009

                                                                                                                                  2. 2

                                                                                                                                    How is IRC not federated? It’s transparently federated: unlike XMPP/Email/Matrix/ActivityPub/… it doesn’t require a (user, server) tuple for identification, but it still doesn’t have a central point of failure or just one network.

                                                                                                                                    1. 3

                                                                                                                                      IRC is not federated because a user is required to have a “nick” on each network they want to participate in. I have identities on at least 4 different disconnected IRC networks.

                                                                                                                                       The IRC server-to-server protocol that allows networks to scale is very nice, and in an old-internet world of few bad actors, having a single global network would have been great. But since we obviously don’t have a single global network, and since members of different networks cannot communicate with each other, it is not a federated system.

                                                                                                                                      1. 3

                                                                                                                                        Servers in a network federate, true. But it’s not an open federation like email, where anyone can participate in a network by running their own server.

                                                                                                                                    1. 7

                                                                                                                                      Compression is, in a way, equivalent to prediction. Given all of the bits you’ve seen so far, predict the next bit of the file. If you are very confident about your prediction, then you can use many bits to encode the unlikely alternative, and less than one bit to encode the likely alternative. If you know nothing, then 0 and 1 are equally likely, and you have to spend an equal number of bits on each possibility (which can’t be less than 1).

                                                                                                                                      So the compressor has some kind of model of what it expects to see. Predictable inputs produce small outputs, and surprising inputs produce large outputs. The better the model, the harder it is to surprise. Some algorithms like LZMA and PPM make this really explicit, but every algorithm does it in some way. The simplest Huffman codes say “I think that the next byte is drawn from some hard-coded distribution, just like every byte” or “I think that the probability of the next byte being X is proportional to how many Xes I’ve seen in the file so far”. LZ-type algorithms say “I think there’s a good chance that the next several bytes will be identical to a run of several bytes that I’ve already seen”.
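
                                                                                                                                       A minimal sketch of that equivalence, assuming an adaptive order-0 byte model and abstracting the entropy coder away as an ideal −log2(p)-bit charge (the function name is mine):

                                                                                                                                       ```python
                                                                                                                                       import math
                                                                                                                                       from collections import Counter

                                                                                                                                       def coding_cost_bits(data: bytes) -> float:
                                                                                                                                           """Bits an ideal entropy coder would spend on `data`, where the
                                                                                                                                           model predicts each byte from the counts of the bytes seen so far."""
                                                                                                                                           counts, total, cost = Counter(), 0, 0.0
                                                                                                                                           for b in data:
                                                                                                                                               # Laplace smoothing: every byte value starts with a pseudo-count of 1.
                                                                                                                                               p = (counts[b] + 1) / (total + 256)
                                                                                                                                               cost += -math.log2(p)  # a confident (large p) prediction is cheap to encode
                                                                                                                                               counts[b] += 1
                                                                                                                                               total += 1
                                                                                                                                           return cost

                                                                                                                                       print(coding_cost_bits(b"aaaaaaaaaaaaaaaa"))  # repetitive: ~85 bits
                                                                                                                                       print(coding_cost_bits(bytes(range(16))))     # every byte new: ~129 bits,
                                                                                                                                                                                     # slightly worse than 128 bits raw
                                                                                                                                       ```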

                                                                                                                                      1. 2

                                                                                                                                         Indeed, in audio one way to compress is to use a polynomial as a predictor of the waveform and then encode only the difference between the prediction and the actual value, which takes fewer bits to represent (and can later be further compressed, say with Huffman coding).
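
                                                                                                                                         A rough sketch of that idea, in the spirit of FLAC’s fixed predictors; the function name and sample values are mine, and the predictor is a degree-1 polynomial through the two previous samples:

                                                                                                                                         ```python
                                                                                                                                         def residuals(samples):
                                                                                                                                             """Replace each sample with its prediction error under linear
                                                                                                                                             extrapolation from the two previous samples."""
                                                                                                                                             out = list(samples[:2])  # warm-up samples stored verbatim
                                                                                                                                             for i in range(2, len(samples)):
                                                                                                                                                 predicted = 2 * samples[i - 1] - samples[i - 2]
                                                                                                                                                 out.append(samples[i] - predicted)  # small for smooth waveforms
                                                                                                                                             return out

                                                                                                                                         # Near-zero residuals are exactly what a Huffman or Rice coder
                                                                                                                                         # can pack into very few bits.
                                                                                                                                         print(residuals([0, 3, 6, 9, 12, 14, 15]))  # -> [0, 3, 0, 0, 0, -1, -1]
                                                                                                                                         ```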

                                                                                                                                      1. 21

                                                                                                                                        Two things.

                                                                                                                                        1. An Emacs for the web – browser primitives, but with hooks and definitions that allow full user control over the entire experience, integrated with a good extension language, to allow for exploratory development. Bonus points if it can be integrated into Emacs;

                                                                                                                                        2. a full stack language development environment from hardware initialization to user interface that derives its principles (user transparency, hackability) from Smalltalk or LISP machines, instead of from the legacy of Unix.

                                                                                                                                        1. 5

                                                                                                                                           Nyxt may be what you are looking for. More info here & here.

                                                                                                                                          1. 1

                                                                                                                                            Oooh, indeed. That is significantly closer to what I want.

                                                                                                                                          2. 4

                                                                                                                                             Re 2: sounds like Mezzano (https://github.com/froggey/mezzano), apparently. Actually running on arbitrary hardware is even harder, of course, because all the hardware is always lying…

                                                                                                                                            1. 1

                                                                                                                                              That seems interesting!

                                                                                                                                               Really, you’d bootstrap on QEMU or something, and then slowly expand h/w support. If you did this, you could “publish” a hardened image as a unikernel, which would be the basis of a deployment story closer to modern practice.

                                                                                                                                               ETA: I’m not sure I’d use Common Lisp as the language, but it’s certainly a worthwhile effort. The whole dream is something entirely bespoke that works exactly as I want.

                                                                                                                                              1. 3

                                                                                                                                                 Well, Mezzano does publish a QEMU image, judging from discussions in #lisp it is quite nice to inspect from within, and judging from the code it has drivers for some specific live hardware… A cautionary tale, of course, is that in the Linux kernel most of the code is drivers…

                                                                                                                                                1. 4

                                                                                                                                                   Not something that Mezzano is currently trying to do AFAIK, but there was a project, Vacietis, to compile C to CL, with the idea of being able to re-use BSD drivers that use the bus_dma API. From http://lisp-univ-etc.blogspot.com/2013/03/lisp-hackers-vladimir-sedach.html:

                                                                                                                                                   “Vacietis is actually the first step in the Common Lisp operating system project. I’d like to have a C runtime onto which I can port hardware drivers from OpenBSD with the minimal amount of hand coding”

                                                                                                                                            2. 3

                                                                                                                                              #1 emacs forever.

                                                                                                                                              1. 1

                                                                                                                                                Would something like w3.el be a starting point for this, or are you envisioning something that doesn’t really fit with any existing elisp package?

                                                                                                                                                1. 2

                                                                                                                                                  Like, I’ve used w3 in the past, but I’m thinking more like xwidgets-webkit, which embeds a webkit instance in Emacs. I should start hacking on it in my copious free time.

                                                                                                                                                  1. 1

                                                                                                                                                     That makes a lot of sense. This makes me think of XEmacs of old; ISTR it had some of those widget integrations built in and accessible from elisp.

                                                                                                                                                     Come to think of it, didn’t most of that functionality get folded into mainline Emacs?

                                                                                                                                                    I love emacs, a little TOO much, which is why I went cold turkey 4-5 years back and re-embraced vi. That was the right choice for me, having nothing at all to do with emacs, and everything to do with the fact that it represents an infinitely deep bright shiny rabbit hole for me to be distracted by :)

                                                                                                                                                    “If I can JUST get this helm-mode customization to work the way I want!” and then it’s 3 AM and I see that I’ve missed 3 text messages from my wife saying WHEN ARE YOU COMING TO BED ARE YOU INSANE? :)

                                                                                                                                                    1. 2

                                                                                                                                                       I feel seen. Yeah, I basically live in Emacs; it informs both of my answers above. Basically, I want the explorability of Emacs writ large across the entirety of my computing.

                                                                                                                                              1. 13

                                                                                                                                                 I try to write meaty commit messages. Working in FLOSS projects where the original author is long gone has taught me they are invaluable. But all the posts about writing good commit messages are missing the point. The main reason people don’t make the effort to write a good commit message is that they don’t read them in the first place. If someone sees commits as a write-only medium, why would they make an effort when writing them?

                                                                                                                                                 Ask around: how do other people navigate the history of a file/function? Many of them will tell you they use GH’s web interface to browse the history of a file, and that is an awful way to do it. The first step should be to teach people how to use a tool similar to Emacs’ vc-annotate, or even plain git on the command line, as sketched below.
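
                                                                                                                                                 For example (both flags are standard git; the path and function name are hypothetical):

                                                                                                                                                 ```
                                                                                                                                                 git log --follow -p -- src/parser.c    # one file's history, with diffs, across renames
                                                                                                                                                 git log -L :parse_header:src/parser.c  # one function's history
                                                                                                                                                 ```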


                                                                                                                                                 “You should use the imperative mood when writing commit messages.”

                                                                                                                                                 I would add an additional reason: git itself uses the imperative mood, e.g. “merge pull requests”.

                                                                                                                                                 Btw /u/lazau, there is a typo in that sentence in the article (“imperitive”).

                                                                                                                                                1. 3

                                                                                                                                                  Thanks for the feedback. For context I work in a large codebase where the original author is usually also long gone :). I agree that most people see commits as a write-only medium. I think I could add more motivating examples so that people can see why good commit messages might be valuable.

                                                                                                                                                1. 3

                                                                                                                                                   Sun and NeXT had Display PostScript, and OS X had Display PDF with Quartz – so it’s still possible to add a high-level graphics layer on top, including color-space conversion. The new parameter that the article introduces is the need for device-specific rasterisation. In a multi-screen desktop where a window might span more than one device, we would need the corresponding subsets rasterised to match each specific device.

                                                                                                                                                  1. 7

                                                                                                                                                    DPS was great and technically standardized as part of PostScript, but more interesting IMHO was NeWS. It was almost like the modern web: a graphical application written in object-oriented PostScript ran in a separate process and communicated with a backend process. Stuff that could be done entirely in the GUI could run without having to round-trip to the “application logic” side of things.

                                                                                                                                                    1. 2

                                                                                                                                                       I’d love to see a modern reimagining of NeWS. Keep the PostScript drawing model (everyone does anyway), provide a compositing model, a resource-management layer (for data movement to the display server), an audio interface, and an interface to run GPU shader programs on top. Use WebAssembly for distributing the bits of client-side code. You’d end up with something that could be implemented in a web browser with WebSockets, Canvas, audio, and WebGPU, so you’d get a remote display interface that anyone can connect to with software they already have installed, while people using it as their main display server could run something a lot more lightweight than a web browser.

                                                                                                                                                      1. 1

                                                                                                                                                         NeWS always sounds interesting, but I’ve never been able to find much information online about the specifics of its API. E.g., did it allow applications to “claim” parts of the screen?

                                                                                                                                                        1. 3

                                                                                                                                                          This may be interesting.

                                                                                                                                                          1. 1

                                                                                                                                                            Ha, did that pop up on the orange site? I downloaded it today but forgot from where. A piece of gold, like Taligent’s OpenDoc environment.