1. 3

    Most of these toolkits require GL and thus Cgo. It’s good that they exist, but sad that they’re dirty in this way.

    Though, the unikernel puzzles me.

    1. 4

      [I do not have a hat for it, but am a Gio maintainer]

      Yes, Gio does require GL and CGO for most platforms. On Windows it actually bypasses CGO, so you can trivially cross-compile a Gio application for Windows from any other OS. The requirement for CGO is less onerous than I expected, though. It’s been really easy to build and distribute Gio applications for all OSes in my experience.

      The unikernel was a demonstration that you can build special-purpose applications with GUIs easily. I think (though I’m having trouble remembering) that Elias’ goals there were both to demonstrate a way of sandboxing applications without running a whole virtualized OS, and to offer a possible kiosk-style deployment option. Looks like Elias talked about it during the First Community Call if you want to hear it from him.

      1. 2

        I’m still trying to figure out how I’m going to maintain the independence from cgo on Windows while incorporating my Rust-based AccessKit project. I don’t think reimplementing AccessKit in Go will be a reasonable solution, nor do I want to implement it solely in Go, as that would hinder adoption by code in other languages. And I do indeed want AccessKit to be used across multiple languages, so as not to spread accessibility efforts too thin. I’m not even sure that implementing UI Automation (the Windows accessibility API) in pure Go would be feasible anyway; UIA tends to make many calls into the COM interfaces implemented by the application, and we’d need to measure the current overhead of calling into Go from outside.

        I plan to provide a C ABI wrapper for AccessKit. So one option would be to compile that as a DLL, then call that DLL using Go’s syscall package. But that would require Gio users to distribute a DLL with their Windows applications, which they don’t have to do now. And if you make the DLL optional, you can bet that some developers will omit it, leading to inaccessible applications. I saw that happen when Qt implemented accessibility in a plugin. One of my goals with AccessKit is to eliminate as many excuses as possible for omitting accessibility, and make it impossible for downstream developers to turn off, including accidentally. So if we go with the DLL option, Gio would need to fail to run on Windows without that DLL, and I understand this may be unacceptable.

        Of course, one option is to simply require cgo on Windows. But that would require application developers to have a MinGW toolchain, which they don’t need now. That would lead to another excuse for omitting accessibility.

        Elias suggested that it might be possible to use a .syso file. But given that I’m going to be using Rust, it may require some elaborate toolchain hacking to produce a suitably self-contained .syso file. It would also likely require using a GNU toolchain, which AFAIK isn’t an option for Windows on ARM64. I don’t know if Gio is running on that combination of OS and architecture yet, but I know Go is.

        So I see no really good option right now.

      2. 1

        Is there a quick, obvious way to tell if a project requires cgo without having to grep the source or trying to build with cgo disabled? I’m allergic to cgo, and am often annoyed with how long it takes to figure out if an external thing needs it when I’m evaluating a long list of possible external things I might want to use.

      1. 10

        I’m compelled to point out here that Gio doesn’t yet support platform accessibility APIs, meaning Gio-based applications are completely unusable for people who require tools such as screen readers. I don’t say this to criticize the Gio developers. I’ve already discussed the issue with them, and I believe they’re waiting on me to do the heavy lifting here, which I will. I just point this out so that developers that are thinking about using Gio in their applications will take this limitation into consideration when making their choice.

        1. 7

          [I do not have a hat for it, but am a Gio maintainer]

          Indeed, we do lack platform accessibility support right now. We’re grateful that you’re stepping up to tackle this problem generally, and I’m hoping to contribute directly to your efforts in the future.

          Platform accessibility support is a requirement before Gio 1.0. We definitely take the lack of it seriously.

        1. 4

          I’ll spend some more time hacking on AccessKit, my new cross-platform GUI accessibility abstraction. Not sure if I’ll spend the whole weekend on that though.

          1. 1

            Thank you for working on this, and for engaging communities like gio as you do it!

          1. 4

            Good choice of name and domain name for the website. They are used like the plague, and the negative connotation is most welcome these days, when people do things the way they are supposed to rather than having a specific reason for it.

            Why do people use floats? Honest question. I don’t know any reason for using floats in any situation.

            1. 18

              Numeric processing with high dynamic range is simpler with floating-point numbers than fixed-point numbers. In particular, they have the ability to temporarily exceed range limitations with a fair amount of headroom and only a modest loss of precision.

              1. 2

                I agree this is the kind of thing they are appropriate for. A rather specific use case.

                1. 16

                  I’m not sure that “any science, physics or simulation anywhere, ever” is a very specific use case. Just not one that overlaps much with current hip new computing tech.

                  1. 14

                    High dynamic range = most graphics, so it’s not actually very specific

                2. 10

                  They require less memory and are adequate for some kinds of programming where higher precision isn’t necessary. For instance, https://gioui.org uses them for pixel offsets because fractional pixels don’t matter beyond a point. 0.00000000001 pixels isn’t usually worth worrying about in an application’s layout.
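
                  The float32 granularity near screen-sized coordinates is easy to check; here’s a sketch (Python’s struct module round-trips through 32-bit floats; the 1920 coordinate is just an illustrative screen width):

```python
import struct

def next_float32_up(x: float) -> float:
    """Return the next representable float32 above a positive float32 value."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    (nxt,) = struct.unpack("<f", struct.pack("<I", bits + 1))
    return nxt

# Spacing between adjacent float32 values near x = 1920 (a common screen width):
ulp = next_float32_up(1920.0) - 1920.0
print(ulp)  # 0.0001220703125 (2**-13): far finer than any visible sub-pixel offset
```

                  Even at the right edge of a 4K display the spacing only doubles, so float32 still resolves offsets well below a thousandth of a pixel.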

                  I also think that there are some processors on which float32 operations are faster than float64, but I don’t think that’s true of conventional x86_64 processors.

                  1. 3

                    I also think that there are some processors on which float32 operations are faster than float64, but I don’t think that’s true of conventional x86_64 processors.

                    It’s true that there are lots of cases where you won’t see a difference at all because you’re limited by something else (e.g. the cost and latency of arithmetic can be hidden by memory latency sometimes), but I would not state this with confidence.

                    When you’re cache or memory bandwidth limited, you can fit twice as many float32 numbers into each cache line.
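
                    The size difference is straightforward to confirm from the standard library (a sketch; 'f' and 'd' are CPython’s IEEE float32/float64 type codes):

```python
from array import array

floats32 = array("f", range(1024))  # 32-bit floats
floats64 = array("d", range(1024))  # 64-bit floats

print(floats32.itemsize, floats64.itemsize)  # 4 8
# A 64-byte cache line holds twice as many float32 values as float64 values:
print(64 // floats32.itemsize, 64 // floats64.itemsize)  # 16 8
```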

                    Vector operations on float32s typically have twice the throughput. All the vector operations in SSE and SSE2 for example come in versions that work on float32 or float64 numbers packed into 128 bit registers. The 32 bit versions operate on twice as many numbers with the same or better latency and clocks-per-instruction (according to Intel’s documentation, at least).

                    A few operations (such as division) have slightly worse latency noted in Intel’s docs for float64 versions.

                    1. 2

                      In order to have an insignificant error like the example you give, you are using up more memory, not less.

                      Having deltas orders of magnitude smaller than the precision you need is an argument against floats, not for them. There is nothing positive about brute-forcing the maximal error down by throwing useless bytes at it.

                      They do have high precision around the range people use them in. What they don’t have, and I suppose this is what people actually mean by precision, is exactness. Given that they are created by constructors accepting decimal notation in most programming languages, most common round decimal numbers are not representable with such data types. And that is why I don’t understand why they are so ubiquitous.

                      1. 8

                        I don’t think most floats are created to represent decimal numbers. Some are, like when representing currency or in a calculator, but most floats are representing light or sound levels, other sensor readings, internal weights in neural networks, etc.

                        I’m guessing you may work in a domain like finance where decimal numbers seem ubiquitous, but you’re not considering the wider use cases.

                        1. 3

                          Yes, I do work in domains where decimal numbers are ubiquitous, and floats are the plague there. I see them used even for representing natural numbers “in case we want to use a smaller unit”, and other such nonsense.

                          Even when they’re used to store sensor readings (like image or sound), the only valid reason to use them is if dividing your scale exponentially serves you better than linearly. Which I would argue is the case perhaps half the time or less.

                    2. 9

                      In machine learning, it’s common to optimize your parameters for space, since in those cases you typically don’t care about the precision loss compared to doubles and it lets you halve your parameter size, but you don’t want to use fixed point because your parameter range can be large. There are some approaches that involve 8-bit or 16-bit fixed point, but it’s not a universal thing at all.

                      In general, though, a lot of times they’re just Good Enough, and they save you from having to think about scaling constants or writing your own multiplication algorithms due to hardware support.

                      1. 7

                        Are you talking about the C float type, i.e. 32-bit IEEE floating-point, or all floating point types? If the latter, what commonly available data type should people use instead? Last I checked, few languages offer fixed-point types.

                        32-bit float is often used internally in audio code (for example Apple’s CoreAudio) because it has as much precision as a 24-bit integer but (a) gives you a lot more dynamic range at low volume, and (b) doesn’t turn into garbage if a calculation overflows. (I don’t know if you’ve ever heard garbage played as PCM audio, but it’s the kind of harsh noise that can literally damage speakers or people’s hearing, or at least really startle the shit out of someone.)
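
                        The headroom point can be sketched numerically (Python; the wrap-around function simulates what two’s-complement integer hardware does, and the sample values are made up):

```python
def wrap_int16(x: int) -> int:
    """Two's-complement wrap-around, as 16-bit integer hardware behaves."""
    return (x + 2**15) % 2**16 - 2**15

# Mix two loud samples near 90% of int16 full scale, then halve the result.
a, b = 29000, 29000
mixed = wrap_int16(a + b) // 2   # the sum overflows and wraps first
print(mixed)                     # -3768: garbage, played back as harsh noise

# The same mix as floats in [-1.0, 1.0]: the sum exceeds 1.0 only temporarily.
af = bf = 29000 / 32767
print((af + bf) / 2)             # ~0.885, exactly the intended average
```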

                        A general reason for using floats is that a general-purpose system, like the JavaScript language or the SQLite database, doesn’t know the details of every possible use case. Providing FP math means it’s good enough for most use cases, and people with specialized needs can layer their own custom types, like BCD or fixed-point, on top of strings or integers.

                        1. 5

                          JavaScript is a typical case where floating point is a bad default. Typical use cases for numerics are user-facing values such as prices, not 3D graphics.

                          1. 2

                            I haven’t heard anyone say what should be used instead. Are you saying JavaScript should have provided a BCD DecimalNumber type instead of floating point? How would people doing any sort of numerics in JS have felt about this? Doing trigonometry or logarithms in BCD must be fun.

                        2. 5

                          I’ve gone through a personal rollercoaster in my relationship with IEEE floating-point, and my current sense is that:

                          a) I’d love to have computers support a better representation like Unums or Posits or something else.

                          b) What we have available in mainstream hardware is fairly decent and certainly worth using while it’s the only option. Overflow and underflow in floating-point isn’t that different from overflow in integers, and a whole lot less likely to be encountered by most of us given the far larger domain of floating-point numbers.

                          c) The real problem lies in high-level languages that hide processor flags from programmers. C and C++ have some bolted-on support with intrinsics that nobody remembers to use. Rust, for all its advances in other areas, sadly hasn’t improved things here. Debug mode is a ghastly atavism, and having release builds silently wrap is a step back from gcc’s (also sub-optimal) -ftrapv and -fwrapv flags.

                          1. 8

                            Haha as the implementor of unums and posits, I’d say unums are too much of a pain in the ass. Posits might have better performance, though if you need error analysis, it might be strictly worse. Posits had a fighting chance with the ML stuff going on but I think that ship has sailed.

                            As for ignored processor flags: I think Zig is making an effort to make those sorts of intrinsics easily accessible as special-case functions in the language, and hopefully they take on a strategy of making polyfilling easy for platforms that have partial support.

                            1. 3

                              I use floats for GPU based computer graphics. I’ve read “Beating Floating Point at its Own Game: Posit Arithmetic”, and posits sound amazing: better numerical properties and more throughput for a given amount of silicon. But I’ve not used them, and I will never use them unless they are adopted by GPU manufacturers. Which I guess won’t happen unless some new company disrupts the existing GPU ecosystem with superior new GPU tech based on posits. Something like Apple with the M1, but more analogous to SpaceX with the Falcon and Starship. I don’t see any reason for the large entrenched incumbents to gamble on new float technology that is incompatible with existing graphics standards.

                              1. 4

                                Yeap. Sorry it didn’t work out. We tried though (I even have some verilog models for posit circuits).

                            2. 3

                              Swift’s default integer arithmetic operators panic on overflow. (There are alternate ones that ignore overflow, for performance hot spots.)

                              1. 1

                                Or when you actually need that behaviour, such as in hashing functions. But you don’t want your customer ids to actually wrap around silently.

                            3. 3

                              Why do people use floats? Honest question. I don’t know any reason for using floats in any situation.

                              They’re used to represent real numbers. It’s easy and convenient to have types like float that natively represent real numbers. It’s also nice to have a statically allocated, roughly word-sized representation (as opposed to arbitrary precision).

                              1. 2

                                Why? What makes them more suited than integers for representing real numbers?

                                1. 1

                                  Fractions, sqrt, etc. Fixed-point arithmetic drops a huge range of precision at either the high or the low end, and is also slower for many operations.

                                  1. 1

                                    I don’t understand what you mean. Integers have uniform precision throughout the scale. Choose the base unit as you see fit for the precision you want, and that is what you get.

                                    It always “drops the same range of precision”. If you need the precision of a float around zero, then set your base unit to that and there you have it: that’s your maximum error, unlike with floats.

                                    When are integers slower, and why? You always have to at least perform the same operation on the mantissa of your floats, no?

                                    1. 5

                                      the problem with fixed point is that you have to choose one range of precision, otherwise you’re just inventing what is likely to be a suboptimal software version of floating point. While there are (were?) cases where fixed point is acceptable, in general floating point can do better, and is faster.

                                      The reason fixed point is slower boils down to the lack of hardware support for fixed point, but there are a few other reasons: efficiently and accurately computing a number of real functions often requires converting fixed point to some variant of floating point anyway.

                                      In general integer operations are faster for basic arithmetic (and I really mean the basics: +,-,*), complex functions are typically made “fast” in fixed point arithmetic by having lookup tables that approximate the results, because fixed point arithmetic is typically used in places where accuracy is less important.

                                      Multiplication, addition, subtraction of floating point is only marginally slower than integer arithmetic, and once you add in the shifts required for fixed point arithmetic floating point actually outperforms it.
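
                                      The extra shift shows up even in a minimal fixed-point sketch (Python; the Q16.16 format here is just one illustrative choice):

```python
FRAC_BITS = 16          # Q16.16: 16 integer bits, 16 fractional bits
ONE = 1 << FRAC_BITS

def to_fix(x: float) -> int:
    return round(x * ONE)

def fix_mul(a: int, b: int) -> int:
    # The raw product carries 32 fractional bits, so every multiply
    # needs a renormalizing shift on top of the integer multiply.
    return (a * b) >> FRAC_BITS

product = fix_mul(to_fix(1.5), to_fix(0.5))
print(product / ONE)    # 0.75
```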

                                      1. 1

                                        I have no idea what you mean by “lack of hardware support”. Manipulating integers is literally everything a processor does at a low level.

                                        What are you referring to?

                                        1. 1

                                          It’s not a matter of just doing integer operations, because as you say everything is fundamentally integers in a CPU. The question is how many integer operations you have to do.

                                          If you’re doing fixed point arithmetic you have to do almost everything floating point logic requires, only without hardware support. Fixed point arithmetic isn’t simply integer arithmetic; it’s integer arithmetic plus large-integer work, plus shifts. And there isn’t hardware support because, if you’re adding hardware anyway, you may as well do floating point, which is more generally useful.

                                          1. 1

                                            Not to be stubborn, but I am still not getting your point.

                                            The question is how many integer operations you have to do.

                                            Less than half as many as if you use floats, obviously. Whatever operations your CPU does for integers, it needs to do for the mantissa of your floats, plus handle the exponents, plus move stuff out of the way and back in place.

                                            Fixed point arithmetic isn’t simply integer arithmetic

                                            I am not sure what you think I am suggesting, but to be clear it is: reduce all your variables to integers and do only integer arithmetic. It is, in the end, everything a processor is capable of doing: integer arithmetic. Everything builds on it.

                                            I think the confusion here is the notion of “point”. A computer is capable of representing a finite number of states. A point is useful for us humans to make things more readable, but for a computer, a number is always an element in a finite set. You suggest I need to mess around with fixed-point arithmetic because I reject floats. But what I mean is: unless you hit scale limitations, there is no reason for using anything other than integers.

                                            If the confusion is how the result is presented to the user… that is a non-problem. Just format your number in whatever way is most human-readable.

                                            1. 2

                                              Not to be stubborn, but I am still not getting your point.

                                              no worries

                                              Ok, the first problem here is that you can’t reduce everything to integer arithmetic: if I am doing anything that requires fractional values I need to adopt either fixed-point or floating-point arithmetic. Fixed point is inherently too inflexible to be worth building a hardware back end for in a general-purpose CPU, so it has to be done in software, which costs multiple instructions per operation. If you are comparing fixed point to floating point in software, fixed point generally wins; but in reality floating point is in hardware, so the number of instructions you dispatch (which for basic arithmetic is the bottleneck) is lower, and floating point wins.

                                              In this case the point has nothing to do with the human-visible representation. The point means how many fractional bits are available. Whatever your representation, floating vs. fixed, the way you perform arithmetic depends on that decision. Fixed-point arithmetic simplifies some of this logic, which is why software implementations of it can beat software floating point, but it does that by sacrificing range and precision.

                                              To help clarify things, let’s use a concrete example: how do you propose 1.5 gets represented, and how do you perform 1.5 * 0.5 and represent the result? I need to understand what you are proposing :D

                                              1. 1

                                                I think the claim that precision and range are sacrificed doesn’t really hold. There is no silver bullet. The range of floats is larger because they have less precision as you approach the limits. Arguably, they have more precision where it is most useful, but this can be very deceiving. Include a large number in your computation and the end result might have less precision than most people would think. They look at the decimal representation with a zillion decimal places and assume a great deal of precision, but you might have polluted your result with a huge error and it won’t show. This doesn’t happen with ints. You reach range limitations faster, of course… but that isn’t very common with 64-bit ints.
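
                                                The absorption effect is easy to show in Python (float64; above 2**53 adjacent doubles are more than 1 apart):

```python
big = 1e16                      # exactly representable as a double
print(big + 1 - big)            # 0.0: the +1 was silently absorbed
print(2.0**53 + 1 == 2.0**53)   # True

# The same arithmetic with plain integers keeps every unit:
print(10**16 + 1 - 10**16)      # 1
```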

                                                But your final question perfectly illustrates the problem. As a programmer, you need to decide what should happen ahead of time. If you mean those values as exact values, then you pretty much need a CAS to handle fractions, roots and so on, which obviously has no use for floats. If you mean approximate values, you need to be explicit and be in charge of the precision you intend: 1.5*0.5 is 0.7 or 0.8. It doesn’t make sense to include more decimal places if you are not doing exact calculation.

                                                We learn this in school, and my pocket TI calculator does this. If you set precision to automatic and insert 1/3, the result is zero. But if you insert 1/3.0, the result is 0.3. Why would you want more decimal places if the number cannot possibly be stored with its exact value and is derived from numbers with less precision?

                                                If you write 1.000 kg, it doesn’t mean the same as 1 kg. The former implies precision to the gram, and the easiest thing when writing a computer program is to just reduce to grams and proceed with integer arithmetic.

                                                1. 3

                                                  the claim that precision and range are sacrificed doesn’t really hold

                                                  This is well studied. For example, I’ve seen the results of a computational fluid dynamics simulation: taking f128 to be “ground truth”, f64 gets far closer to the correct answer than any fixed64 representation.

                                      2. 3

                                        Consider something like 1 / x², where x >> 1. You have to calculate x², which will be a very large number, and then take the reciprocal, which will be a very small number. You can’t pick a single fixed point to cover both, and there’s no opportunity in that one calculation to switch between two formats.

                                        Situations like that are common in many scientific applications, where intermediate stages of the computation are much bigger or smaller than both your inputs and your final output.
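
                                        A small Python sketch of that range problem (the 64-bit fixed-point scale below is an arbitrary illustrative choice):

```python
x = 1e6
intermediate = x * x           # 1e12
result = 1 / intermediate      # 1e-12: float64 spans both ends comfortably

# A single 64-bit fixed-point scale cannot. With units fine enough for ~1e-12:
SCALE = 2.0**-48               # one unit is about 3.6e-15
max_representable = (2**63 - 1) * SCALE
print(max_representable)       # 32768.0: nowhere near the 1e12 intermediate
```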

                                        1. 1

                                          That is when one would use floats, yes. But let’s be clear: they are common in some scientific applications, specifically chemistry. The max value of a 32-bit integer is plenty for most usages.

                                          64-bit processors have been the standard for over a decade. Even those situations you mention hardly need a range larger than a 64-bit integer.

                                          1. 2

                                            That is when one would use floats, yes. But let’s be clear: they are common in some scientific applications, specifically chemistry. The max value of a 32-bit integer is plenty for most usages.

                                            I can’t think of a scientific field which wouldn’t prefer floats to 32 bit integers. What happens when you need to find a definite integral, or RK4 a PDE, or take the determinant of a matrix?

                                            64-bit processors have been the standard for over a decade. Even those situations you mention hardly need a range larger than a 64-bit integer.

                                            If we’ve got 64 bits, then why not use a double?

                                            1. 1

                                              Regarding your first paragraph: I don’t think you are getting that I am suggesting adjusting the base unit to whatever precision delta you intend. Otherwise I don’t understand your question. Could you be clear about what exactly happens if you use floats that wouldn’t happen otherwise? They are both data types made of a discrete set representing points on the real number axis. What limitations exactly are you suggesting integers have, other than their range?

                                              As for your second paragraph, isn’t it the other way around? Isn’t the point of floats to overcome integer range and precision limits and strike a balance between both? Why would you need to do that if you don’t have such limitations anymore? Floats were used all the time on 8-bit processors, even for things you would use integers for, because of range limitations. We don’t need to do that on our 32- and 64-bit processors.

                                              I think there is this wrong idea that ints are meant to be used only for natural numbers and the like, which is of course a misconception.

                                              1. 1

                                                Regarding your first paragraph: I don’t think you are getting that I am suggesting adjusting the base unit to whatever precision delta you intend. Otherwise I don’t understand your question. Could you be clear about what exactly happens if you use floats that wouldn’t happen otherwise? They are both data types made of a discrete set representing points on the real number axis. What limitations exactly are you suggesting integers have, other than their range?

                                                My point is that all three of those things involve working with both very large and very small numbers simultaneously. You can’t “just set the precision delta”. Or if you can, you’d have to provide a working demonstration, because I believe it’s much harder than you’re claiming it is.

                                                Also, lots of science involves multiplying very small by very large numbers directly, such as with gravitational force.

                                                As for your second paragraph, isn’t it the other way around? Isn’t the point of floats to overcome integer range and precision limits and strike a balance between both? Why would you need to do that if you don’t have such limitations anymore? Floats were used all the time on 8-bit processors, even for things you would use integers for, because of range limitations. We don’t need to do that on our 32- and 64-bit processors.

                                                I think we use them for lots of reasons, and one is that you don’t need to pick a basis in advance of computation, like you do with fixed width.

                                  2. 1

                                    Floating-point numbers can only represent (binary) fractions, but many real numbers need to be represented by computations which emit digits.
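
                                    In Python you can see the binary fraction directly (float.as_integer_ratio): every finite float is exactly p/2**q for integers p and q.

```python
p, q = (0.1).as_integer_ratio()
print(p, q)          # 3602879701896397 36028797018963968
print(q == 2**55)    # True: the denominator is always a power of two
# So the literal 0.1 is stored as the nearest binary fraction, not as 1/10:
print(p * 10 == q)   # False
```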

                                  3. 3

                                    One of the most important reasons is that floats are invariably literals whereas “proper” decimals are usually not

                                    1. 1

                                      How so?

                                      1. 1

                                        eg in Python

                                        # literal reals in python are IEEE floats
                                        >>> 0.2 + 0.1
                                        0.30000000000000004
                                        

                                        vs

                                        # Decimal implements exact decimal arithmetic - ie proper numbers
                                        >>> from decimal import Decimal
                                        >>> Decimal("0.2") + Decimal("0.1")
                                        Decimal('0.3')
                                        

                                        Extra syntax and an extra library (even though it’s in the stdlib!) are a huge barrier. I have seen a number of real-world systems written to use floats - and suffer constant minor bugs - simply because it was easier.

                                        Once or twice I have ripped out floats for decimals. It’s not too hard but you do need a typechecker to keep things straight.

                                    2. 2

                                      Precision degrades much more gracefully with floating-point operations (which round to approximate values or saturate to 0 or inf) than with integer or fixed-point operations (which truncate or wrap on overflow).

                                      If you have to do work with real numbers then floats are usually best of those three options.
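
                                      A quick Python contrast (the 32-bit wrap is simulated, since Python’s own ints are arbitrary precision):

```python
import math

# Floats degrade gracefully: overflow saturates to inf instead of wrapping.
print(1e308 * 10)              # inf
print(math.isinf(1e308 * 10))  # True

def wrap_int32(x: int) -> int:
    """Two's-complement wrap-around of a 32-bit integer register."""
    return (x + 2**31) % 2**32 - 2**31

# Fixed-width integers wrap: one past INT32_MAX lands at the far end.
print(wrap_int32(2**31 - 1 + 1))  # -2147483648
```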

                                    1. 7

                                      If one thing from urbit could escape that project and be used elsewhere, I’d like it to be their monosyllabic pronunciation for ASCII symbols: https://urbit.org/docs/hoon/hoon-school/hoon-syntax/#reading-hoon-aloud

                                      I think this would be really nice to have as a shared language with other programmers, though there are some symbols that already have short names in no need of replacement.

                                      1. 6

                                        I think Talon voice is a better precedent, as highlighted in “speaking code”.

                                      1. 1

                                        This is neat! I’ve seen some comments about wanting to be able to do this interactively in a way that permits saving and editing. I think kakoune does a pretty good job there.

                                        If you open the text of the log in kakoune, you can type % to select the whole buffer, and then you can pass it through an external filter like ripgrep: |rg -v <term>. The result is that your selection (the whole file) is replaced by the output of your filter (only lines that match). The great thing is that you can repeat this operation any number of times without your regular expression needing to get any longer, and it’s trivial to save the current matching contents of the file to a new file (:w name) or an in-memory buffer (%y:e -scratch<ret>p) at any point in the process.
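                                        The same successive-narrowing idea can be sketched outside the editor (the log lines below are invented for illustration): each pass filters the previous selection, so no single pattern ever has to grow.

```python
lines = [
    "DEBUG starting up",
    "ERROR disk full",
    "heartbeat ok",
    "ERROR timeout",
]

# Each step mirrors one `|rg -v <term>` pass over the current selection.
selection = [l for l in lines if "DEBUG" not in l]
selection = [l for l in selection if "heartbeat" not in l]

print(selection)  # ['ERROR disk full', 'ERROR timeout']
```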

                                        1. 3

                                          Ah, I’ve been wondering what people mean when they say that Clojure has a bad license. Thanks for this.

                                          On a separate note, I don’t see the benefit of using the MIT license over an even more permissive license, like the Unlicense. I would love to hear some arguments for the former, as I’m personally quite unsure of how to license my own projects (they’ve either not been licensed at all, or use the Unlicense).

                                          1. 6

                                            I think Google and other big corps don’t allow contributing to or using unlicensed projects because public domain is not legally well defined in some jurisdictions (lawyer pedantry, which to me seems like a positive thing :^)

                                            Personally, I go with the Unlicense for one-off things and projects I don’t really want/need to maintain. For a library or something I expect people to actually use, I pick MIT or ISC (a variant of MIT popular in the OCaml ecosystem) because of the legal murkiness of the Unlicense. And if I were writing something like a game or some other end-user application, I’d probably use the GPLv3. For example, for a mobile app, that would discourage people from just repackaging it, adding trackers or ads, and dumping it on the Play Store.

                                            1. 4

                                              Yes! “Copyleft is more appropriate to end-user apps” is my philosophy as well. Though actually I end up using the Unlicense for basically all the things.

                                              legal murkiness of the Unlicense

                                              Isn’t that kinda just FUD? The text seems good to me, but IANAL of course.

                                              1. 2

                                                Isn’t that kinda just FUD?

                                                Reading the other comments seems like it is, I guess I was just misinformed. I still prefer MIT because, as others have said, it’s more well known.

                                              2. 2

                                                This is somewhat off-topic, but I never thought the ISC license was really popular in the OCaml ecosystem. For a crude estimate:

                                                $ cd ~/.opam/repo/default/
                                                $ grep -r 'license: "ISC"' . | wc -l
                                                1928
                                                $ grep -r 'license: "MIT"' . | wc -l
                                                4483
                                                
                                                
                                                1. 2

                                                  I think it’s more popular than in most other ecosystems at least.

                                                  1. 3

                                                    Might be. It would be interesting to get some stats about language/package ecosystem and license popularity.

                                                    1. 2

                                                      Here it is for Void Linux packages; not the biggest repo but what I happen to have on my system:

                                                      $ rg -I '^license' srcpkgs |
                                                        sed 's/license="//; s/"$//; s/-or-later$//; s/-only$//' |
                                                        sort | uniq -c | sort -rn
                                                         1604 GPL-2.0
                                                         1320 MIT
                                                          959 GPL-3.0
                                                          521 LGPL-2.1
                                                          454 BSD-3-Clause
                                                          392 Artistic-1.0-Perl, GPL-1.0
                                                          357 Apache-2.0
                                                          222 BSD-2-Clause
                                                          150 GPL-2
                                                          133 ISC
                                                          114 LGPL-3.0
                                                          104 Public Domain
                                                           83 LGPL-2.0
                                                           83 GPL-2.0-or-later, LGPL-2.1
                                                           63 GPL-3
                                                           50 MPL-2.0
                                                           47 OFL-1.1
                                                           41 AGPL-3.0
                                                           36 Zlib
                                                           31 BSD
                                                           26 GPL-2.0-or-later, LGPL-2.0
                                                           23 Unlicense
                                                           21 Artistic, GPL-1
                                                           20 Apache-2.0, MIT
                                                           19 ZPL-2.1
                                                           19 BSL-1.0
                                                      [...]
                                                      

                                                      It groups the GPL “only” and “-or-later” in the same group, but doesn’t deal with multi-license projects. It’s just a quick one-liner for a rough indication.

                                                2. 1

                                                  This sounds like a nice scheme for choosing a license. Thanks for your explanation of when to choose each one.

                                                3. 4

                                                  On a separate note, I don’t see the benefit of using the MIT license over an even more permissive license, like the Unlicense

                                                  It’s impossible to answer the question without context. No license is intrinsically better or worse than another without specifying what you want to achieve from a license. With no license, you prevent anyone from doing anything, so any license is a vector away from this point, defining a set of things that people can do with your code. For example:

                                                  • Do you want to allow everyone to modify and redistribute your code? If not, then you don’t want a F/OSS license.
                                                  • Do you want to allow people to modify and redistribute your code without giving their downstream[1] the code and the rights to do the same? If not, you want a copyleft license of some kind.
                                                  • Do you want to allow people to modify and redistribute your code linked to any other open source code? If so, then you want either a permissive license or a copyleft license with specific exemptions (making something that is both copyleft and compatible with both GPLv2 and Apache 2 is non-trivial, for example).
                                                  • Do you want people who are not lawyers to be able to understand what they can do with your code, when composed with whatever variations on copyright law apply in their particular jurisdiction? Then you want a well-established license such as BSD/MIT, one of the Creative Commons family, Apache, or GPL, for which there are a lot of human-readable explanations.
                                                  • Do you want to be able to take contributions from other folks and still use the code in other projects under any license? If so, then you want a permissive license or to require copyright assignment.
                                                  • Do you intend to sue people for violating your license? If not, then you probably won’t gain anything from a license with terms that are difficult to comply with because unscrupulous people can happily violate them, unless you assign copyright to the FSF or a similar entity[2].
                                                  • Do you want to allow people to pretend that they wrote your code? If so, then you want to avoid licenses with an attribution requirement and go for something like Unlicense.

                                                  From this list, there are two obvious differences between the MIT license and Unlicense: MIT is well-established and everyone knows what it means, so there’s no confusion about what it means and what a court will decide it means, and it requires attribution and so I can’t take an MIT-licensed file, put it in my program / library and pretend that I wrote it. Whether these are advantages depends on what you want to allow or disallow with your license.

                                                  [1] There’s a common misconception that the GPL and similar licenses require people to give back. They don’t, they require people to give forwards, which amounts to the same thing for widely-distributed things where it’s easy to get a copy but is not so helpful to the original project if it’s being embedded in in-house projects.

                                                  [2] Even then, YMMV. The FSF refused to pursue companies that were violating the LGPL for GNUstep. Being associated with the FSF was a serious net loss for the project overall.

                                                  1. 2

                                                    I should have clarified that I don’t care about attribution. Thank you for the informative and well-structured overview.

                                                    1. 3

                                                      Looking at the text of Unlicense, it also does not contain the limitations of liability or warranty. That’s probably not a problem - when the BSD / MIT licenses were written there was a lot of concern about implied warranty and fitness for purpose, but I think generally that’s assumed to be fine for things that are given away for free.

                                                      You might want to rethink the attribution bit, though. When you’re looking for a job, it can be really useful to have your name associated with something your employer is able to look at. It is highly unlikely that anyone will choose to avoid a program or library because it has an attribution clause in its license, so the cost to you of requiring attribution is negligible, whereas the benefits can be substantial.

                                                      If you’re looking for people to contribute to your projects, that can have an impact as well.

                                                      1. 3

                                                        I don’t care about attribution mainly for philosophical reasons. I dislike copyright as a concept and want my software to be just that, software. People should be able to use it without attributing the stuff to me or anyone else.

                                                        1. 2

                                                          Attribution is more closely related to moral rights than IP rights, though modern copyright has subsumed both. The right of a creator to be associated with their work predates copyright law in Europe. Of course, that’s not universal: in China for a long time it was considered rude to claim authorship and so you got a lot of works attributed to other people.

                                                          1. 2

                                                            Right, I don’t want to claim authorship of much of the stuff I create. I simply want to have it be a benefit to the people who use it. I don’t have a moral issue with not crediting myself, so I won’t.

                                                      2. 2

                                                        Perhaps you would like the ZLib license, then? Unlike MIT, it does not require including the copyright and license text in binary distributions.

                                                      3. 2

                                                        I’m no lawyer, but as I understand it, authorship is a “natural right” that cannot be disclaimed at least within U.S. law. It is separate from copyright. The Great Gatsby is in the public domain, but that doesn’t mean that I get to say that I wrote it. You probably can’t press charges against me for saying so as an individual, but plagiarism is a serious issue in many industries, and may have legal or economic consequences.

                                                        My point is that the Unlicense waives copyright, but someone claiming to have created the work themselves may still face consequences of a kind. Whether that is sufficient protection of your attribution is a matter of preference.

                                                        1. 3

                                                          My understanding is that it’s a lot more complex in the US. Authorship is under the heading of ‘moral rights’, but these are covered by state law and not federal. There are some weird things, such as only applying to statues in some states.

                                                      4. 3

                                                        Not licensing makes the product proprietary: even when the source is publicly visible, no one can use it without your permission. IANAL, but the Unlicense (just like CC0) isn’t really legally binding in some countries (you cannot place your work in the public domain; it enters only after copyright expires). So MIT is not a bad choice, as the only difference is that you need to be credited by the authors of derivative works.

                                                        1. 3

                                                          0-BSD is more public domain-like as it has zero conditions. It’s what’s known as a “public-domain equivalent license”.

                                                          https://en.wikipedia.org/wiki/Public-domain-equivalent_license

                                                          1. 3

                                                            The Unlicense is specifically designed to be “effectively public domain” in jurisdictions that don’t allow you to actually just put something in the public domain, by acting as a normal license without any requirements.

                                                            That’s, like, the whole point of the Unlicense :) Otherwise it wouldn’t need to exist at all.

                                                            1. 2

                                                              I have heard that the Unlicense is still sometimes not valid in certain jurisdictions. 0-BSD is a decent alternative as it’s “public-domain equivalent”, i.e. it has no conditions.

                                                            2. 2

                                                              Right, I’ve heard there’s some legal issues with it before, thanks for reminding me.

                                                              EDIT: Looks like there’s no public domain problems with the Unlicense after all, so I’m not worried about this.

                                                              1. 1

                                                                Looks like there’s no public domain problems with the Unlicense after all

                                                                Where did you see this?

                                                              2. 2

                                                                The whole point of the CC0 is to fully disclaim all claims and rights inherent to copyright to the fullest extent possible in jurisdictions where the concept of Public Domain does not exist or cannot be voluntarily applied. There’s very little reason to suspect that choosing the CC0 is less legally enforceable than MIT.

                                                                1. 3

                                                                  CC0 seems fine but is somewhat complex. I prefer licenses that are very simple and easy to digest.

                                                              3. 3

                                                                I don’t see the benefit of using the MIT license over an even more permissive license, like the Unlicense

                                                                Purely pragmatically, the MIT license is just better known. Other than that: the biggest difference is that the MIT requires attribution (“The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.”), and the Unlicense doesn’t.

                                                                As for the concerns over “public domain”: IMHO this is just lawyer pedantry. The text makes it plenty clear what the intent is, and I see no reason why it shouldn’t be upheld in court. The gist of the first paragraph is pretty much identical to the MIT’s, except that “the above copyright notice and this permission notice shall be included in all copies” is omitted. If it only said “this is public domain”, then sure, you could land in a conflict over what “public domain” means exactly, considering it isn’t a concept that exists everywhere. But that’s not the case with the Unlicense.

                                                                1. 2

                                                                  This is comforting to know. Thank you for the clarification!

                                                                2. 1

                                                                  I’ve released a lot of code under a dual MIT/Unlicense scheme. ripgrep is used by millions of people AFAIK (through VS Code) and is under this scheme. I have heard zero complaints. The purpose of such a thing is to make an ideological point with the Unlicense, while also providing the option to use something like the MIT which is a bit more of a known quantity. Prior to this, I was releasing code with just the Unlicense and I did receive complaints about it. IIRC, from big corps but also individuals from jurisdictions that don’t recognize public domain. It wasn’t so much that they definitively said they couldn’t use the Unlicense, but rather, that it was too murky.

                                                                  IANAL although sometimes I play one on TV. In my view, the Unlicense is just fine and individuals and big corps who avoid it are likely doing it because of an overly conservative risk profile. Particularly with respect to big corps, it’s easy to see how the incentive structure would push them to do conservative things with respect to the law for something like software licenses.

                                                                  While the dual licensing scheme seems to satisfy all parties from a usage perspective, I can indeed confirm that it prevents certain big corps from contributing changes back to my projects, because those changes need to be licensable under both the MIT and the Unlicense. To date, I do not know the specific reasons for this policy.

                                                                  1. 2

                                                                    I never really understood how dual-licensing works, can you explain a bit? Do users pick the license that they want or can they even cherry pick which clauses of each license they want to abide by?

                                                                    1. 2

                                                                      AIUI, you pick one of the licenses (or cascade the dual licensing scheme). That’s how my COPYING file is phrased anyway.

                                                                      But when you contribute to a project under a dual licensing scheme, your changes have to be able to be licensed under both licenses. Otherwise, the dual license choice would no longer be valid.

                                                                  2. 1

                                                                    As I state in the article, I don’t think the EPL is a “bad license.” Clojure Core uses the EPL for very good reasons — it’s just that most of those reasons are unlikely to apply to Random Clojure Library X.

                                                                    EDIT: I had replied regarding The Unlicense, but I see other folks have done a more thorough job below, so I’m removing that blurb. Thanks all.

                                                                    1. 1

                                                                      I should have expressed myself more clearly. I’ve heard people mention Clojure’s license as a downside to the language, and now that I’ve read your article I have an idea of what they’re talking about.

                                                                    2. 0

                                                                      Unlicense

                                                                      I recommend against using this license, because making the ambiguous license name relevant anywhere makes everyone’s life harder. It makes it hard to distinguish between “CC0’d” and “in license purgatory”:

                                                                      “What is this code’s license?”
                                                                      “It’s Unlicensed.”
                                                                      <Person assumes it’s not legally safe to use, because it’s unlicensed>

                                                                      I wish that license would either rename or die.

                                                                    1. 2

                                                                      Does anyone know why a GPG signing key was accessible to the CI environment in the first place? That strikes me as a little odd.

                                                                      1. 2

                                                                        I think it’s not uncommon for CI/CD systems to sign releases. It’s hard to make manual signatures with offline keys scale to really frequent releases. That being said, this is exactly why the practice is dangerous.

                                                                        1. 2

                                                                          When I set up CI/CD for a package to be published to Maven Central a few years ago, the requirement to sign gave me some anxiety from a security perspective but the benefits outweighed the risks.

                                                                          I wonder if there are any package management systems that support a secondary signature/attestation of the package. That is, CI/CD automation signs and releases but maintainers can also sign later as a second, human layer of authenticity.

                                                                          1. 1

                                                                            I think https://sigstore.dev/ is aiming at that kind of use case

                                                                            1. 1

                                                                              I wonder if there are any package management systems that support a secondary signature/attestation of the package. That is, CI/CD automation signs and releases but maintainers can also sign later as a second, human layer of authenticity.

                                                                              This would be really cool, given that for OpenPGP it’s trivial to create a multi-signature file (just concatenate the individual signatures). It’s also possible to notarize an existing signature (signing a signature), e.g. when people want to certify that the CI signature is valid.

                                                                              For builds that are completely reproducible it would be enough for the developer to sign their build and publish the signature with the artifact from the CI. Since the build is reproducible the signature would be over the same data.
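                                                                              As a toy model of that idea (HMAC stands in for OpenPGP detached signatures here; real releases would use gpg, and the keys and names below are invented): because a reproducible build is byte-identical, one artifact can carry independently produced signatures side by side, and a verifier checks whichever signer it trusts.

```python
import hashlib
import hmac

artifact = b"bit-for-bit reproducible build output"

# CI and the maintainer "sign" the same bytes independently.
ci_sig = hmac.new(b"ci-key", artifact, hashlib.sha256).hexdigest()
dev_sig = hmac.new(b"dev-key", artifact, hashlib.sha256).hexdigest()
multi_sig = [ci_sig, dev_sig]  # cf. concatenating detached OpenPGP signatures

def verify(data: bytes, sigs: list[str], key: bytes) -> bool:
    # Accept if any attached signature matches the key we trust.
    expected = hmac.new(key, data, hashlib.sha256).hexdigest()
    return any(hmac.compare_digest(s, expected) for s in sigs)

print(verify(artifact, multi_sig, b"dev-key"))  # True
```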

                                                                        1. 1

                                                                          This didn’t end up having much to do with JavaScript, which surprised me.

                                                                          1. 1

                                                                            Yeah, their title is kind of misleading, it’s more about programming in general than JavaScript. That’s why I didn’t put the JavaScript tag ;)

                                                                          1. 10

                                                                            I wonder about several courses of action.

                                                                            We could double down on the concept of copyleft. Treating corporations as people has led to work-for-hire, a systematic separation of artists from their work. We could extend the Four Freedoms not just to corporations but also to artists, by insisting that corporations are obligated to publish their code; they could stop publishing their code only if they stopped taking that code from artists and claiming it as their own. A basic transitional version of this is implemented by using licenses like the AGPLv3, and it already repels some corporations.

                                                                            We could require decentralization. Code is currently delivered in a centralized fashion, with a community server which copies code to anybody, for free. This benefits corporations because of the asymmetric nature of their exploitation; taking code for free is their ideal way of taking code. Systems like Bittorrent can partially remedy the asymmetry by requiring takers to also be sharers.

                                                                            We could work to make languages easier to decompile and analyze. This partially means runtime code auditing, but it also means using structured language primitives and removing special cases. In the future, corporations could be forced to choose between using unpleasant eldritch old languages or using languages that automatically publish most of their API and logic in an auditable and verifiable manner to any requestor. And they’ll do anything we recommend, as long as it comes with an ecosystem; look at how many folks have actively embraced C, PHP, JS, etc. over the years.

                                                                            1. 15

                                                                              I don’t think that there is anything that can be accomplished by messing around with licenses and in fact trying to keep those sane and not too exotic is the one good thing big tech has done in my opinion.

                                                                              What’s missing is something different that can break the “get vc money”, “acquire users”, “build a moat”, “return money to vc” dance. I personally have no idea what that could be. The one thing I know, is that it produces unlovable software and that there are enough people out there that can do better with a fraction of the money.

                                                                              I also don’t think that the answer lies in more Free software zealotry.

                                                                              1. 14

                                                                                What’s missing is something different that can break the “get vc money”, “acquire users”, “build a moat”, “return money to vc” dance. I personally have no idea what that could be.

                                                                                I would encourage you to look into the model of worker-owned cooperatives.

                                                                                1. 8

                                                                                  I would encourage you to look into the model of worker-owned cooperatives.

                                                                                  I’ve seen this work for consultancies. Everyone involved has to actually be of like mind though, and that can be harder than it appears to actually manifest and sustain.

                                                                                  1. 4

                                                                                    Why would that change the quality of the software? In order to do it right, you’d need money coming in and a small enough team, and if you have a small team and money coming in you can probably make quality software, no matter who holds the company shares.

                                                                                  2. 8

                                                                                    I appreciate your thoughts. I agree that things could be better.

                                                                                    As the saying goes, “if there were no copyright, then there would be no need for copyleft.” (Who first said this? I think it’s from an RMS essay.) The current focus on copyleft is because of the judo which licensing allows us to leverage against corporations. As another saying goes, “corporations are golems before the law;” licenses are made of the same substance as corporations, and form effective weapons.

                                                                                    There are two implicit ideas in your post. I don’t fully understand them, and I don’t want to put words in your mouth or construct strawmen, so instead I’ll have to be vague. The first idea starts with compensating folks for producing code. Since compensation requires a market of purchasers, and software can be copied at marginal cost, there is a natural misalignment of incentives: the producers want to be paid for the labor of production, which tends to increase; but the purchasers want to pay only for the cost of copying, which tends to decrease.

                                                                                    The second idea is that the politics of programming languages are important. On one hand, anybody can create a programming language. On the other hand, there are tendencies for big things to get bigger, including communities expressing themselves with popular languages. But on the gripping hand, every new language is built from the tools of old languages. Rather, it’s a question of which possible new languages we choose to build, and which lessons we choose to learn from the past, and those choices are political since people write expressive code in order to communicate with each other.

                                                                                    The answer to breaking the cycle of capitalism involves either democratizing ownership of the cloud hardware, or democratizing the development and maintenance of the cloud software. The only thing which keeps the capitalists in control is the ownership of property. Free Software is optional but hard to avoid if we want to do anything about it.

                                                                                    1. 3

                                                                                      The current focus on copyleft is because of the judo which licensing allows us to leverage against corporations. As another saying goes, “corporations are golems before the law;” licenses are made of the same substance as corporations, and form effective weapons.

                                                                                      True, but I think that this is a war that is being played on many more levels and IMO, as effective as a license can be, Free software is losing the war on all other fronts. One example is branding. “Open Source” is a much more popular term than “Free software” and surely big tech has helped make that happen.

                                                                                      My point is that big tech has already learned how to win against licenses and it’s through marketing and a myriad of other activities. The FSF from my perspective has no chance at beating that unless it becomes willing to rebuild itself from the ground up, and we’re seeing that that’s not the case.

                                                                                      The answer to breaking the cycle of capitalism involves either democratizing ownership of the cloud hardware, or democratizing the development and maintenance of the cloud software.

                                                                                      Democratizing the development of software IMO has little to do with capitalism nowadays, and a lot more with being competent at shaping up communities around “principled” software projects and by keeping software simple and clean, so that new generations can quickly ramp up and fight against bad software. I leave it to you to judge how GNU is doing in that regard.

                                                                                      1. 4

                                                                                        “Open Source” is a much more popular term than “Free software” and surely big tech has helped make that happen.

                                                                                        The change happened a little earlier, but it’s not surprising that corporations would endorse a corporate-friendly bastardization of a community-grown concept. That’s what it means to be exploitative.

                                                                                        My point is that big tech has already learned how to win against licenses and it’s through marketing and a myriad of other activities.

                                                                                        You have no evidence with which to support this assertion. I intentionally linked to my pile of evidence that corporations systematically avoid certain licenses, including licenses which cover popular software with ecosystems of users. As I have previously explained:

                                                                                        The goal is to enumerate those licenses which are well-known, as a matter of folklore and experience, to designate Free Software which corporations tend to avoid. None of the information that I linked in my answer is new information, but it is cited and sourced so that folks cannot deny the bulk of the thesis for lack of evidence.

                                                                                        The FAANGs are indebted to GNU/Linux, for example, and while they have made efforts to get rid of GNU userlands, they are not yet ready to get rid of Linux. As I said at the beginning of the thread, we asked corporations to use C, and they used C; they chose irrationally because they’re not actually capable of technical evaluations, and this shackled them to our kernel of choice.

                                                                                        Democratizing the development of software IMO has little to do with capitalism nowadays…

                                                                                        This will have to be where we agree to disagree. You have pointed out twice, in your own words, that capitalism matters to producing software. First, you noted that the current cycle is driven by venture capitalists; these are the same sorts of capitalists that, a century ago, were funding colonial projects and having political cartoons made about them. Second, you surely would admit that it’s not possible to develop software without development hardware, which forms operating capital; whoever owns the computers has great control over software developers.

                                                                                    2. 6

                                                                                        There are other capitalistic software endeavors that are considerably more gentle than the current VC insanity. For example, the SQLite folks. I’m not even sure it’s incorporated. But Dr. Hipp is definitely doing it for the money.

                                                                                        Edit: looked it up, SQLite is incorporated as a limited partnership of a small number of people. Corporate contribution results in a very well defined boundary of “what you get”. The “funny” thing about SQLite is that it’s unlicensed: SQLite is in the public domain.

                                                                                      1. 1

                                                                                          But Dr. Hipp is definitely doing it for the money

                                                                                          Eh, this is very simplistic, and I’m not sure how you can be so certain that you know the motives of others. Do you know him?

                                                                                          If you look around, you can find various origin stories behind SQLite, which shed some light on the matter. Any project, and particularly a large or long-lived one, is going to have a mix of motivations, and they can change over time. Money could be one reason, but it’s certainly not the whole story.

                                                                                        1. 2

                                                                                            Dr. Hipp has said so himself in some of his lectures. So, sure, he could be lying or saying it for dramatic effect, but I’m going to take him at his word, and I find it hard to believe that “making money” is zero percent of his incentive for doing it. That money wasn’t the only motivation is exactly my point. “Making money” does not automatically taint a project; in fact, in many cases it’s a good signal that you are at least building something that someone wants. We are just living in times where other societal superstructures favor the type of capitalism that Loris is talking about. My personal take is that it’s ironic that some of the factors that brought about what we have now were conceived specifically to restrict or “strategically guide” capitalism, and have either spectacularly backfired or had some gnarly unintended consequences that were perfectly predictable, if you were listening to the right people.

                                                                                      2. 2

                                                                                        My theory is that saying the problem is “profit motive” is almost right - the fundamental problem is trying to sell anything other than “what the user wants”, and receiving money from anywhere except directly from the user.

                                                                                        For instance, the “try free” button mentioned in the article is usually from someone trying to fund software development with cloud-services revenue. Cloud services revenue is not the software (or rather, it’s the software plus some other stuff), so they need to maintain the not-software that is not necessarily what the users need, and that distracts and gets in the way.

                                                                                        Ads/tracking, open core, all fall into the fundamental problem of prioritizing not-software over software.

                                                                                        So basically, I’m saying the future is patreon or liberapay or a libre app store.

                                                                                        There are two main ways we can make this happen:

                                                                                          1. We make paying for Free Software more convenient. There’s a lot of low-hanging fruit here. For instance, open up F-Droid on your phone and look for an app called Phonograph. It’s GPL3 and offers a paid version ($5) called Phonograph Pro. Phonograph Pro is available from GitHub (if you compile it yourself) or the Google Play store, but not from F-Droid. F-Droid doesn’t support purchasing Free Software or conditionally-available binaries, you see. Selling Free Software is about selling convenience, so we damn well better make it convenient to buy Free Software. But more than that, it’s hard to figure out who or where to give money, or even if it’s possible. I like Mesa; if I want to give them money, I should be able to do so before the random impulse wears off.

                                                                                          And to go even further, if we’re ambitious, we should try to handle identity and payment on the desktop (which, come to think of it, is too long for this paragraph or post; I’ll gladly elaborate though) so as to make it easier in the long term for people to pay.

                                                                                          2. We should foster an attitude of “if you like it, put money towards it. Anything.” Because IIRC, currently only 0.01%ish of users donate money. That is insanely low.

                                                                                        This is super weird and tightropey, since freedoms aren’t supposed to be conditional and realistically Free Software is fundamentally tied to voluntarism, and we really don’t want to make room for people to justify proprietary software by saying “well you ought to be paying anyway, and as long as you’re paying you’re not losing anything anyway”.

                                                                                        So, we need people to voluntarily pay within an order of magnitude or two of what the proprietary alternatives receive. I don’t see how anyone can sustainably compete on quality with Google, unless their revenue is at least 1% of Google’s. I just don’t see a primarily volunteer-programmer project ever scaling that high.

                                                                                        1. 1

                                                                                          Yeah, I’m with you there. I’m searching for an alternative path as well for https://arbor.chat. It takes money to grow your software, but there has to be a better model for funding than the traditional one. We’re thinking we might establish a nonprofit that accepts donations, but also provides a hosted set of infrastructure with a sourcehut-style subscription. I’d love to talk more about this kind of thing with anyone who is interested.

                                                                                          1. 2

                                                                                            As both an owner in a free software small business and also a small-time investor with a software freedom bent, I’m very interested in these kinds of topics and more collaboration between the people/projects/companies trying to find the way.

                                                                                            1. 1

                                                                                              I found this to be an interesting approach: https://squidfunk.github.io/mkdocs-material/insiders/

                                                                                              It seems like it’s working for them. For a theme, it seems to have quite a bit of financial support.

                                                                                              1. 1

                                                                                                Thanks for sharing that! I’m not yet sure how I feel about the approach taken, but it’s certainly a very interesting data point.

                                                                                          2. 7

                                                                                            We could double down on the concept of copyleft. Treating corporations as people has led to working-for-hire, a systematic separation of artists from their work. We could not just extend the Four Freedoms to corporations, but also to artists, by insisting that corporations are obligated to publish their code; they could stop publishing their code only if they stopped taking that code from artists and claiming it as their own. A basic transitional version of this is implemented by using licenses like AGPLv3, and it does repel some corporations already.

                                                                                            Doubling-down on the concept of copyleft is basically the agenda of the free software movement, which is the thing that @kristoff states is a “disaster on too many fronts and its leadership has failed so badly that I don’t even want to waste words discussing it”. I don’t think it’s obvious that the free software movement has failed - certainly not so obvious that it’s not worth words discussing it. But certainly it’s the case that lots of software is not published under copyleft licenses, some free and some non-free. If the free software movement is a failure so long as anyone at all is publishing non-free software or even free but non-copyleft software, then sure, it’s a failure so far; but that seems like an awfully stringent requirement for success.

                                                                                            We could require decentralization. Code is currently delivered in a centralized fashion, with a community server which copies code to anybody, for free. This benefits corporations because of the asymmetric nature of their exploitation; taking code for free is their ideal way of taking code. Systems like Bittorrent can partially remedy the asymmetry by requiring takers to also be sharers.

                                                                                            We already have this. Redis is a BSD-licensed piece of free software whose source code is publicly-available here on GitHub. Anyone can legally fork this and redistribute it, without asking anyone’s permission and without even doing all that much work. If GitHub deplatforms the project for any reason, it’s very easy to set up alternative git hosting on some other service. If someone really doesn’t like the fact that the official redis website has too big of a try free button, nothing is stopping them from setting up a website for their own fork of redis that doesn’t have that button.

                                                                                            We could work to make languages easier to decompile and analyze. This partially means runtime code auditing, but it also means using structured language primitives and removing special cases. In the future, corporations could be forced to choose between using unpleasant eldritch old languages or using languages that automatically publish most of their API and logic in an auditable and verifiable manner to any requestor. And they’ll do anything we recommend, as long as it comes with an ecosystem; look at how many folks have actively embraced C, PHP, JS, etc. over the years.

                                                                                            A lot of organizations using unpleasant eldritch old languages are stable and stodgy ones that have been around for decades, and aren’t necessarily even for-profit corporations. MUMPS is primarily used by hospitals, and COBOL has plenty of use in banks and government bureaucracies. A lot of the reason for this is that these organizations have software requirements that don’t change very much, and have made the trade-off that having a software stack that few people understand is better than updating that software stack and risking introducing bugs. Corporations that haven’t gotten big and institutional yet have more incentives to use newer technology stacks - and if they refuse to anyway and that choice contributes to the company failing in the marketplace, whatever, it’s just one more of many failed companies.

                                                                                            1. 1

                                                                                              We already have [decentralization]. Redis is a BSD-licensed piece of free software whose source code is publicly-available here on GitHub. Anyone can legally fork this and redistribute it, without asking anyone’s permission and without even doing all that much work. If GitHub deplatforms the project for any reason, it’s very easy to set up alternative git hosting on some other service.

                                                                                              This is the “we have food at home” fallacy. To use words more carefully: GitHub is the “community server” from which “code is currently delivered in a centralized fashion”. You are saying that if one point of centralization vanishes, then the community can establish another. Yes, but it takes time and effort, and the community is diminished in the meantime; removing those centralized points is damage to the communities.

                                                                                              A properly-decentralized code-delivery service would not be so fragile. It would not have any Mallory who could prevent a developer from obtaining code, save for those folks in control of the network topology. (A corollary is that network topologies should be redundantly connected and thickly meshed, with many paths, to minimize the number of natural Mallory candidates.) Any developer who wanted to use a certain library would only need to know a cryptographic handle in order to materialize the code.

                                                                                              Note that these services would only work as long as a majority of participants continue to share-alike all code. So corporations have a dilemma: Do they join in the ecosystem and contribute proportional resources to maintaining the service while gaining no control over it, or do they avoid the ecosystem and lose out on using any code which relies upon it? Of course they could try to cheat the network, but cryptography is a harsh mistress and end-to-end-encrypted messages are black boxes.
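                                                                                              The “cryptographic handle” idea above is essentially content addressing: the handle is a hash of the code itself, so any untrusted peer can serve the bytes and the requester verifies them locally. A minimal sketch (the names and the in-memory “peer” are hypothetical, not any real protocol):

```python
import hashlib

def handle_of(data: bytes) -> str:
    """The content handle: a SHA-256 digest of the code blob."""
    return hashlib.sha256(data).hexdigest()

def fetch_and_verify(handle: str, fetch) -> bytes:
    """Fetch from any untrusted peer; the handle alone proves integrity."""
    data = fetch(handle)
    if handle_of(data) != handle:
        raise ValueError("peer returned tampered or wrong content")
    return data

# A toy "network": any peer could serve these bytes; none needs to be trusted.
blob = b"def hello(): return 'library code'"
handle = handle_of(blob)
peer = {handle: blob}
assert fetch_and_verify(handle, peer.get) == blob
```

                                                                                              Because the handle itself proves integrity, it doesn’t matter who serves the content; a peer that alters the bytes is caught by the final check, which is why no single Mallory can silently substitute code.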

                                                                                              1. 3

                                                                                                This is the “we have food at home” fallacy. To use words more carefully: GitHub is the “community server” from which “code is currently delivered in a centralized fashion”. You are saying that if one point of centralization vanishes, then the community can establish another. Yes, but it takes time and effort, and the community is diminished in the meantime; removing those centralized points is damage to the communities.

                                                                                                GitHub isn’t the community server. There is no the community. Lots of separate open-source projects with their own communities exist, and they can individually choose to host the authoritative version of their code on whatever git platform they want, whether that’s GitHub, Gitlab, Gitea, the ssh-based hosting built into git, or some other option.

                                                                                                I agree that if a given open-source project deliberately chooses to host their code and issues and documentation and so on on GitHub, rather than on a platform that they have control over, they are vulnerable to community disruption and damage if GitHub decides to stop serving them. And insofar as GitHub is popular, lots of projects exist that are making this choice. I agree that this is a bad idea, and that these projects shouldn’t do this. Personally, I no longer host my own open-source code on GitHub, and I only interact with it in order to contribute to projects that do use it.

                                                                                                But individually getting a lot of separate organizations to switch away from a useful-but-nonfree software platform to a free one that maybe doesn’t have as much UI polish as the nonfree choice is a hard collective action problem (it’s actually pretty much the same problem as getting people to switch from Mac OS or Windows to Linux on their desktop computers). You can’t compel large numbers of people to value freedom from GitHub’s disruptive product choices over the value they currently get from GitHub. You can’t compel a bunch of different people to do the work to switch off GitHub all at once.

                                                                                                A properly-decentralized code-delivery service would not be so fragile. It would not have any Mallory who could prevent a developer from obtaining code, save for those folks in control of the network topology. (A corollary is that network topologies should be redundantly connected and thickly meshed, with many paths, to minimize the number of natural Mallory candidates.) Any developer who wanted to use a certain library would only need to know a cryptographic handle in order to materialize the code.

                                                                                                Radicle is a great idea, I’m a fan. If some project currently using GitHub as their authoritative git repo decided to switch to Radicle and abandon their GitHub-based infrastructure, I think that would be great.

                                                                                            2. 3

                                                                                              One route that has been under-explored is to pay for software distribution.

                                                                                              On some level software has the same issue as music, copying it is super easy. It doesn’t matter if the source is open or not if the distribution is made convenient enough that people are willing to pay for it.

                                                                                              1. 2

                                                                                                I like to call this model “libre-non-gratis” and there have been a small but strong set of examples over the years. Conversations (android app) is one currently active example

                                                                                            1. 85

                                                                                              It all sucks. It’s just a matter of prioritizing.

                                                                                              Windows:

                                                                                              • Cheaper computers than Apple machines
                                                                                              • Maximum software availability/compatibility
                                                                                              • Pretty janky UX, in my opinion

                                                                                              Mac:

                                                                                              • A lot of people really like the design (UX and aesthetic)
                                                                                              • The unixy stuff is more “built in” than WSL on Windows
                                                                                              • Shit’s expensive
                                                                                              • Still have vast software availability
                                                                                              • Apple really talks a good talk on privacy and such

                                                                                              Linux:

                                                                                              • Everything has really rough edges
                                                                                              • All desktop environments are full of papercuts/bugs
                                                                                              • Counter to the above point, there are a TON of UX choices, which is neat
                                                                                              • Minimal software availability
                                                                                              • The only option that is (mostly) free software, where you actually own and control your own machine

                                                                                              I don’t mean to rub anyone the wrong way and I’m not preaching, but the last bullet point is the one that matters the most to me and I suspect it always will be the most important for me, personally. If I can get a machine that runs Linux, even in a fairly hobbled way, I’m going to take it over other options where I don’t know whether my machine is going to just not let me use it one day (literally happened to my ex with Windows Vista) or whether it’s going to send information about what software I’m running on it to headquarters, etc. No amount of polish and convenience is really worth it to me. But for others, the calculus is different. That’s okay, too.

                                                                                              1. 31

                                                                                                the penultimate point, about software availability, depends a lot on what software you need. both my work and hobby coding are in the area of programming languages and tooling, and the communities and development process around a lot of them are very much linux-first.

                                                                                                1. 19

                                                                                                  I like this breakdown too. As someone who literally wrote every line of code that powers their desktop environment (sans the X11 server, but including the X11 client, WM, pager and so on), I really appreciate the ability to build and customize my own bespoke UX from the ground up. Every other environment I’ve tried just seems to be locking things down more and more, making it harder to customize things to my liking.

                                                                                                  1. 2

                                                                                            As someone who has been using your X11 WM for a few years, I appreciate your efforts! I’ve now moved back to macOS, mostly because of annoying inconsistencies in the ecosystem around Linux, but I really miss the “don’t raise window on click and focus” feature. So far I’m not aware of any alternatives out there.

                                                                                            Also, I’ve been impressed by how well Android x86 (that counts as desktop Linux too, right?) integrates with the MBP 2015 touchpad. There was no force touch support, but the multitouch experience was on par with that of Android running on a phone with a touch screen. That is to say, it’s way better than macOS, and there’s no comparison with the multitouch support in GNOME and other Linux-based distros—e.g. I’ve had a lot of issues with scrolling being apparently designed only for mouse-wheel scroll-by-line interactions.

                                                                                                    1. 1

                                                                                                      Oh wow, I didn’t realize anyone besides my wife and I used Wingo. Neat!

                                                                                              and there’s no comparison with the multitouch support in GNOME and other Linux-based distros—e.g. I’ve had a lot of issues with scrolling being apparently designed only for mouse-wheel scroll-by-line interactions

                                                                                                      Yes, the state of affairs is truly terrible compared to Mac. I’m happy when two finger scrolling works at all. But like, I have to disable tap-to-click on my touchpad because otherwise the software (or hardware?) isn’t good enough to detect my palm and I start getting spurious clicks. I’d love to have a clicky trackpad, but so few laptops have those.

                                                                                                      1. 2

                                                                                                        Have y’all been following this effort to improve Linux touchpads to match Mac usability? https://bill.harding.blog/2021/02/11/linux-touchpad-like-a-mac-update-firefox-gesture-support-goes-live/

                                                                                                        1. 3

                                                                                                          Oh, didn’t realize they were accepting financial contributions. I’m now a sponsor too! Thanks again for the heads up.

                                                                                                          1. 1

                                                                                                            I’ve been vaguely aware of that project, yes. I think I just assumed that I would eventually benefit from it automatically, but maybe that’s a wrong assumption. I’ll take a closer look, thanks.

                                                                                                          2. 2

                                                                                                            I used Wingo for about a year myself, and I quite liked it! I mostly switched away because I wanted to explore the landscape, and then I started trying to run Wayland desktops exclusively. Wingo is a great little project though. Thanks for spending the time on it!

                                                                                                          3. 1

                                                                                                            I really miss the “don’t raise window on click and focus” feature

                                                                                                            oh yeah that is such a killer improvement. When I switched to Blackbox WM back around 2007 I left on sloppy focus and click-doesn’t-raise. A lot of people use one without the other, but I think they need to be used together… and then it is just so much nicer than anything else out there.

                                                                                                            I’ve heavily customized my copy of blackbox (and wrote my own taskbar, terminal emulator, and many other things) over the years and I never want to leave it. Always painful to have to use other systems without it.

                                                                                                        2. 11

                                                                                                          As a mostly happy Mac user, I 100% concur with these bullets.

                                                                                                          1. 6

                                                                                                            Thank you, I’ve been thinking about this since reading your comment. It has convinced me to move back to using linux as a daily driver.

                                                                                                            Such horrendous behavior we have come to expect from Microsoft Windows, Google Android, and many aspects of iOS and Mac OS, as well as our web browsers and mobile applications. Well, we don’t have to accept it. CentOS is far superior, because it means my computer is no longer going to actively spy on me for the sake of profits or more nefarious reasons.

                                                                                                            Of course, I’ll be starting out with CentOS 7 for stability’s sake, but if I need to upgrade over the years for compatibility I may use Fedora, as long as all the software is free and, here’s the biggest factor, as long as it does not actively spy on me and log my activities.

                                                                                                            1. 2

                                                                                                              Xfce4 is simple, stable and polished. The only rough edge I can think of is alsa vs jack vs pulseaudio. This will hopefully be resolved by pipewire.

                                                                                                              1. 3

                                                                                                                XFCE introducing CSDs gives rise to concerns though – if I wanted to have unreliable window decorations and behavior, I could just pick Gnome?

                                                                                                                I hope that gets corrected in the future. (Along with introducing the ability to use a better language to write XFCE applications.)

                                                                                                            1. 1

                                                                                                              Big fan of https://github.com/Immediate-Mode-UI/Nuklear . Very glad there are some new flavors in this field, will check it out.

                                                                                                              1. 1

                                                                                                                It’s not in C, but have you already checked out https://gioui.org?

                                                                                                              1. 2

                                                                                                                This is an important thing to think about. I’m trying to develop an open, interoperable chat platform myself, but it’s tricky while the project is small. I’d welcome someone developing another implementation to interoperate with, but I lack the free time to build one. We have been pretty careful with the spec, so I think it shouldn’t be hard to build another implementation.

                                                                                                                If you’re interested in a young chat platform that aspires to these ideals, you can find us at https://arbor.chat

                                                                                                                I certainly hope to steer clear of many of the traps described here though. My dream is to achieve financial sustainability for the project while also providing a fantastic collaboration tool.

                                                                                                                1. 2

                                                                                                                  If you’re interested in a young chat platform that aspires to these ideals, you can find us at https://arbor.chat

                                                                                                                  Thanks, bookmarked. I’ll watch its development with interest.

                                                                                                                  1. 1

                                                                                                                    You’re welcome to drop in and say hi! https://man.sr.ht/~whereswaldon/arborchat/getting-started.md

                                                                                                                1. 13

                                                                                                                  I can understand that this is frustrating from a packager’s point of view, but I personally really like the reliability of static linking. Additionally, some of the modern languages provide really good tooling for managing this stuff. Want to figure out which Go binaries need to be updated for a security vulnerability? You can list the exact dependency versions built into them with go version -m ./path/to/binary. The Go module system also explicitly prevents modules other than the top-level one from pinning dependency versions, which limits the extent of the update-pinned-versions nightmare (though it doesn’t completely eliminate it).
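
                                                                                                                  The same metadata that go version -m prints is embedded in the binary itself, so you can also read it programmatically with the standard library’s runtime/debug package (for the running binary; the debug/buildinfo package added in Go 1.18 does the same for arbitrary binaries on disk). A minimal sketch, where the module path and version in the comment are made-up examples:

                                                                                                                  ```go
                                                                                                                  package main

                                                                                                                  import (
                                                                                                                  	"fmt"
                                                                                                                  	"runtime/debug"
                                                                                                                  )

                                                                                                                  // depVersions flattens a binary's embedded build info into
                                                                                                                  // "path version" strings, one per dependency, e.g.
                                                                                                                  // "golang.org/x/sys v0.1.0" (hypothetical module and version).
                                                                                                                  func depVersions(info *debug.BuildInfo) []string {
                                                                                                                  	var out []string
                                                                                                                  	for _, dep := range info.Deps {
                                                                                                                  		out = append(out, dep.Path+" "+dep.Version)
                                                                                                                  	}
                                                                                                                  	return out
                                                                                                                  }

                                                                                                                  func main() {
                                                                                                                  	// ReadBuildInfo reads the metadata compiled into the running
                                                                                                                  	// binary; it reports ok=false for binaries built without
                                                                                                                  	// module support.
                                                                                                                  	if info, ok := debug.ReadBuildInfo(); ok {
                                                                                                                  		for _, line := range depVersions(info) {
                                                                                                                  			fmt.Println(line)
                                                                                                                  		}
                                                                                                                  	}
                                                                                                                  }
                                                                                                                  ```

                                                                                                                  From there it’s a small step to the tooling mentioned above: scan a directory of deployed binaries and match their pinned versions against a vulnerability list.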

                                                                                                                  1. 7

                                                                                                                    go version won’t tell you if any of the packages have problems: you need to manually look up every dependency, which can be some amount of work if there are a whole bunch. NPM, for example, will warn you on npm install. I think some more work on the Go tooling is needed here.

                                                                                                                    But yeah, the path forward is clearly better tooling, not reverting everything to C-style development circa 1992.

                                                                                                                    1. 4

                                                                                                                      I think we’re starting from a distributor’s position of “I have a bag of binaries, and a bag of vulnerable package versions. How do I match them up?” The dependency lookup was already happening, or is a given.

                                                                                                                  1. 2

                                                                                                                    This is pretty cool! I’m building a similar tool as a desktop application, though it’s nowhere near as mature. My use-case is mostly around exploration as well, but I wanted to be able to do that outside of my browser. Perhaps there’s opportunity to collaborate?

                                                                                                                    My project lives here: https://github.com/whereswaldon/binnacle

                                                                                                                    1. 2

                                                                                                                      I’m going to keep debugging a click event routing problem in the latest release candidate of sprig, the gio-based reference GUI client for Arbor, a tree-based chat system.

                                                                                                                      If you’re curious about what tree-based chat might feel like, feel free to grab a client and drop by!

                                                                                                                      1. 10

                                                                                                                        Caveat: this is an alpha-level project.

                                                                                                                        I’m really enjoying using Gio. It lets you write cross-platform GUIs in pure Go and essentially sits on top of each platform’s EGL implementation.

                                                                                                                        I’ve found it to be extremely easy to write simple interfaces that run reliably on all desktop OSes and on mobile.

                                                                                                                        It definitely has some missing features (and big ones, like accessibility and common widgets), but it’s developing rapidly.

                                                                                                                        1. 4

                                                                                                                          For those like me who didn’t know about it, here’s a description of EGL from the official website:

                                                                                                                          EGL is an interface between Khronos rendering APIs such as OpenGL ES or OpenVG and the underlying native platform window system. It handles graphics context management, surface/buffer binding, and rendering synchronization and enables high-performance, accelerated, mixed-mode 2D and 3D rendering using other Khronos APIs.

                                                                                                                          EGL provides mechanisms for creating rendering surfaces onto which client APIs like OpenGL ES and OpenVG can draw, creates graphics contexts for client APIs, and synchronizes drawing by client APIs as well as native platform rendering APIs.

                                                                                                                          1. 1

                                                                                                                            I’ve been poking around Gio also, and it seems really pleasant to use. I would love to be able to write “native” desktop applications in pure Go.

                                                                                                                            1. 1

                                                                                                                              It looks really cool, especially the wasm output. The missing accessibility sucks though, I don’t need it (yet), but I feel irresponsible using a tool where it’s missing.

                                                                                                                            1. 1

                                                                                                                              I gave a talk last year on terminal tools that I can’t live without. If you’re interested, a recording is available here: https://youtu.be/hsf9FWT9-gY

                                                                                                                              I’ve seen some of them listed in the comments here, but there might be a few new ones.

                                                                                                                              1. 1

                                                                                                                                I’m not saying that it’s on the same level of abstraction or maturity as the other frameworks discussed, but I’ve been really impressed with Gio. I can write my interface once in Go and run it as a desktop app on every major OS, as a phone native application on iOS and Android, and in the browser using WASM and WebGL.

                                                                                                                                1. 1

                                                                                                                                  I used Vim full-time for about six years until I discovered Kakoune. Now that serves as my daily choice. As others have mentioned, once you add LSP support to any editor it becomes about as feature-ful as an IDE, so the choice really comes down to things like how you navigate the interface.