This is interesting, because it’s based on narrowing and folding org files. For some reason, that doesn’t jibe with how I want to do presentations.
So I’ve written an Emacs presentation mode based on Org mode files as well. It aims to take an org file written to be a presentation (i.e., you’re not presenting an org file that you previously wrote; you’re writing an org file to present), and present it in a clean way.
I haven’t been sure how to explain the project – it’s a thing where a bunch of items drop and interact with each other. I’ve been going with “physics simulator toy”, but am open to other ideas. Hopefully it’s interesting enough – it’s been fun to play with, but I wonder how much of that is the joy in playing with something I made.
I’ve seen many of these things over the years, often implemented with “graphics.” Probably “sand simulator” is the most widely used name. Though, with anti-matter, this doesn’t quite fit the mold.
A number of implementations have gone by the name “Falling Sand Game”, which was the name of a Java version that may have been the “original” of the genre.
Yeah, it is less about directly corresponding to reality, and more about adding fun items. It definitely doesn’t make actual sense, like the antimatter you mentioned.
I’m trying to get an Emacs package ready for wider distribution. It’s a little physics toy, where sand (and other items) drop down to the bottom of a grid. They can interact in various ways: rocks push aside sand, antimatter annihilates whatever it touches, balloons actually go up.
It’s not exactly a game, not exactly a tool. Just something I’ve been having fun working on. If you want to try it out, it’s here. The only thing it needs that isn’t part of Emacs is ht.el.
If you click the links for the response data, none of the formats return any valid responses, only nulls…
Is this a bug with that part of the system, or did the entire survey lose all my data?
On reddit, one of the people behind the survey commented:
in everybody’s case so far who I’ve checked, their responses did make it into the database, but there is a glitch that seems to be occurring in some cases with the exporting of results.
Ahh cool, thanks. I don’t care if the export doesn’t work as long as the data is safe. (Not because my data is important, but it would matter if this were happening to more people than just me.)
Good to hear it’s fine! 👍
Safety cover at the sailing club on Sunday for racing, will be the first time I’ve been at the club in a couple of months so looking forward to it. Might even make it across and sail in the next week or so (🙀).
Garage conversion is continuing at a decent pace, I need to extend the existing lighting circuit into two and wire up some more sockets this weekend ready for the builder to continue next week. Given I have most of a day free and it’s not supposed to be raining, I should be able to get the lawn cut and the Z4 back on the road too. It’s practically criminal how long that’s been sat on the drive undriven.
Safety cover at the sailing club on Sunday for racing, will be the first time I’ve been at the club in a couple of months so looking forward to it. Might even make it across and sail in the next week or so (🙀).
I’m taking my first sailing class this weekend! Two days, gonna go for ASA 101 certification tomorrow. Today was fun, even if I got a little seasick for a bit.
I’ve also written an RSS feed generator. I think there are some things I need to fix (e.g., relative links should be made absolute), but it’s working! Code here.
The article praises the decision to expose buffers to the end-user, and I agree that it’s powerful to be able to use the same commands in basically every context… but it leads to a lot of confusion and awkwardness that I think needs to be addressed as well. My case in point: Several times a week, I need to rename a file I’m editing. Here’s what it looks like:
M-x rename-file
M-x rename-buffer
In a different text editor, it looks like this:
It could look like that in Emacs as well, but if you go looking around you find that it’s not in the core command set, and you have to dig up some script someone has already put together and then shove that into your init.el. Only then can you have workflow #2.
Emacs knows which buffers are file-backed, and could very well offer a command like rename-file-buffer. I don’t know why it doesn’t, in the Year of Our Lord 2022. Maybe some bikeshedding over what counts as a file-backed buffer, or naming the function, or some internals thing I don’t know about. But it probably has something to do with “everything’s a buffer” and not trying too hard to smooth over that.
While I agree with you about frustrations on awkward interfaces surrounding buffers, I’m not sure that I follow your example. For your example, I’d normally do C-x C-w (or M-x write-file). It seems like it follows your desired path, accomplishes your objectives, and only uses built-in commands and default keybindings? Is there something that I’m missing?
This was my first thought. I gave saturn the benefit of the doubt here because C-x C-w copies the file. It doesn’t rename it. But both dired-rename-file and write-file accomplish what you want: changing the name of both the file and the buffer.
The abundance of options is not necessarily a good thing. It hampers discoverability. I realize that saying things like that arguably makes me a bad emacs user, but we do exist.
True, but this is a case where the functionality is all of: obvious, easy to code, and absent from the core product. I figure that the reason this feature is absent is because core emacs already has two better ways to get the workflow done. I don’t remember when I discovered the write-file method. I’d bet that it was early on in my use of emacs though, so we’re talking early ’90s. I came to dired mode pretty late but learned very quickly how powerful it was.
I agree. I used write-file for years before I discovered dired mode. I have to admit that in my case, the extra file hanging around is usually not a problem for me, but I use emacs as an IDE/code editor. Emacs is not a lifestyle for me.
I always keep dired buffers of my working directories and clean up from there. Best dired key command might be ~ (flag all backup files).
That’s absolutely true and it’s interesting that they haven’t done this already.
How much do you want to bet that there aren’t a million and one “rename-buffer-and-file” functions floating around in an equal number of .emacs files? :)
For me, while I really truly do appreciate the power and wisdom of having my editor also be a capable programming environment unto itself, I think exactly this kind of lack of polish is going to continue to erode emacs adoption over the long haul.
Emacs not only knows when a buffer is attached to a file, it also does the right thing when it performs operations on the file from dired mode. I have a file I want to rename. I open the directory that it’s in with dired mode by pressing C-x C-f RET from the buffer visiting the file. I press 'R', then fill in the new filename in the dialog. After the rename is finished, I press 'F' and I’m taken back to the buffer visiting the file. Emacs renames the file and automatically renames the buffer as you intended. Also note that the buffer never changes. Your new position is exactly the same as the old.
That got me thinking. I use dired to rename files (with the r keybinding) and that does update the buffer names. r is bound to dired-do-rename, which calls dired-rename-file, which calls set-visited-file-name on buffers that are visiting the file.
Ah! It sounds like dired is the thing I should have been using. I always wrote it off as a “power tool” for when you need to do heavier rearranging of files and directories – multi-renames, whatever – but maybe that’s what all the experienced users are actually doing for renames?
This doesn’t address the larger point, but it does address the pain in your workflow. You can achieve the same in one logical step by using dired’s writable mode (Wdired) to do the rename in the dired buffer.
As to why rename-file doesn’t also rename the buffers that are visiting that file, I’m guessing it is because it is written in C, and the code is already hairy enough to complicate it further with additional responsibilities.
Especially as there are edge cases. Buffers don’t have to have the same name as the file they are visiting. For example, when you use clone-indirect-buffer-other-window, which some people use heavily in conjunction with narrowing. Should we rename all buffers visiting the file only where there is an exact match between the buffer and file name? What about when the file is part of the buffer name, e.g. foo.txt<3> or foo.txt/fun-name<2>? I think it is a reasonable choice to have rename-file do only one thing and let users implement a more dwim version by themselves.
I wrote a function to do that (“some script someone has already put together”). Once my work signs Emacs’s employer disclaimer of rights, I’m going to try to get this into Emacs proper.
This doesn’t address your actual point, but adding just in case it’s useful to someone. Pretty sure I stole this from Steve Yegge years ago
(defun rename-file-and-buffer (new-name)
  "Rename both file and buffer to NEW-NAME simultaneously."
  (interactive "sNew name: ")
  (let ((name (buffer-name))
        (filename (buffer-file-name)))
    (if (not filename)
        (message "Buffer '%s' is not visiting a file." name)
      (if (get-buffer new-name)
          (message "A buffer named '%s' already exists." new-name)
        (progn
          ;; Rename the file on disk (the trailing 1 asks for
          ;; confirmation before overwriting an existing file),
          ;; then point the buffer at the new name.
          (rename-file filename new-name 1)
          (rename-buffer new-name)
          (set-visited-file-name new-name)
          (set-buffer-modified-p nil))))))
Reading this made me realize that I can add a little function to my .emacs to do this (my strategy has tended to be to do it in a vterm session and then re-open the file, when I need to do this once in a blue moon).
I do think there should be “a thing” (though questions like how editing of remote files would work have to be answered). I do wonder how open GNU Emacs is to simple QoL improvements like that.
I’m wrapping up my time on a very frustrating team, in preparation for next week starting on what I believe to be a much more healthy team.
Also trying to finish some API endpoints for a music-library site I’m making, to replace the now defunct Google Music.
Let's compare for example
(defn lookup-users-i []
(query (get-db-conn)
'[:find [?user ...] :in $ :where [?user :user/name _]]))
to
(defn lookup-users-ii [db-conn]
(query db-conn
'[:find [?user ...] :in $ :where [?user :user/name _]]))
The first version is easier to invoke via the REPL because you offload any db connection setup logic to the get-db-conn function. You don't need to worry about building a connection and passing it in. On the flip side, at the lookup-users-i call sites you don't have arguments going in, which provides folks reading the code with less context regarding the function's behavior.
One could make two different arities of the function:
(defn lookup-users-iii
([] (lookup-users-iii (get-db-conn)))
([db-conn]
(query db-conn
'[:find [?user ...] :in $ :where [?user :user/name _]])))
This way, one gets the best of both worlds, at the cost of a little extra typing upfront.
My current pattern is a map of override functions. I can give them defaults using :or, and overwrite side-effecting functions, or define a function in my scope. This works for me so far, and enables tests easily (since part of a side-effecting function is the contract by which you call the side effects), and allows you to call a different function for REPL-side work.
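For illustration, here’s a minimal sketch of that override-map pattern, reusing the get-db-conn and query names from the snippets above (the map keys and the stub values in the comments are made up for the example):

(defn lookup-users-iv
  [{:keys [get-conn run-query]
    :or   {get-conn  get-db-conn
           run-query query}}]
  (run-query (get-conn)
             '[:find [?user ...] :in $ :where [?user :user/name _]]))

;; Production call sites pass an empty map and get the real
;; side-effecting defaults:
;;   (lookup-users-iv {})
;; Tests or REPL experiments override them without touching a database:
;;   (lookup-users-iv {:get-conn  (fn [] ::fake-conn)
;;                     :run-query (fn [_conn _q] [{:user/name "test-user"}])})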
It would be interesting if you could change specs. Say you wanted a laptop like the one you currently have, but brighter and with more RAM.
I find it disappointing that it doesn’t compare performance metrics. It’s easy enough to pick a benchmark that’s more or less accepted by the industry to measure CPU and GPU performance, and map those out as well.
Notebookcheck has a decent selection of benchmarks for components https://www.notebookcheck.net/Mobile-NVIDIA-GeForce-GTX-1060-Laptop-Benchmarks-and-Specs.169547.0.html and laptops https://www.notebookcheck.net/Alienware-x14-Review-The-world-s-thinnest-gaming-notebook-requires-compromises.608607.0.html
Good idea. I guess I would have to ask them if using their performance values on Product Chart is ok with them.
I second notebookcheck as a source for GPU benchmarks. Their methodology is pretty solid, they cover a lot of devices, and they’ve been collecting benchmarks for more than 15 years now.
They also cover other interesting metrics, such as display brightness and color accuracy, which are not always available on a spec sheet.
It would be hard to get the scores for a specific laptop, given that the TDP and heat management are often very different across models with the same specs.
For CPU I’d personally go with the Passmark CPU benchmarks, since they are a pretty good indicator for the type of CPU loads I run as a developer (they tend to map linearly to compile times for example).
For GPU, I think the problem is a little bit trickier, but one starting point could be selecting a number of common benchmarks from Notebookcheck.com, which does a lot of in depth laptop benchmarking and reviews, and average them out.
It wouldn’t be perfectly reliable, but it would be a lot better than nothing.
This works! Watch out for numeric overflow:
let a = 500000000000000000001
let b = 500000000000000000001
a = a + b  // 1e+21
b = a - b  // 500000000000000000000
a = a - b  // 500000000000000000000
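If you need exact integers at that size, the same add/subtract swap works losslessly with BigInt (note the n suffix on the literals); a quick sketch, using different values so the swap is visible:

let a = 500000000000000000001n
let b = 123n
a = a + b  // 500000000000000000124n
b = a - b  // 500000000000000000001n (the old a)
a = a - b  // 123n (the old b)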
The first thing I went to look at didn’t even work. I just wanted to find the a tag. I’m pretty sure the search used to work better.
I’m excited for this. I started looking into pijul recently, to see if I could use it as my main VCS. I think there are some differences in how the Pijul devs think about version control – at least, different from how I do.
They seem to not be too big on branches. I haven’t quite figured this one out yet; it seems pretty widely accepted in the programming world.
It seems to be very much written for people who understand the pijul internals. Doing a pijul diff shows metadata needed if…you are making a commit out of the diff?
I would think a “what’s changed in this repository” is a pretty base-level query. They seem to not think it’s especially important; the suggested replacement of pijul diff --short works but is not documented for this. For example, it shows information that is not in pijul diff – namely, commits not added to the repository yet.
I also want to see if I can replicate git’s staging area, or have a similarly safe, friendly workflow for interactive committing. It seems like most VCSs other than git don’t understand the use cases for the staging area.
They seem to not be too big on branches. I haven’t quite figured this one out yet; it seems pretty widely accepted in the programming world.
Curious about where you got that from, I even wrote the most painful thing ever, called Sanakirja, just so we could fork databases and have branches in Pijul.
Now, branches in Git are the only way to work somewhat asynchronously. Branches have multiple uses, but one of them is to keep your work separate and delay your merges. Pijul has a different mechanism for that, called patches. It is much simpler and more powerful, since you can cherry-pick and rebase patches even if you didn’t fork in the first place. In other words, you can “branch after the fact”, to speak in Git terms.
I would think a “what’s changed in this repository” is a pretty base-level query
So do the authors; they just think slightly differently from Git’s authors. pijul diff shows a draft of the patch you would get if you recorded. There is no real equivalent of that in Git, because a draft of a commit doesn’t make sense.
I also want to see if I can replicate git’s staging area
One thing you can do (which I find easier than the index) is record and edit your records in the text editor before saving.
(Thanks pmeunier for the interesting work!)
I found the discussion of branches in your post rather confusing. (I use git daily, and I used darcs heavily years ago and forgot large parts of it.) And in fact I’m also confused by the About channels mention in the README, and the Channels documentation in the manual. I’m trying to explain this here in case precise feedback can be useful to improve the documentation.
Your explanation, here and in the manual, focuses on differences in use-cases between Git branches and channels. This is confusing because (1) the question is rather “how can we do branches in Pijul?”, not “what are fine-grained differences between what you do and git branches?”, and because (2) the answer goes into technical subtleties or advanced ideas rather quickly. At the end I’m not sure I have understood the answer (I guess I would if I was very familiar with Pijul already), and it’s not an answer to the question I had.
My main use of branches in git is to give names to separate repository states that correspond to separate development activities that should occur independently of each other. In one branch I’m trying to fix bug X, in another branch I’m working on implementing feature Y. Most branches end up with commits/changes that are badly written / buggy / etc., that I’m refining over time, and I don’t want to have them in the index when working on something else.
So this is my question: “how do you work on separate stuff in Pijul?”. I think this should be the main focus of your documentation.
There are other use-cases for branches in git. Typically “I’m about to start a difficult rebase/merge/whatever, let me create a new branch foo-old to have a name for what I had before in case something blows up.”, and sometimes “I want to hand-pick only commits X, Y and Z of my current work, and be able to show them separately easily”. I agree that most of those uses are not necessary in patch-based systems, but I think you shouldn’t spend too much answer surface to point that out. (And I mostly forget about those uses of branches, because they are ugly, so I don’t generally think about them. So having them vaguely mentioned in the documentation was more distracting than helpful.)
To summarize:
The Pijul documentation writes: “However, channels are different from Git branches, and do not serve the same purpose.”. I think that if channels are useful for the “good use case” given above, then we should instead consider that they basically serve the same purpose as branches.
Note: the darcs documentation has a better explanation of “The darcs way of (non-)branching”, showing in an example-based way a situation where talking about patches is enough. I think it’s close to what you describe in your documentation, but it is much clearer because it is example-based. I still think that they spend too much focus on this less-common aspect of branches.
Finally, a question: with darcs, the obvious answer to “how to do branches?” is to simply use several clones of the same repository in different directories of my system, and push/pull between them. I assume that the same approach would work fine with pijul. What are the benefits of introducing channels as an extra concept? (I guess the data representation is more compact, and the DVCS state is not duplicated in each directory?) It would be nice if the documentation of channels would answer this question.
So this is my question: “how do you work on separate stuff in Pijul?”
This all depends on what you want to do. The reason for your confusion could be that Pijul doesn’t enforce a strict workflow: you can do whatever you want.
If you want to fork, then so be it! If you’re like me and don’t want to worry about channels/branches, you can as well: I do all my reviewing work on main, and often write drafts of patches together in the same channel, even on independent features. Then, I can still push and pull whatever I want, without having to push the drafts.
However, if you prefer to use a more “traditional” Git-like way of working, you can too. The differences between these two ways aren’t as huge as a Git user would think.
Edit: I do use channels sometimes, for example when I want to expose two different versions of the same project, for instance if that project depends on a fast-moving library and I want to have a version compatible with the different versions of that library.
But if you work on different drafts of patches in the same channel, do they apply simultaneously in your working copy? I want to work on patches, but then leave them on the side and not have them in the working copy.
Re. channels: why not just copy the repository to different directories?
They do apply to the same working copy, and you may need multiple channels if you don’t want to do that.
Re. channels: why not just copy the repository to different directories?
Forking a channel copies exactly zero bytes; copying a repository might copy gigabytes.
I use git and don’t typically branch that much. All a branch is is a sequence of patches, and since git lets me chop and slice patches in whatever way I want to, it seems like it’s usually overkill to create branches for things. Just make your changes and build the patch chains you want, when you want to, how you want to.
Then you might feel at home with Pijul. Pijul will give you the additional ability to push your patches independently from each other, potentially to different remote channels. Conversely, you’ll be able to cherry-pick for free (we simply call that “pulling” in Pijul).
They seem to not think it’s especially important; the suggested replacement of pijul diff --short works but is not documented for this.
A bit lower in the conversation the author agrees that a git status command would be useful, but they don’t have the time to work on it at the time of writing. My guess is that it is coming and the focus is on a working back-end at the moment.
I’m auditioning for improv house teams. I’ve been out of practice during the pandemic, so I don’t know how it will go. I’m just gonna try to have fun.
This seems to make intuitive sense to me. This is not a criticism of the article.
For every span of n numbers, exactly one number there has n as a divisor. If you know nothing more about these numbers, you would expect each number to have a 1/n chance of having n as a divisor.
So for such a span containing two twin primes, you would expect each number other than the twin primes to have a greater chance – that is, 1/(n-2) – of having n as a divisor. So it would have a 1/3 chance of having 5 as a divisor, when a randomly selected number has a 1/5 chance.
I would also be interested if the numbers directly before and after the twin primes also have more factors on average. They wouldn’t have the advantage of having 3 as a divisor.
This makes operator precedence a partial order rather than a total order. And Guy Steele mentioned that the Fortress language behaved like this:
https://www.youtube.com/watch?v=EZD3Scuv02g
mentioned briefly here: https://www.oilshell.org/blog/2016/11/01.html
fortress’s operator precedence system is one of my favourite pieces of language design, for the way in which it gets things so obviously right in terms of mathematical design that every other system feels clunky in comparison.
i would not have thought of the point myself before reading a description of fortress, but once I did I felt like of course if you have operator overloading you are essentially using the same symbols as different operators in different contexts, and that an “operator” is actually a combination of a symbol and the context in which its meaning is defined. it is therefore absurd to insist that precedence rules attach to the symbol rather than the operator, the way pretty much every other language that has precedence rules at all does.
I dunno, letting a + b * c resolve as (a + b) * c or a + (b * c) dependent on the types of the operands seems pretty confusing to me.
There’s plenty of unicode symbols that are suitable for infix operators, why not just use some of those if you want different precedences?
because that mirrors their usage in established mathematical or scientific domains. e.g. if you are coding up formulae in a system where * is a low precedence operator, it would be nice not to have to treat it as high precedence and use unnecessary parentheses just because the C world uses it for multiplication.
And Guy Steele mentioned that the Fortress language behaved like this:
It begins at this timestamp: https://youtu.be/EZD3Scuv02g?t=1884. He doesn’t go into detail, though.
Is there any reason for randomizing, or even rotating, the CA? I don’t understand the reasoning for it. It seems entirely unrelated to the “let’s encrypt can go down” scenario.
If you always use LetsEncrypt, that means you won’t ever see if your ssl.com setup is still working. So if and when LetsEncrypt stops working, that’s the first time in years you’ve tested your ssl.com configuration.
If you rotate between them, you verify that each setup is working all the time. If one setup has broken, the other one was tested recently, so it’s vastly more likely to still be working.
when LetsEncrypt stops working
That’s how I switched to ZeroSSL. I was tweaking my staging deployment relying on a lua/openresty ACME lib running in nginx, and Let’s Encrypt decided to rate limit me for something ridiculous like several cert request attempts. I’ve had zero issues with ZeroSSL (pun intended). Unpopular opinion - Let’s Encrypt sucks!
LE does have pretty firm limits; they’re very reasonable (imo) once you’ve got things up and running, but I’ve definitely been burned by “Oops I misconfigured this and it took a few tries to fix it” too. Can’t entirely be mad – being the default for ACME, no doubt they’d manage to get a hilariously high amount of misconfigured re-issue certs if they didn’t add a limit on there, but between hitting limits and ZeroSSL having a REALLY convenient dashboard, I’ve been moving over to ZeroSSL for a lot of my infra.
But he’s shuffling during the request phase. Wouldn’t it make more sense to request from multiple CAs directly and have more than one cert for each domain, instead of ending up with half your servers working?
I could see detecting specific errors and recovering from them, but this doesn’t seem to make sense to me :)
It’s probably not a good idea. If you have set up a CAA record for your domain for Let’s Encrypt and have DNSSEC configured then any client that bothers to check will reject any TLS certificate from a provider that isn’t Let’s Encrypt. An attacker would need to compromise the Let’s Encrypt infrastructure to be able to mount a valid MITM attack (without a CAA record, they need to compromise any CA, which is quite easy for some attackers, given how dubious some of the ‘trusted’ CAs are). If you add ssl.com, then now an attacker who can compromise either Let’s Encrypt or ssl.com can create a fake cert for your system. Your security is as strong as the weakest CA that is allowed to generate certificates for your domain.
If you’re using ssl.com as fall-back for when Let’s Encrypt is unavailable and generate the CAA records only for the cert that you use, then all an attacker who has compromised ssl.com has to do is drop packets from your system to Let’s Encrypt and now you’ll fall back to the one that they’ve compromised (if they compromised Let’s Encrypt then they don’t need to do anything). The fail-over case is actually really hard to get right: you probably need to set the CAA record to allow both, wait for the length of the old record’s TTL, and then update it to allow only the new one.
This matters a bit less if you’re setting up TLSA records as well (and your clients use DANE), but then the value of the CA is significantly reduced. Your DNS provider (which may be you, if you run your own authoritative server) and the owner of the SOA record for your domain are your trust anchors.
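For anyone who hasn’t set one up, a CAA record restricting issuance to Let’s Encrypt looks roughly like this in zone-file syntax (example.com is just a placeholder here):

example.com.  IN  CAA  0 issue "letsencrypt.org"

; Allowing a fallback CA as well means publishing a second issue record
; with that CA's documented issuer domain, e.g.:
; example.com.  IN  CAA  0 issue "<fallback-ca-issuer-domain>"

which is exactly the “weakest allowed CA” trade-off described above.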
I think so. A monoculture is bad in this case. LE never wanted to be the stewards of ACME itself, instead just pushing the idea of automated certificates forward. Easiest way to prove it works is to do it, so they did. Getting more parties involved means the standard outlives the organization, and sysadmins everywhere continue to reap the benefits.
Interesting! I have written a parser for a subset of Org syntax, and I agree with the statement that Org syntax is complicated. As one example, let’s look at ways to make task lists. First, a headline can be marked as TODO. But plain lists can’t. Plain lists can be checkboxes, but headlines can’t.
These two concepts are very similar, but are separate. They don’t work well together.
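For illustration, roughly what that asymmetry looks like in Org syntax (only the headline can take a TODO keyword, and only the plain-list items can take checkboxes):

* TODO A task written as a headline
- [ ] a task written as a checkbox item
- [X] a finished checkbox item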
Seems like there are some relatively large holes. It exempts “educational institution[s]”. Does Google count, because YouTube has tutorials?
Also, you get a perpetual license if the licensor does not “offer a fair commercial license”. What is a “fair license”? It’s later defined as “for a fair price, on reasonable terms”. What is reasonable to you might not be reasonable to me. If Apple objects to your price, does Tim Cook get to use your code forever? A “fair price” is later defined as, basically, “anything I can get someone to pay”. But how do you bootstrap this? If no one has paid for my BTPL code, there’s no fair price?
It’s also very unclear if this license permits modification of the code.
It exempts “educational institution[s]”. Does Google count, because YouTube has tutorials?
It’s important to remember that disputes are handled by human judges who try to apply a mix of legal analysis and common sense to such interpretations. The common meaning of “educational institution” is an organization that trains people, usually issuing credentials. A judge with common sense would probably treat Google as a massively-profitable advertising business. YouTube itself is also hitting people with more ads now to push Premium. They even imply they’re doing that in the Premium advertisements. That could easily show YouTube is both an advertising and paid-streaming business.
Cases have been lost or won over a comma. I wouldn’t assume that an untested license will be interpreted the way you prefer.
Moreover there are countries where the law says that Google just isn’t an “educational institution”, because a company has at most one activity sector and Google’s main activity isn’t about education. That’s how it works in my country: companies can’t easily cheat here.
I upvoted you, because you have good points, but this license was written by a lawyer. I would hope he knows what he’s doing.
Agreed. I had a lot of the same questions. It all seems a little too vague to be workable without more clarifications added for a given context. I suppose we need to consult a lawyer about this lawyer’s blog post about his license.
I think the point of this sort of thing is that those clarifications are basically hookable access points for discussing/settling things in court. The “fair price” part is something that you can argue in court and get precedent on.
But how do you bootstrap this?
My understanding is that the fair price clause does not preclude the licensor from selling the software at any other price. The idea is that you first sell to one big business who is paying some money despite the lack of a known fair price, and this becomes your fair price. Afterwards you can force everyone after that to pay the same price.
Obviously if the whole industry forms some kind of alliance to never buy from you, you’d be stuck. But in reality, if your software is minimally commercially viable, you’ll eventually find someone willing to pay. (And if you find the whole industry working against you, you’re in bigger trouble :)
The “market price” wording also provides a fallback option. If you are still struggling to sell the first license, you can sell it at an obviously low market rate, and that would automatically be considered a fair price. This doesn’t preclude you from raising the price for other licensees later.
But it seems realistic that the first commercial customer is one who seeks to upgrade from a non-commercial license. From that point, you have less than 32 days to determine a fair price. That seems difficult.
The “market price” in the world of FOSS seems hard to determine to be anything other than 0. If I’m making a server focused kernel under BTPL, is the “market price” $0, since that’s the price for the biggest players in the space? If I make a compiler toolchain, is the market price $0 since all the world-class compiler tool chains everyone uses are free? Or do we determine the “market price” based on the price of the commercial players in the space which nobody actually uses? And even if you use the price of the commercial players in the compilers market (to the degree that they exist), you’d end up with a price that’s seriously depressed thanks to the competition from free players, right? Is that price really “fair”?
You have great points.
Personally, I think businesses would be more inclined to pay if they could get the same FOSS license as everybody, but purchase liability protection on top of that because just about every FOSS license disclaims liability.
And honestly, if someone’s paying you for your software, shouldn’t you, in return, absorb some or all liability for that software? In my opinion, this is something the industry got entirely wrong.
I assume you intend that the project would be carrying insurance for this, but I (naively, as I know zilch about this kind of insurance…) wonder who would insure it credibly for any reasonable amount without direct access to whoever is using/deploying the software to understand what kind of risk/reward they’re playing with?
I’m having a hard time imagining any blanket liability protection arrangement being durable, so I imagine the liability would have to be extensively lawyered. At that point, I definitely wouldn’t be liking my odds, with corporate lawyers on both ends, of drawing anything but the shortest straw.
I think you are correct. I guess my vision about accepting liability also hinges on us being able to develop practices that we call “professional,” like professional engineers. Engineers don’t have liability for something they couldn’t have known, for example, but they are liable for harm caused by negligence.
That makes sense. I do feel like it would align incentives toward quality, testing, and humility.
I think I’m mostly just commenting from a software-is-weird-in-ways-that-seem-hard-to-insure perspective.
I can imagine an engineering firm being able to procure liability insurance for a bridge-building project, and I can imagine that relationship getting really weird the first time there’s a claim and the insurer realizes it’s for a copy of the bridge carrying hazmat trucks over the Museum of Irreplaceable Cultural Artifacts and Huggable Puppies. And they investigate a little and realize they’re on the hook for thousands of these bridges.
Or the first time they refuse to pay out because dependencies have been updated since the agreement.
Your comment was both very funny and very true.
Yes, I agree that software might be hard to insure. I just hope we can work toward a world where it makes sense to.
Thank you for this discussion; it helped me realize some blind spots I have.
The “market price” in the world of FOSS seems hard to determine to be anything other than 0.
The fact that you can use a piece of software for free does not make its market price 0. For example, you can use public roads for free, but there is a very real price on how expensive roads are. The “market price” of FOSS can be approximated by the price of similar non-FOSS software, or how much companies who hire full-time FOSS developers are paying them. Of course my interpretation is not the only possible interpretation, but the author being a lawyer, it is probable that they know that this is how the court will likely think too. Law is not pure philosophy.
There is another mechanism that will determine the market price. If your software is really popular, sooner or later a big company with deep pockets will be willing to pay, just to write off the legal liability. At that point they will compare the cost of your software with other similar commercial software and the cost of developing the solution in-house, and they will be willing to pay as long as your price is lower than the alternatives.
Again, all of this is based on a somewhat optimistic outlook of the software market and some faith in the big companies. In fact, the author pointed out that the entire non-commercial clause is based on the honor system. As long as there are enough honest customers you’d be good.
Another thing is the context to interpret this license in. I keep feeling that the author must still have projects like Babel (which he dealt with in Sell Babel 8) in mind. This license is not optimized for maximizing your profit, because many companies simply won’t pay. It is a compromise intended to let developers of very popular libraries reap a somewhat proportional profit - which will not be as high as selling commercial software, but definitely much more than voluntary donations.
Got fired on Friday, so I’m taking some time to relax. Maybe going to try to publish some blog posts, maybe going to try to work on my music backup site. If I’m feeling especially active, I’ll try to contribute some code to Emacs.
Sucks that you got fired, but I’m sure it’ll work out long-term!
Best of luck with your next ventures!
Obligatory “stay strong” <3