Threads for zck

  1. 1

    If you click the links for the response data, none of the formats return any valid responses, only nulls…

    Is this a bug with that part of the system, or did the entire survey lose all my data?

    1. 2

      On reddit, one of the people behind the survey commented:

      in everybody’s case so far who I’ve checked, their responses did make it into the database, but there is a glitch that seems to be occurring in some cases with the exporting of results.

      1. 2

        Ahh cool, thanks. I don’t care if the export doesn’t work as long as the data is safe. (Not because my data is important, but because this could have been happening to more people than just me.)

        Good to hear it’s fine! 👍

    1. 2

      Safety cover at the sailing club on Sunday for racing, will be the first time I’ve been at the club in a couple of months so looking forward to it. Might even make it across and sail in the next week or so (🙀).

      Garage conversion is continuing at a decent pace, I need to extend the existing lighting circuit into two and wire up some more sockets this weekend ready for the builder to continue next week. Given I have most of a day free and it’s not supposed to be raining, I should be able to get the lawn cut and the Z4 back on the road too. It’s practically criminal how long that’s been sat on the drive undriven.

      1. 2

        Safety cover at the sailing club on Sunday for racing, will be the first time I’ve been at the club in a couple of months so looking forward to it. Might even make it across and sail in the next week or so (🙀).

        I’m taking my first sailing class this weekend! Two days, gonna go for ASA 101 certification tomorrow. Today was fun, even if I got a little seasick for a bit.

      1. 2

        I’ve also written an RSS feed generator. I think there are some things I need to fix (e.g., relative links should be made absolute), but it’s working! Code here.

        1. 9

          The article praises the decision to expose buffers to the end-user, and I agree that it’s powerful to be able to use the same commands in basically every context… but it leads to a lot of confusion and awkwardness that I think needs to be addressed as well. My case in point: Several times a week, I need to rename a file I’m editing. Here’s what it looks like:

          1. M-x rename-file
          2. Choose the file I want to rename
          3. Type in the new name and confirm
          4. M-x rename-buffer
          5. Enter the new name

          In a different text editor, it looks like this:

          1. (some keybinding or sequence)
          2. Enter the new name

          It could look like that in Emacs as well, but if you go looking around you find that it’s not in the core command set, and you have to dig up some script someone has already put together and then shove that into your init.el. Only then can you have workflow #2.

          Emacs knows which buffers are file-backed, and could very well offer a command like rename-file-buffer. I don’t know why it doesn’t, in the Year of Our Lord 2022. Maybe some bikeshedding over what counts as a file-backed buffer, or naming the function, or some internals thing I don’t know about. But it probably has something to do with “everything’s a buffer” and not trying too hard to smooth over that.

          1. 7

            While I agree with you about frustrations on awkward interfaces surrounding buffers, I’m not sure that I follow your example. For your example, I’d normally do

            1. C-x C-w (or M-x write-file)
            2. Enter the new name

            It seems like it follows your desired path, accomplishes your objectives, and only uses built-in commands and default keybindings? Is there something that I’m missing?

            1. 3

              This was my first thought. I gave saturn the benefit of the doubt here because C-x C-w copies the file. It doesn’t rename it. But both dired-rename-file and write-file accomplish what you want: changing the name of both the file and the buffer.

              1. 5

                The abundance of options is not necessarily a good thing. It hampers discoverability. I realize that saying things like that arguably makes me a bad emacs user, but we do exist.

                1. 2

                  True, but this is a case where the functionality is at once obvious, easy to code, and absent from the core product. I figure that the reason this feature is absent is that core Emacs already has two better ways to get the workflow done. I don’t remember when I discovered the write-file method. I’d bet that it was early on in my use of Emacs, though, so we’re talking early ’90s. I came to dired mode pretty late but learned very quickly how powerful it was.

                2. 2

                  write-file is good to know about! I still have to then use delete-file, but it is shorter.

                  1. 2

                    I agree. I used write-file for years before I discovered dired mode. I have to admit that in my case, the extra file hanging around is usually not a problem, but I use Emacs as an IDE/code editor. Emacs is not a lifestyle for me.

                    1. 1

                      I always keep dired buffers of my working directories and clean up from there. Best dired key command might be ~ (flag all backup files).

                3. 4

                  That’s absolutely true and it’s interesting that they haven’t done this already.

                  How much do you want to bet that there aren’t a million and one “rename-buffer-and-file” functions floating around in an equal number of .emacs files? :)

                  For me, while I really truly do appreciate the power and wisdom of having my editor also be a capable programming environment unto itself, I think exactly this kind of lack of polish is going to continue to erode Emacs adoption over the long haul.

                  1. 7

                    Emacs not only knows when a buffer is attached to a file, it also does the right thing when it performs operations on the file from dired mode. Say I have a file I want to rename. I open the directory that it’s in with dired mode by pressing C-x C-f RET from the buffer visiting the file. I press R, then fill in the new filename at the prompt. After the rename is finished, I press F and I’m taken back to the buffer visiting the file. Emacs renames the file and automatically renames the buffer, as you intended. Also note that the buffer never changes: your new position is exactly the same as the old.

                  2. 4

                    That got me thinking. I use dired to rename files (with the r keybinding) and that does update the buffer names.

                    r is bound to dired-do-rename which calls dired-rename-file which calls set-visited-file-name on buffers that are visiting the file.

                    1. 1

                      Ah! It sounds like dired is the thing I should have been using. I always wrote it off as a “power tool” for when you need to do heavier rearranging of files and directories – multi-renames, whatever – but maybe that’s what all the experienced users are actually doing for renames?

                      1. 1

                        dired is how I browse the filesystem to see what’s there.

                    2. 2

                      This doesn’t address the larger point, but it does address the pain in your workflow. You can achieve the same in one logical step by using dired’s write mode (Wdired) to do the rename in the dired buffer.

                      1. C-x C-j (opens dired in the directory of the current buffer)
                      2. C-x C-q (makes the dired buffer editable)
                      3. Edit the filename in question inside the dired buffer
                      4. C-c C-c (commits the rename)

                      As to why rename-file doesn’t also rename the buffers that are visiting that file, I’m guessing it’s because it is written in C, and the code is already hairy enough without complicating it further with additional responsibilities.

                      Especially as there are edge cases. Buffers don’t have to have the same name as the file they are visiting. For example when you use clone-indirect-buffer-other-window, which some people use heavily in conjunction with narrowing. Should we rename all buffers visiting the file only where there is an exact match between the buffer and file name? What about when the file is part of the buffer name, e.g. foo.txt<3> or foo.txt/fun-name<2>? I think it is a reasonable choice to have rename-file do only one thing and let users implement a more dwim version by themselves.

                      1. 2

                        I wrote a function to do that (“some script someone has already put together”). Once my work signs Emacs’s employer disclaimer of rights, I’m going to try to get this into Emacs proper.

                        1. 1

                          This doesn’t address your actual point, but adding just in case it’s useful to someone. Pretty sure I stole this from Steve Yegge years ago

                          (defun rename-file-and-buffer (new-name)
                            "Rename both the current buffer and the file it is visiting to NEW-NAME."
                            (interactive "sNew name: ")
                            (let ((name (buffer-name))
                                  (filename (buffer-file-name)))
                              (if (not filename)
                                  (message "Buffer '%s' is not visiting a file." name)
                                (if (get-buffer new-name)
                                    (message "A buffer named '%s' already exists." new-name)
                                  (progn
                                    ;; Rename the file on disk; the 1 means confirm
                                    ;; before overwriting an existing file.
                                    (rename-file filename new-name 1)
                                    (rename-buffer new-name)
                                    ;; Point the buffer at the renamed file and mark it clean.
                                    (set-visited-file-name new-name)
                                    (set-buffer-modified-p nil))))))
                          
                          1. 1

                            Reading this made me realize that I can add a little function to my .emacs to do this (my strategy has tended to be to do it in a vterm session and then re-open the file, since I only need to do this once in a blue moon).

                            I do think there should be “a thing” (though questions like how it handles editing of remote files would have to be answered). I do wonder how open GNU Emacs is to simple QoL improvements like that.

                          1. 1

                            I’m wrapping up my time on a very frustrating team, in preparation for next week starting on what I believe to be a much more healthy team.

                            Also trying to finish some API endpoints for a music-library site I’m making, to replace the now defunct Google Music.

                            1. 3
                              Let's compare for example
                              (defn lookup-users-i []
                                (query (get-db-conn)
                                       '[:find [?user ...] :in $ :where [?user :user/name _]]))
                              
                              to
                              (defn lookup-users-ii [db-conn]
                                (query db-conn
                                       '[:find [?user ...] :in $ :where [?user :user/name _]]))
                              The first version is easier to invoke via the REPL because you offload any db connection setup logic to the get-db-conn function. You don't need to worry about building a connection and passing it in. On the flip side, at the lookup-users-i call sites you don't have arguments going in, which gives folks reading the code less context about the function's behavior.
                              

                              One could make two different arities of the function:

                              (defn lookup-users-iii
                                ([] (lookup-users-iii (get-db-conn)))
                                ([db-conn]
                                 (query db-conn
                                        '[:find [?user ...] :in $ :where [?user :user/name _]])))
                              

                              This way, one gets the best of both worlds, at the cost of a little extra typing upfront.

                              1. 2

                                My current pattern is to pass a map of override functions. I can give them defaults using :or, and override side-effecting functions, or define a function in my scope. This works for me so far, and it makes tests easy (since part of a side-effecting function’s contract is how you call the side effects) and lets you call a different function for REPL-side work.

                              1. 3

                                It would be interesting if you could change specs. Say you wanted a laptop like the one you currently have, but brighter and with more RAM.

                                1. 2

                                  I find it disappointing that it doesn’t compare performance metrics. It’s easy enough to pick a benchmark that’s more or less accepted by the industry to measure CPU and GPU performance, and map those out as well.

                                  1. 2

                                    Which benchmark would you pick and how would you get that data for specific laptop models?

                                      1. 1

                                        Good idea. I guess I would have to ask them if using their performance values on Product Chart is ok with them.

                                        1. 2

                                          I second notebookcheck as a source for GPU benchmarks. Their methodology is pretty solid, they cover a lot of devices, and they’ve been collecting benchmarks for more than 15 years now.

                                          They also cover other interesting metrics, such as display brightness and color accuracy, which are not always available on a spec sheet.

                                      2. 1

                                        It would be hard to get the scores for a specific laptop, given that the TDP and heat management are often very different across models with the same specs.

                                        For CPU I’d personally go with the Passmark CPU benchmarks, since they are a pretty good indicator for the type of CPU loads I run as a developer (they tend to map linearly to compile times for example).

                                        For GPU, I think the problem is a little bit trickier, but one starting point could be selecting a number of common benchmarks from Notebookcheck.com, which does a lot of in depth laptop benchmarking and reviews, and average them out.

                                        It wouldn’t be perfectly reliable, but it would be a lot better than nothing.

                                  1. 2

                                    This works! Watch out for numeric overflow:

                                    let a = 500000000000000000001
                                    undefined
                                    let b = 500000000000000000001
                                    undefined
                                    a = a + b
                                    1e+21
                                    b = a - b
                                    500000000000000000000
                                    a = a - b
                                    500000000000000000000
                                    
                                    1. 3

                                      in other words, it doesn’t work ;-)

                                      1. 8

                                        Specifically it doesn’t work with number systems that don’t obey group axioms, such as IEEE754.
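                                        To make that concrete, here’s a quick sketch of my own (not from the thread) contrasting exact integers with IEEE754 doubles. Python integers are arbitrary precision, so the add-subtract swap only breaks once the values are forced into floats:

```python
# Add-subtract swap: exact on arbitrary-precision integers,
# silently lossy on IEEE754 doubles.
a0 = 500000000000000000001
b0 = a0 + 1  # make the values distinct so the swap is visible

# Integer case: works exactly.
a, b = a0, b0
a = a + b
b = a - b
a = a - b
assert (a, b) == (b0, a0)

# Float case: a 64-bit double carries only ~15-16 significant decimal
# digits, so 500000000000000000001 rounds to 5e20 before the swap starts.
a, b = float(a0), float(b0)
assert a == 5e20 and b == 5e20  # the low-order digits are already gone
a = a + b   # 1e21
b = a - b   # 5e20
a = a - b   # 5e20
assert a == b == 5e20           # both "swapped" values collapse to 5e20
```

                                        (This is the same failure the transcript above shows in JavaScript, where every number is a double.)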

                                    1. 1

                                      The first thing I went to look at didn’t even work. I just wanted to find the a tag. I’m pretty sure the search used to work better.

                                      1. 2

                                        I’m excited for this. I started looking into pijul recently, to see if I could use it as my main VCS. I think there’s some differences in how the Pijul devs think about version control – at least, different from how I do.

                                        They seem to not be too big on branches. I haven’t quite figured this one out yet; it seems pretty widely accepted in the programming world.

                                        It seems to be very much written for people who understand the Pijul internals. Doing a pijul diff shows metadata needed if…you are making a commit out of the diff?

                                        I would think a “what’s changed in this repository” is a pretty base-level query. They seem to not think it’s especially important; the suggested replacement of pijul diff --short works but is not documented for this. For example, it shows information that is not in pijul diff – namely, commits not added to the repository yet.

                                        I also want to see if I can replicate git’s staging area, or have a similarly safe, friendly workflow for interactive committing. It seems like most VCSs other than git don’t understand the use cases for the staging area.

                                        1. 3

                                          They seem to not be too big on branches. I haven’t quite figured this one out yet; it seems pretty widely accepted in the programming world.

                                          Curious about where you got that from, I even wrote the most painful thing ever, called Sanakirja, just so we could fork databases and have branches in Pijul.

                                          Now, branches in Git are the only way to work somewhat asynchronously. Branches have multiple uses, but one of them is to keep your work separate and delay your merges. Pijul has a different mechanism for that, called patches. It is much simpler and more powerful, since you can cherry-pick and rebase patches even if you didn’t fork in the first place. In other words, you can “branch after the fact”, to speak in Git terms.

                                          I would think a “what’s changed in this repository” is a pretty base-level query

                                          So do the authors, they just think slightly differently from Git’s authors. pijul diff shows a draft of the patch you would get if you recorded. There is no real equivalent of that in Git, because a draft of a commit doesn’t make sense.

                                          I also want to see if I can replicate git’s staging area

                                          One thing you can do (which I find easier than the index) is record and edit your records in the text editor before saving.

                                          1. 7

                                            (Thanks pmeunier for the interesting work!)

                                            I found the discussion of branches in your post rather confusing. (I use git daily, and I used darcs heavily years ago and forgot large parts of it.) And in fact I’m also confused by the About channels mention in the README, and the Channels documentation in the manual. I’m trying to explain this here in case precise feedback can be useful to improve the documentation.

                                            Your explanation, here and in the manual, focuses on differences in use-cases between Git branches and channels. This is confusing because (1) the question is rather “how can we do branches in Pijul?”, not “what are fine-grained differences between what you do and git branches?”, and because (2) the answer goes into technical subtleties or advanced ideas rather quickly. At the end I’m not sure I have understood the answer (I guess I would if I was very familiar with Pijul already), and it’s not an answer to the question I had.

                                            My main use of branches in git is to give names to separate repository states that correspond to separate development activities that should occur independently of each other. In one branch I’m trying to fix bug X, in another branch I’m working on implementing feature Y. Most branches end up with commits/changes that are badly written / buggy / etc., that I’m refining over time, and I don’t want to have them in the way when working on something else.

                                            So this is my question: “how do you work on separate stuff in Pijul?”. I think this should be the main focus of your documentation.

                                            There are other use-cases for branches in git. Typically “I’m about to start a difficult rebase/merge/whatever, let me create a new branch foo-old to have a name for what I had before in case something blows up.”, and sometimes “I want to hand-pick only commits X, Y and Z of my current work, and be able to show them separately easily”. I agree that most of those uses are not necessary in patch-based systems, but I think you shouldn’t spend too much answer surface to point that out. (And I mostly forget about those uses of branches, because they are ugly so I don’t generally think about them. So having them vaguely mentioned in the documentation was more distracting than helpful.)

                                            To summarize:

                                            • There is a “good” use-case for branches, namely keeping track of separate development activities on the same repository that should remain independent, and some “bad” use-cases, namely all the rest.
                                            • I think that when people ask “how do we do branches?”, they have the good use-case in mind, so please start by answering about this clearly.
                                            • It’s okay to mention that the bad use-cases are mostly not needed in Pijul anymore, but I think to most people they are an afterthought so I wouldn’t focus on that.

                                            The Pijul documentation writes: “However, channels are different from Git branches, and do not serve the same purpose.” I think that if channels are useful for the “good use-case” given above, then we should instead consider that they basically serve the same purpose as branches.

                                            Note: the darcs documentation has a better explanation of “The darcs way of (non-)branching”, showing in an example-based way a situation where talking about patches is enough. I think it’s close to what you describe in your documentation, but it is much clearer because it is example-based. I still think that they spend too much focus on this less-common aspect of branches.

                                            Finally a question: with darcs, the obvious answer to “how to do branches?” is to simply use several clones of the same repository in different directories of my system, and push/pull between them. I assume that the same approach would work fine with pijul. What are the benefits of introducing channels as an extra concept? (I guess the data representation is more compact, and the DVCS state is not duplicated in each directory?) It would be nice if the documentation of channels answered this question.

                                            1. 2

                                              So this is my question: “how do you work on separate stuff in Pijul?”

                                              This all depends on what you want to do. The reason for your confusion could be that Pijul doesn’t enforce a strict workflow: you can do whatever you want.

                                              If you want to fork, then so be it! If you’re like me and don’t want to worry about channels/branches, you can as well: I do all my reviewing work on main, and often write drafts of patches together in the same channel, even on independent features. Then, I can still push and pull whatever I want, without having to push the drafts.

                                              However, if you prefer a more “traditional” Git-like way of working, you can use that too. The difference between these two ways isn’t as huge as a Git user would think.

                                              Edit: I do use channels sometimes, for example when I want to expose two different versions of the same project: if that project depends on a fast-moving library, I keep one version compatible with each of the different versions of that library.

                                              1. 2

                                                But if you work on different drafts of patches in the same channel, do they apply simultaneously in your working copy? I want to work on patches, but then leave them on the side and not have them in the working copy.

                                                Re. channels: why not just copy the repository to different directories?

                                                1. 1

                                                  They do apply to the same working copy, and you may need multiple channels if you don’t want to do that.

                                                  Re. channels: why not just copy the repository to different directories?

                                                  A channel fork copies exactly 0 bytes; copying a repository might copy gigabytes.

                                            2. 1

                                              I use git and don’t typically branch that much. All a branch is is a sequence of patches, and since git lets me chop and slice patches in whatever way I want to, it usually seems like overkill to create branches for things. Just make your changes and build the patch chains you want, when you want to, how you want to.

                                              1. 1

                                                Then you might feel at home with Pijul. Pijul will give you the additional ability to push your patches independently from each other, potentially to different remote channels. Conversely, you’ll be able to cherry-pick for free (we simply call that “pulling” in Pijul).

                                            3. 1

                                              They seem to not think it’s especially important; the suggested replacement of pijul diff --short works but is not documented for this.

                                              A bit lower in the conversation the author agrees that a git status command would be useful but they don’t have the time to work on it at the time of writing. My guess is that it is coming and the focus is on a working back-end at the moment.

                                            1. 2

                                              I’m auditioning for improv house teams. I’ve been out of practice during the pandemic, so I don’t know how it will go. I’m just gonna try to have fun.

                                              1. 2

                                                Good luck!

                                              1. 5

                                                This seems to make intuitive sense to me. This is not a criticism of the article.

                                                For every span of n numbers, exactly one number there has n as a divisor. If you know nothing more about these numbers, you would expect each number to have a 1/n chance of having n as a divisor.

                                                So for such a span containing two twin primes, you would expect each number other than the twin primes to have a greater chance – that is, 1/(n-2) – of having n as a divisor. So it would have a 1/3 chance of having 5 as a divisor, when a randomly selected number has a 1/5 chance.

                                                I would also be interested if the numbers directly before and after the twin primes also have more factors on average. They wouldn’t have the advantage of having 3 as a divisor.
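                                                This heuristic is easy to poke at numerically. A rough sketch of my own (not from the article): compare the average divisor count of numbers sandwiched between twin primes against the average over all numbers in the same range.

```python
def divisor_count(n):
    """Count the divisors of n by trial division up to sqrt(n)."""
    count, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            count += 1 if d * d == n else 2
        d += 1
    return count

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Numbers sandwiched between twin primes: 4, 6, 12, 18, 30, ...
mids = [p + 1 for p in range(3, 10000) if is_prime(p) and is_prime(p + 2)]
avg_mid = sum(divisor_count(n) for n in mids) / len(mids)
avg_all = sum(divisor_count(n) for n in range(3, 10000)) / (10000 - 3)
assert avg_mid > avg_all  # the sandwiched numbers do average more divisors
```

                                                (Past p = 3 every such sandwiched number is divisible by both 2 and 3, which is most of where the surplus comes from.)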

                                                1. 11

                                                  This makes operator precedence a partial order rather than a total order. And Guy Steele mentioned that the Fortress language behaved like this:

                                                  https://www.youtube.com/watch?v=EZD3Scuv02g

                                                  mentioned briefly here: https://www.oilshell.org/blog/2016/11/01.html
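                                                  A toy sketch of the idea (my own illustration, not Fortress’s actual rules): store precedence as a relation over operator pairs, and treat any pair the relation doesn’t cover as a parse error that forces explicit parentheses.

```python
# Precedence as a partial order: only explicitly related operator
# pairs compare; unrelated pairs must be parenthesized by the writer.
TIGHTER = {("*", "+"), ("*", "-"), ("/", "+"), ("/", "-")}

def binds_tighter(op1, op2):
    """True/False when the pair is ordered, None when it is ambiguous."""
    if (op1, op2) in TIGHTER:
        return True
    if (op2, op1) in TIGHTER:
        return False
    return None  # no defined relation: demand parentheses

assert binds_tighter("*", "+") is True    # a + b * c  parses as  a + (b * c)
assert binds_tighter("+", "*") is False
assert binds_tighter("+", "^") is None    # a + b ^ c  is rejected without parens
```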

                                                  1. 6

                                                    Fortress’s operator precedence system is one of my favourite pieces of language design, for the way in which it gets things so obviously right in terms of mathematical design that every other system feels clunky in comparison.

                                                    I would not have thought of the point myself before reading a description of Fortress, but once I did I felt like of course: if you have operator overloading, you are essentially using the same symbols as different operators in different contexts, and an “operator” is actually a combination of a symbol and the context in which its meaning is defined. It is therefore absurd to insist that precedence rules attach to the symbol rather than the operator, the way pretty much every other language that has precedence rules at all does.

                                                    1. 5

                                                      I dunno, letting a + b * c resolve as (a + b) * c or a + (b * c) dependent on the types of the operands seems pretty confusing to me.

                                                      There’s plenty of unicode symbols that are suitable for infix operators, why not just use some of those if you want different precedences?

                                                      1. 2

                                                        because that mirrors their usage in established mathematical or scientific domains. e.g. if you are coding up formulae in a system where * is a low precedence operator, it would be nice not to have to treat it as high precedence and use unnecessary parentheses just because the C world uses it for multiplication.

                                                    2. 2

                                                      And Guy Steele mentioned that the Fortress language behaved like this:

                                                      https://www.youtube.com/watch?v=EZD3Scuv02g

                                                      It begins at this timestamp: https://youtu.be/EZD3Scuv02g?t=1884. He doesn’t go into detail, though.

                                                    1. 2

                                                      Is there any reason for randomizing, or even rotating, the CA? I don’t understand the reasoning for it. It seems entirely unrelated to the “let’s encrypt can go down” scenario.

                                                      1. 12

                                                        If you always use LetsEncrypt, that means you won’t ever see if your ssl.com setup is still working. So if and when LetsEncrypt stops working, that’s the first time in years you’ve tested your ssl.com configuration.

                                                        If you rotate between them, you verify that each setup is working all the time. If one setup has broken, the other one was tested recently, so it’s vastly more likely to still be working.

                                                        1. 2

                                                          when LetsEncrypt stops working

                                                          That’s how I switched to ZeroSSL. I was tweaking my staging deployment relying on a lua/openresty ACME lib running in nginx, and Let’s Encrypt decided to rate limit me for something ridiculous like several cert request attempts. I’ve had zero issues with ZeroSSL (pun intended). Unpopular opinion - Let’s Encrypt sucks!

                                                          1. 5

                                                            LE does have pretty firm limits; they’re very reasonable (imo) once you’ve got things up and running, but I’ve definitely been burned by “Oops I misconfigured this and it took a few tries to fix it” too. Can’t entirely be mad – being the default for ACME, no doubt they’d manage to get a hilariously high amount of misconfigured re-issue certs if they didn’t add a limit on there, but between hitting limits and ZeroSSL having a REALLY convenient dashboard, I’ve been moving over to ZeroSSL for a lot of my infra.

                                                          2. 2

                                                            But he’s shuffling during the request phase. Wouldn’t it make more sense to request from multiple CAs directly and keep more than one cert per domain, instead of ending up with half your servers working?

                                                            I could see detecting specific errors and recovering from them, but this doesn’t seem to make sense to me :)

                                                          3. 6

                                                            It’s probably not a good idea. If you have set up a CAA record for your domain for Let’s Encrypt and have DNSSEC configured then any client that bothers to check will reject any TLS certificate from a provider that isn’t Let’s Encrypt. An attacker would need to compromise the Let’s Encrypt infrastructure to be able to mount a valid MITM attack (without a CAA record, they need to compromise any CA, which is quite easy for some attackers, given how dubious some of the ‘trusted’ CAs are). If you add ssl.com, then now an attacker who can compromise either Let’s Encrypt or ssl.com can create a fake cert for your system. Your security is as strong as the weakest CA that is allowed to generate certificates for your domain.

                                                            If you’re using ssl.com as fall-back for when Let’s Encrypt is unavailable and generate the CAA records only for the cert that you use, then all an attacker who has compromised ssl.com has to do is drop packets from your system to Let’s Encrypt and now you’ll fall back to the one that they’ve compromised (if they compromised Let’s Encrypt then they don’t need to do anything). The fail-over case is actually really hard to get right: you probably need to set the CAA record to allow both, wait for the length of the old record’s TTL, and then update it to allow only the new one.

                                                            This matters a bit less if you’re setting up TLSA records as well (and your clients use DANE), but then the value of the CA is significantly reduced. Your DNS provider (which may be you, if you run your own authoritative server) and the owner of the SOA record for your domain are your trust anchors.
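For reference, CAA records in zone-file syntax look roughly like this (example.com is a placeholder; see RFC 8659 for the exact semantics):

```
; allow only Let's Encrypt to issue certificates for this domain
example.com.  IN  CAA  0 issue "letsencrypt.org"

; a fail-over setup would need a second CA listed too, which widens
; the set of CAs an attacker could usefully compromise:
; example.com.  IN  CAA  0 issue "ssl.com"
```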

                                                            1. 3

                                                              There isn’t any reason. The author says they did it only because they can.

                                                              1. 2

                                                                I think so. A monoculture is bad in this case. LE never wanted to be the stewards of ACME itself, instead just pushing the idea of automated certificates forward. Easiest way to prove it works is to do it, so they did. Getting more parties involved means the standard outlives the organization, and sysadmins everywhere continue to reap the benefits.

                                                                1. 2

                                                                  To collect expiration notification emails from all the CAs! :D

                                                                  1. 2

                                                                    The article says “Just because I can and just because I’m interested”.

                                                                  1. 4

                                                                    Interesting! I have written a parser for a subset of Org syntax, and I agree with the statement that Org syntax is complicated. As one example, let’s look at ways to make task lists. First, a headline can be marked as TODO. But plain lists can’t. Plain lists can be checkboxes, but headlines can’t.

                                                                    These two concepts are very similar, but are separate. They don’t work well together.

                                                                    1. 13

                                                                      Seems like relatively large holes. It exempts “educational institution[s]”. Does Google count, because YouTube has tutorials?

                                                                      Also, you get a perpetual license if the licensor does not “offer a fair commercial license”. What is a “fair license”? It’s later defined as “for a fair price, on reasonable terms”. What is reasonable to you might not be reasonable to me. If Apple objects to your price, does Tim Cook get to use your code forever? A “fair price” is later defined as, basically, “anything I can get someone to pay”. But how do you bootstrap this? If no one has paid for my BTPL code, there’s no fair price?

                                                                      It’s also very unclear if this license permits modification of the code.

                                                                      1. 3

                                                                        It exempts “educational institution[s]”. Does Google count, because YouTube has tutorials?

                                                                        It’s important to remember disputes are handled by human judges that try to apply a mix of legal analysis and common sense to such interpretations. The common use of educational institution is an organization that trains people, usually issuing credentials. A judge with common sense would probably treat Google as a massively-profitable, advertising business. Youtube itself is also hitting people with more ads now to push Premium. They even imply they’re doing that in the Premium advertisements. That could easily show Youtube is both an advertising and paid-streaming business.

                                                                        1. 5

                                                                          Cases have been lost or won over a comma. I wouldn’t assume that an untested license will be interpreted the way you prefer.

                                                                          1. 2

                                                                            Moreover there are countries where the law says that Google just isn’t an “educational institution”, because a company has at most one activity sector and Google’s main activity isn’t about education. That’s how it works in my country: companies can’t easily cheat here.

                                                                          2. 2

                                                                            I upvoted you, because you have good points, but this license was written by a lawyer. I would hope he knows what he’s doing.

                                                                            1. 3

                                                                              Agreed. I had a lot of the same questions. It all seems a little too vague to be workable without more clarifications added for a given context. I suppose we need to consult a lawyer about this lawyer’s blog post about his license.

                                                                              1. 2

                                                                                I think the point of this sort of thing is that those clarifications are basically hookable access points for discussing/settling things in court. The “fair price” part is something that you can argue in court and get precedent on.

                                                                            2. 1

                                                                              But how do you bootstrap this?

                                                                              My understanding is that the fair price clause does not preclude the licensor from selling the software at any other price. The idea is that you first sell to one big business who is paying some money despite the lack of a known fair price, and this becomes your fair price. Afterwards you can require everyone else to pay the same price.

                                                                              Obviously if the whole industry forms some kind of alliance to never buy from you, you’d be stuck. But in reality, if your software is minimally commercially viable, you’ll eventually find someone willing to pay. (And if you find the whole industry working against you, you’re in bigger trouble :)

                                                                              The “market price” wording also provides a fallback option. If you are still struggling to sell the first license, you can sell it at an obviously low market rate, and that would automatically be considered a fair price. This doesn’t preclude you from raising the price for other licensees later.

                                                                              1. 3

                                                                                But it seems realistic that the first commercial customer is one who seeks to upgrade from a non-commercial license. From that point, you have less than 32 days to determine a fair price. That seems difficult.

                                                                                The “market price” in the world of FOSS seems hard to determine to be anything other than 0. If I’m making a server focused kernel under BTPL, is the “market price” $0, since that’s the price for the biggest players in the space? If I make a compiler toolchain, is the market price $0 since all the world-class compiler tool chains everyone uses are free? Or do we determine the “market price” based on the price of the commercial players in the space which nobody actually uses? And even if you use the price of the commercial players in the compilers market (to the degree that they exist), you’d end up with a price that’s seriously depressed thanks to the competition from free players, right? Is that price really “fair”?

                                                                                1. 2

                                                                                  You have great points.

                                                                                  Personally, I think businesses would be more inclined to pay if they could get the same FOSS license as everybody, but purchase liability protection on top of that because just about every FOSS license disclaims liability.

                                                                                  And honestly, if someone’s paying you for your software, shouldn’t you, in return absorb some or all liability for that software? In my opinion, this is something the industry got entirely wrong.

                                                                                  1. 2

                                                                                    I assume you intend that the project would be carrying insurance for this, but I (naively, as I know zilch about this kind of insurance…) wonder who would insure it credibly for any reasonable amount without direct access to whoever is using/deploying the software to understand what kind of risk/reward they’re playing with?

                                                                                    I’m having a hard time imagining any blanket liability protection arrangement being durable, so I imagine the liability would have to be extensively lawyered. At that point, I definitely wouldn’t be liking my odds, with corporate lawyers on both ends, of drawing anything but the shortest straw.

                                                                                    1. 2

                                                                                      I think you are correct. I guess my vision about accepting liability also hinges on us being able to develop practices that we call “professional,” like professional engineers. Engineers don’t have liability for something they couldn’t have known, for example, but they are liable for harm caused by negligence.

                                                                                      1. 2

                                                                                        That makes sense. I do feel like it would align incentives toward quality, testing, and humility.

                                                                                        I think I’m mostly just commenting from a software-is-weird-in-ways-that-seem-hard-to-insure perspective.

                                                                                        I can imagine an engineering firm being able to procure liability insurance for a bridge-building project, and I can imagine that relationship getting really weird the first time there’s a claim and the insurer realizes it’s for a copy of the bridge carrying hazmat trucks over the Museum of Irreplaceable Cultural Artifacts and Huggable Puppies. And they investigate a little and realize they’re on the hook for thousands of these bridges.

                                                                                        Or the first time they refuse to pay out because dependencies have been updated since the agreement.

                                                                                        1. 2

                                                                                          Your comment was both very funny and very true.

                                                                                          Yes, I agree that software might be hard to insure. I just hope we can work toward a world where it makes sense to.

                                                                                          Thank you for this discussion; it helped me realize some blind spots I have.

                                                                                  2. 2

                                                                                    The “market price” in the world of FOSS seems hard to determine to be anything other than 0.

                                                                                    The fact that you can use a piece of software for free does not make its market price 0. For example, you can use public roads for free, but there is a very real price on how expensive roads are. The “market price” of FOSS can be approximated by the price of similar non-FOSS software, or how much companies who hire full-time FOSS developers are paying them. Of course my interpretation is not the only possible interpretation, but the author being a lawyer, it is probable that they know that this is how the court will likely think too. Law is not pure philosophy.

                                                                                    There is another mechanism that will determine the market price. If your software is really popular, sooner or later a big company with deep pockets will be willing to pay, just to write off the legal liability. At that point they will compare the cost of your software with other similar commercial software and the cost of developing the solution in-house, and they will be willing to pay as long as your price is lower than the alternatives.

                                                                                    Again, all of this is based on a somewhat optimistic outlook of the software market and some faith in the big companies. In fact, the author pointed out that the entire non-commercial clause is based on the honor system. As long as there are enough honest customers you’d be good.

                                                                                    Another thing is the context to interpret this license in. I keep feeling that the author must still have projects like Babel (which he dealt with in Sell Babel 8) in mind. This license is not optimized for maximizing your profit, because many companies simply won’t pay. It is a compromise intended to let developers of very popular libraries reap a somewhat proportional profit - which will not be as high as selling commercial software, but definitely much more than voluntary donations.

                                                                              1. 5

                                                                                Way to nerd-snipe me.

                                                                                Using regex:

                                                                                (map (juxt first count)
                                                                                     (str/split "aaaabbbcca" #"(?<=(.))(?!\1)"))
                                                                                

                                                                                The regex is a little weird – basically, we want to split the string on any boundary where the characters before and after the boundary are different. So “aaaabbbcca” becomes four strings: [“aaaa” “bbb” “cc” “a”]. It does this by using a lookbehind for any character – the (?<=(.)) – and then a negative lookahead for not that character – (?!\1). We don’t want to include either of those characters in the match, so the characters are returned as part of the new strings.

                                                                                The rest is relatively simple: juxt returns a function that takes a single argument, then returns a vector with the results of calling the two functions separately on that argument.

                                                                                I don’t know that I’d write this in production code, but it is kind of fun.
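For comparison, the same run-length idea can be sketched in Python with itertools.groupby (my own sketch, not from the comment above):

```python
from itertools import groupby

def run_lengths(s: str):
    # groupby collects consecutive equal characters into runs;
    # each run becomes a (character, length) pair.
    return [(ch, len(list(run))) for ch, run in groupby(s)]

print(run_lengths("aaaabbbcca"))  # [('a', 4), ('b', 3), ('c', 2), ('a', 1)]
```

No regex needed here, since groupby does the boundary detection for you.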

                                                                                1. 3

                                                                                  Kind of the same idea but in Raku:

                                                                                  "aaaabbbcca".subst( /(.) $0*/ , {"({$0},{$/.chars})" }, :g )
                                                                                  

                                                                                  The regex for the substitution is /(.) $0*/, where $0 refers back to the first character captured by (.), repeated zero or more times - so the whole pattern matches one run of identical characters. The second argument is a block that builds a string in the format (letter, number of times in a row): $0 is the captured character, and $/ contains the match object, whose chars method returns the number of graphemes in the match. The last argument, :g, applies the substitution globally; without it the method would stop after the first run of ‘a’s.

                                                                                  my $str = "aaaabbbcca";
                                                                                  $str ~~ s:g/(.) $0+/{ "({$0}, {$/.chars})" }/
                                                                                  

                                                                                  is the same, with in-place substitution using the s/// construct instead of the subst method.

                                                                                1. 2

                                                                                  Complete the pattern:

                                                                                  3 2 1 0 -1 -2 -3

                                                                                  Odd Even Odd ??? Odd Even Odd

                                                                                  Even if you as a programmer knows that 0 is technically not even, you will still probably check if a number is even with (x % 2 == 0).

                                                                                  So they will miss out on seeing how a line of reasoning can be used to deduce things, they will listen to random ramblings, unconnected and emotive sentences, and be convinced by “logic” that is non-existent.

                                                                                  People say 0 is even because in the vast majority of real world applications for the test, it is appropriate to treat zero as even, not because they are afraid of your big brained maths.

                                                                                  1. 6

                                                                                    Even if you as a programmer knows that 0 is technically not even, you will still probably check if a number is even with (x % 2 == 0).

                                                                                    That is the actual test to see if a number is even. To be more precise, an integer n is even if and only if it can be written as n = 2*m, where m is also an integer. So 6 is even, because we can set m to 3: 6 = 2*3. 0, then, is even, because 0 = 2*0.

                                                                                    The definition of odd, on the other hand, is n = (2*m) + 1, with n and m both integers.
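The mathematical definition and the programmer's test agree; a quick Python sketch (illustrative, not from the thread):

```python
def is_even(n: int) -> bool:
    # n is even iff n = 2*m for some integer m,
    # which is exactly what n % 2 == 0 tests.
    return n % 2 == 0

assert is_even(0)       # 0 = 2*0
assert is_even(-4)      # -4 = 2*(-2)
assert not is_even(7)
```

Note that in Python the modulo of a negative number is still non-negative for a positive divisor, so the test works for negatives too.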

                                                                                    1. 4

                                                                                      Even if you as a programmer knows that 0 is technically not even

                                                                                      0 is “technically even”. Did you read the post? This isn’t about discussing whether or not it is, it’s about why some people don’t realise it is, and how maths could be better taught/communicated to rectify this.

                                                                                      1. 1

                                                                                        a is divisible by a nonzero b if there exists a whole number x such that a = x•b. Then we call b a divisor of a. Clearly every whole number is a divisor of 0, and since 2 is a divisor of 0, 0 is even.

                                                                                      1. 6

                                                                                        My five favorite packages:

                                                                                        1. Ivy. The list-and-narrow model has radically changed how I use Emacs, and I think this model should be emulated more widely. It’s a great way to deal with hundreds or even thousands of options in an easy manner.

                                                                                        (I tried selectrum and I like it and am philosophically more aligned with it, but Ivy is what I know and it requires fewer configs to get it to work the way I want it. Ivy also allows matching from external programs, for example ripgrep, which I use constantly.)

                                                                                        2. Magit. No surprise there: everyone who uses Emacs pretty much loves Magit; it’s just an awesome git porcelain.

                                                                                        3. dumb-jump. For programming modes where I don’t want to bother with configuring LSP, dumb-jump offers a very workable jump-to-def approach. It uses regular expressions to find lines that look like they may define the word under the cursor. Not always completely accurate, but it requires 0 configuration and works well enough for most cases.

                                                                                        4. move-dup and shift-text. I’m cheating here by including both, but they do similar jobs and allow me to manipulate lines of text similarly to how I got used to in Eclipse (Ctrl+Alt+Down/Up to copy the current line down/up, Alt+Down/Up to move a line down/up, etc.)

                                                                                        5. deadgrep. When counsel-rg gives too many results, deadgrep offers a very clean interface to view and navigate the results of a search. I especially love that by default it finds the root of the project (a Git project, for example), so it DWIMs pretty well.

                                                                                        1. 2

                                                                                          Great choices!

                                                                                          I’ve been using ag for ages, but I should try rg one of these days. I’m always using such search packages via Projectile, though, as they seem to be most useful within a project context.

                                                                                          Btw, how fast is the dumb-jump for you on bigger projects?

                                                                                          1. 2

                                                                                            I tried dumb-jump on GCC with ripgrep and it was unbearably slow. I reverted to ggtags as it is considerably faster.

                                                                                          2. 2

                                                                                            I keep trying magit but I can never get good enough at it to the point where it makes sense to actually use it. The documentation is sort of laughably impenetrable too (I say this as a long time emacs user, but not a power user). The git blame docs for instance are just hilariously unhelpful.

                                                                                            Anywhoo, would be interested in giving it yet another go and would appreciate any resources you’re aware of that might be helpful.

                                                                                            1. 1

                                                                                              Anywhoo, would be interested in giving it yet another go and would appreciate any resources you’re aware of that might be helpful.

                                                                                              I found this introductory presentation from Howard Abrams helpful. Mostly though, once I bound C-x g to magit-status and got into the habit of using it, the combination of my existing (basic) git knowledge and the ? key-bind allowed me to learn by doing. Hope that helps :-)

                                                                                              1. 1

                                                                                                Thanks, I’ll give it a watch! I pretty much do the same thing but it never seems to work out (for example, I still can’t find blame anywhere).

                                                                                                1. 1

                                                                                                  I’ve used magit for many years yet I rely on the ? shortcut for using it.

                                                                                            2. 1

                                                                                              I tried selectrum and I like it and am philosophically more aligned with it, but Ivy is what I know and it requires fewer configs to get it to work the way I want it.

                                                                                              Selectrum is new to me! I’ve been using Helm, but am considering switching, as Helm’s display doesn’t quite seem to be configurable enough – sometimes when switching buffers, it truncates the buffer name! Ridiculous!

                                                                                              Is this the philosophy you’re talking about?

                                                                                              The design philosophy of Selectrum is to be as simple as possible, because selecting an item from a list really doesn’t have to be that complicated, and you don’t have time to learn all the hottest tricks and keybindings for this. What this means is that Selectrum always prioritizes consistency, simplicity, and understandability over making optimal choices for workflow streamlining. The idea is that when things go wrong, you’ll find it easy to understand what happened and how to fix it.

                                                                                              How does that differ from Ivy, which says it “aims to be more efficient, smaller, simpler, and smoother to use yet highly customizable”? I haven’t tried Ivy either, so high-level info is great.

                                                                                              1. 2

                                                                                                The section on Ivy in the Selectrum README is where they go into the most detail. Essentially, Ivy was originally designed for Swiper and got abstracted out of it at some point along the way, but not very cleanly, resulting in a lot of hardcoded special cases in the code for different functions. Selectrum, on the other hand, only wants to plug in to completing-read and let a user choose from that list in a more convenient way, which allows the codebase to be something like ten times smaller.

                                                                                              2. 1

                                                                                                deadgrep was really nice, thanks for sharing!

                                                                                              1. 5

                                                                                                I’ve got to record my Emacsconf talk. I started yesterday, and am having massive problems with ffmpeg on Linux. Right now, it’s running my machine out of memory in 90 seconds. :(

                                                                                                1. 2

                                                                                                  Depending on your machine, real-time encoding might be the bottle-neck?

                                                                                                  1. 1

                                                                                                    That certainly would make sense! I was trying to record desktop video + webcam + microphone at the same time, to three output files. When I switched to webcam + microphone, it was fine. And desktop video by itself was fine.

                                                                                                    As a note for anyone else reading this, it ended up being far easier to take screenshots of the fullscreen presentation, and assemble them into a video using Openshot Video Editor. It actually makes it pretty easy to have the transitions be exactly when you want them to be.

                                                                                                    Unfortunately, I do have a part of it that’s more of a live demo, so I still need to screen-record that, but if I only do that without recording the webcam and microphone, it’s fine. Plus, I want to edit in the webcam video in the lower right.

                                                                                                    If I want to do it again, I’ll look into how I can simplify the real-time encoding to reduce the CPU usage.