Threads for cosarara

  1. 1

    Good article. I noticed there are several places where you say million when you meant billion, for IPS.

    1. 1

      I don’t think so? I use million IPS as a unit consistently, and sometimes that number is in the thousands.

      1. 2

        I think mperham refers to this sentence near the beginning.

        That means my laptop runs 856 million bytecode instructions per second (“IPS”) on average, and my desktop runs 1.1 million instructions per second.

        The table right below shows 1059 M IPS.

        1. 2

          Oh, they said there were several so I assumed there was a misunderstanding of the numbers in the tables. I corrected the case you pointed out, thanks

          1. 1

I’ve never seen this formatting for numbers: 5'882 million. It’s more standard in English to say 5.882 billion.

            1. 1

The ' is meant to be a thousands separator. I chose it because 1) , and . are the standard thousands separators but are ambiguous, and 2) ' is what C++ and a few other places use, and it doesn’t already mean “decimal point”. Maybe it wasn’t as unambiguous as I had hoped.

              1. 2

                Unfortunately there’s no good solution, but thanks for at least trying.

                1. 1

                  ' as a thousands separator is fine and unambiguous.

                  1. 1

In Spain, ’ is used as a decimal separator in handwriting. It also caused confusion in this very same thread. A space seems better in writing (5 882).

                    1. 2

                      ISO recommends a space as a thousands separator: https://en.wikipedia.org/wiki/Decimal_separator#Current_standards, but a comma or a period is also allowed.

                      Swedish typographical best practice is to use a thin space. It should of course also be non-breaking.

                      1. 1

Rats. I had not heard of that. Thanks. The only argument that I knew of concerning thousands separators was between English and Germanic speakers using “.” and “,” in opposite orders.

        1. 8

The problem does not seem to be that TCP_NODELAY is on, but that the packets being sent carry only 50 bytes of payload. If you send a large file, then I would expect that you invoke send() with page-sized buffers. This should give the TCP stack enough opportunity to fill the packets with a reasonable amount of payload. Or am I missing something?

          1. 6

            I disagree. It shouldn’t matter how big or small the application’s write() calls are. Any decent I/O stack should buffer these together to efficiently fill the pipe. I don’t know the details of git-lfs, but it sounds like it’s more complex than just shoveling the bytes of a file into the socket, so there’s probably a valid reason for it to issue small writes.

            1. 14

              I disagree. It shouldn’t matter how big or small the application’s write() calls are. Any decent I/O stack should buffer these together to efficiently fill the pipe

              It’s not clear to me that this is the right solution. I’d expect some buffering, but the more buffering that you do, the more latency and (often more importantly) jitter you introduce. If I send 4 MTU-sized things and then a one-byte thing, do I want the kernel to buffer that until it has a full MTU, or do I want it to send it immediately? Often, I want the latter because I’m sending the one-byte thing because I don’t have anything more to send.

              TCP_NODELAY is intended specifically for the latter case. The use case in the man page is X11 mouse events: you definitely want these sent as soon as possible because the latency will kill usability far more than a reduction in throughput. It definitely shouldn’t be the default.

              I think the root of the problem is that the Berkeley socket interface doesn’t have a good way of specifying intent on each write. I want to be able to say, per packet, whether I care most about latency, throughput, or jitter and then let the network stack do the right thing to optimise for this. If I send a latency-sensitive block through a stream behind a throughput-sensitive one then it should append it and flush the buffer. If I send only throughput-sensitive ones, it should do as much buffering as it likes. If I send jitter-sensitive ones, then it should ensure that the data in the buffer is flushed before it reaches a certain age. Ideally, it should also dynamically tune some of these heuristics based on RTT, for example ensuring that, for latency-sensitive packets, the latency imposed by buffering is not more than 0.5 RTT or similar.
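Something like this purely hypothetical Go sketch, say; to be clear, nothing of the sort exists in the Berkeley sockets API or in Go’s standard library, and all the names are made up:

```go
package sockintent

// Purely hypothetical: per-write intent instead of a connection-wide
// TCP_NODELAY flag. None of this exists in Berkeley sockets or in Go.

// Intent says what the stack should optimise this particular write for.
type Intent int

const (
	Throughput Intent = iota // buffer as much as the stack likes
	Latency                  // append, then flush the buffer immediately
	Jitter                   // flush before buffered data exceeds an age bound
)

// IntentWriter is the shape of interface wished for above: the caller
// states its intent per write and the stack picks the buffering policy.
type IntentWriter interface {
	WriteWithIntent(p []byte, intent Intent) (n int, err error)
}
```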

The problem with sending small writes goes beyond the buffering though. Each write requires the kernel to allocate a new mbuf (or mbuf fragment) to hold the data (Linux calls these skbufs). For small fragments, if you’re buffering, these need to be copied into a kernel heap allocation. In contrast, if you do writes that are a multiple of the page size then the kernel can just pin the page for the duration of the syscall (or AIO operation) and DMA directly from that. With KTLS, it can then copy-and-encrypt directly from the direct map into a fixed (per NIC send ring) buffer, unless the hardware has TLS offload, in which case it is a single DMA directly from userspace memory to the device, with no overhead (this, in combination with aio_sendfile and some NUMA-awareness, is how Netflix manages 80+ GiB/s of TLS traffic from a single host).

              1. 4

                If I send 4 MTU-sized things and then a one-byte thing, do I want the kernel to buffer that until it has a full MTU, or do I want it to send it immediately?

                Isn’t that what flushing the stream conveys? Why should the kernel need to guess?

                the Berkeley socket interface doesn’t have a good way of specifying intent on each write.

                Neither does the Go standard library — the ubiquitous Writer interface just has a vanilla Write() method, not Flush. IIRC (don’t have docs handy at the moment) there isn’t a standard interface adding Flush(), though there are structs like BufferedWriter that have it as part of their implementation.

That means Go can’t really do buffering at the lower levels like the net.TCPConn struct; instead everything higher level that writes to a stream has to decide whether to do its own buffering or not. The WebSocket library we use has a buffer for assembling a message, but when the message is complete it just calls Write to send it to the Conn …
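For illustration, here’s roughly what doing it by hand looks like (bufio.Writer and Flush are real; the helper and buffer size are just a sketch):

```go
package wsbuf

import (
	"bufio"
	"net"
)

// sendMessage coalesces many small writes in a userspace bufio.Writer
// and flushes once per message, so the kernel sees a few large writes
// instead of many tiny ones.
func sendMessage(conn net.Conn, frames [][]byte) error {
	w := bufio.NewWriterSize(conn, 64*1024) // size chosen arbitrarily here
	for _, f := range frames {
		if _, err := w.Write(f); err != nil {
			return err
		}
	}
	// Flush marks the latency-sensitive point: push out whatever is left.
	return w.Flush()
}
```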

                TL;DR I think Go’s stream interfaces may have erred on the side of simplicity, making them more difficult to fine-tune for optimal network performance. By contrast, look at Swift’s Combine library, which has really rich support for flow control and backpressure, at the expense of having lots more moving parts.

                1. 3

                  Isn’t that what flushing the stream conveys?

                  It can, but that’s an oddly asymmetric interface: you can flush the buffer, but you can’t tell it to keep buffering.

                  Neither does the Go standard library

                  This isn’t surprising, the Go library is based on Plan 9 which, in my opinion, is a system that takes the worst ideas in UNIX to their logical conclusion.

                  1. 2

                    a system that takes the worst ideas in UNIX to their logical conclusion

                    I’m curious, which ideas from unix are the worst, and why are they bad? Do you have an example?

                    1. 2

Everything is an unstructured stream of bytes. If you want to make everything an X, pick an X with useful properties, such as introspection and typed interfaces. Even something like COM is better. For example, UNIX ended up with an isatty function to test whether something is a terminal, but no isasocket or isapipe, and in fact neither of those are what I actually want; I want things like: is this persistent storage? In a COM system, for example (and COM is awful, I’m using it as an example because if COM is better than what you have then you’re in a really bad place), most uses of isatty would be replaced by a cast to an IColoredOutputStream or similar. If this succeeded, then the destination would provide functions for writing formatting commands interleaved with text-writing commands.

At the lowest level, most UNIX devices provide an interface based on ioctl, which is a completely opaque interface. You could make this better by at least providing some standard ioctls that let you query what ioctls a device supports and, in addition, what the argument types are. Unfortunately, because ioctl takes a 32-bit integer (64 on ILP64 platforms, but that basically means Alpha), you end up with different devices using the same ioctl commands for entirely different things.

                      Plan 9 made some of these things a bit better by replacing some devices with directories containing individual files for each command but that massively increases the amount of kernel state that you need to communicate with the device.

                      Much as I dislike DBUS, it would be a much cleaner way of interfacing with a lot of things. The ZFS interfaces started moving in a sensible direction, with a library for creating name-value lists with typed values and having each ioctl take a serialised one of these. These, at least, let you have errors for type confusion rather than the kernel just corrupting userspace memory in arbitrary ways. They also meant that 32-bit compat is easy because the nvlist serialised format is consistent across architectures.

                    2. 1

                      Sick burn, bro! (I agree with your elucidation below.)

                2. 3

There is a use case for NODELAY, just like there is a use case for DELAY. So any discussion about the default behavior appears to be pointless.

And I don’t see how applications performing a bulk transfer of data using “small” (a few bytes) writes is anything but bad design. Not writing large (e.g., page-sized) chunks of data into the file descriptor of the socket, especially when you know that multiple more of these chunks are to come, just kills performance on multiple levels.

If I understand the situation the blog post describes correctly, then git-lfs is sending a large (50 MiB?) file in 50-byte chunks. I suspect this is because git-lfs issues writes to the socket with 50 bytes of data from the file. And I am genuinely curious about potential valid reasons to issue small writes in such cases.

                  1. 2

                    The point of discussing the default is that Go’s implementers decided to use the opposite default value than what a Unix developer is used to. Both are valid, but one tends to assume the socket uses Nagle buffering unless told otherwise. Not so in Go, and this isn’t really documented … I’ve been using Go since 2012 and didn’t learn this until last month.

                    I’m curious about the 50-byte writes too. I’m guessing they make some sense at the high level (maybe the output of a block cipher?) and the programmer just assumed the stream they wrote them to had some user-space buffering, only it didn’t. So yeah, application error, but the NODELAY made the effects worse.
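For anyone bitten by the same thing, a minimal sketch of flipping the default back (net.TCPConn.SetNoDelay is real; the wrapper around it is just illustrative):

```go
package nagle

import "net"

// dialBuffered connects and re-enables Nagle's algorithm. Go sets
// TCP_NODELAY on new TCP connections by default, so small writes go
// out immediately; SetNoDelay(false) restores kernel coalescing,
// which is usually what you want for bulk transfers.
func dialBuffered(addr string) (*net.TCPConn, error) {
	conn, err := net.Dial("tcp", addr)
	if err != nil {
		return nil, err
	}
	tcp := conn.(*net.TCPConn) // "tcp" connections are *net.TCPConn
	if err := tcp.SetNoDelay(false); err != nil {
		tcp.Close()
		return nil, err
	}
	return tcp, nil
}
```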

                    1. 1

It seems to be forgotten in this discussion that Nagle’s algorithm was not created to fix those kinds of programming mistakes. So why are the platform defaults relevant to this discussion?

Otherwise, I also believe it is likely that the 50-byte writes are due to an unbuffered stream being used when it should have been a buffered one. Which makes the relevant question whether Go makes it easier to make such errors, e.g., because the default is unbuffered.

                  2. 1

                    Any decent I/O stack should buffer these together to efficiently fill the pipe

                    It seems like this should happen in the userspace part of the stack though? Going to the kernel to just memcpy bytes seems wasteful?

                    1. 1

                      The normal TCP stack in Linux resides (mostly) in the kernel space. Hence sending data will involve a copy from user space to kernel space. (There are various ways to optimize that, including moving the TCP/IP stack into user space and exposing parts of the network interface card to user space directly, but that is not relevant to this discussion).

                      1. 1

Yes. As that’s all I have to say here, I feel like either I don’t understand what you are trying to say, or vice versa? :)

                        To expand on this, the problem with in-kernel buffering is not that you need memcpy (you need to regardless, unless you do something very special about this), but that you need to repeatedly memcpy small bits. As in, it’s much cheaper to move 4k from userspace to kernel space in one go, rather than byte-at-a-time.

                        I guess the situation is similar to file io? You generally don’t want to write(2) to file directly, you want to wrap that into BufferedStreamWriter or whatnot.
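Exactly, and a quick sketch of that pattern in Go (bufio and os are real; the helper and the 4 KiB size are just for illustration):

```go
package filebuf

import (
	"bufio"
	"os"
)

// writeSmallRecords shows the same idea for files: a 4 KiB userspace
// buffer turns byte-at-a-time writes into one write(2) per 4096 bytes,
// instead of one syscall (and one tiny kernel copy) per byte.
func writeSmallRecords(path string, data []byte) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	w := bufio.NewWriterSize(f, 4096)
	for _, b := range data {
		if err := w.WriteByte(b); err != nil {
			return err
		}
	}
	return w.Flush()
}
```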

                        1. 3

                          Yes. As that’s all I have to say here, I feel like either I don’t understand what you are trying to say, or vice verse? :)

I am sorry, I think you replied to my comment when you actually quoted snej’s comment. At least that is the visual impression I get. And this seemed to confuse me.

Yes, it looks like we are on the same page. Always try to perform large writes for optimal performance, e.g., fewer syscalls, and it gives the TCP/IP stack more room for optimization.

                          1. 3

Nice, always great to notice a trivial miscommunication instead of someoneswrongontheinterneting! :)

                1. 11

                  This is a weird way to measure a language. Look at GitHub and how productive a single developer can be with Go. Some programs are editors, computer games or command line utilities. Not every program needs the type of defensive programming the article promotes.

                  Inherent complexity does not go away if you close your eyes.

This is true. But not all projects have this particular type of inherent complexity.

Successful projects, quick compile times, an ecosystem that lets you add a dependency that works, not having broken code just because 3 years have passed; these are also nice things.

One should not close one’s eyes to the quick development cycles and low cognitive overhead experienced Go developers can achieve, either.

                  1. 12

                    This is a weird way to measure a language. Look at GitHub and how productive a single developer can be with Go.

                    You could apply this argument to literally any language.

                    1. 8

                      No, you couldn’t, and there is no comparison. Look at active repos in Common Lisp, Scheme, Forth, Smalltalk, Ada, you know, the languages everybody talks about nicely. Then look at Go. If there’s 2 types of X, the ones everybody complains about, and the ones nobody uses, Go is certainly in the first group.

                      Go is very popular, and getting more popular, and there is no denying it: https://madnight.github.io/githut/#/pull_requests/2022/3. Tons of people are learning go in their free time and using it to get stuff done, because it is easy to learn and easy to get stuff done with.

                      1. 11

Now you’re arguing by popularity, which is a fallacious argument and a very different thing than “you can get stuff done with it”. Clearly you can get stuff done with the other languages too.

                        1. 4

GitHub and the popularity of the language on the platform prove that many single developers can and are getting stuff done with it. That is proof you don’t get with other, unpopular languages. You might claim that they are just as good, but you are not getting proof of it from GitHub.

                          1. 11

                            Popularity is not a good measure of anything. Tobacco smoking is popular. Eating Tide-Pods used to be popular.

                            1. 7

Tobacco smoking and eating Tide Pods do not produce anything (well, except damage to the person doing it). I think cosarara is implying that projects on GitHub demonstrate that people get stuff done with Go. To put it differently, TikTok and Insta measure popularity. GitHub showcases code that actually does something (well, most of the time). And the fact that there are many repositories on GitHub implies that many people do get stuff done with Go. Now, there might be an issue that users of other languages do not advertise their work on GitHub…

                              1. 3

                                My point is that the people that get stuff done with Go could be just as well getting stuff done with other languages, but most of them haven’t really tried many other languages besides the ones that are obviously worse for productivity, such as C.

                                1. 4

                                  I think this is wrong. I think developers that have tried many different programming languages are the ones that most appreciate the things that Go gets right.

                                  1. 4

                                    but most of them haven’t really tried many other languages besides the ones that are obviously worse for productivity, such as C.

                                    This belief reveals a great deal about you and I’m not sure reflects anything true about Go programmers.

                                2. 4

                                  Wow, look at the replies you’re getting. These people are coming out of the woodwork to defend their questionable career choices. One could base a whole career on Go criticism.

                                  1. 5

This is ridiculous. You are comparing a decade of individuals spending their rarest resource, their free time, to a claim that a 4chan meme was real and to chemical addiction.

                                    Get over it, some people use go because it’s the best choice for them.

This is not your school’s debate club.

                                    1. 1

                                      How do the people know whether it’s the best choice for them?

                        1. 1

Thus, if I use @grocery-list with a for loop, the body of the loop will be executed five times. But if I use $grocery-list, the loop will get executed just once (with the list as its argument).

                          Why would the second case ever be what the programmer wants? If they wanted a single body execution, they would not be writing the for loop in the first place, so isn’t it just a footgun?

                          1. 4

In other words nothing was stolen but a mirror/proxy was set up.

In a way that’s the same thing one pays Cloudflare for and what Google does with cached URLs, or what archive.org etc. do. I think some of the measures mentioned would therefore prevent that too.

And yes, reporting to their DNS and/or hosting provider is the right way to go. Of course, depending on persistence, it’s whack-a-mole. In the end one could download the whole website (just like your browser does) and upload it somewhere. On the topic of JS: an “attacker” could run a headless browser. That’s something some front-end frameworks do/did for websites to be indexed by search engines.

So whatever you put on the internet, people can download and re-upload or proxy in some form.

                            1. 4

                              In other words nothing was stolen but a mirror/proxy was set up.

                              Insert obligatory “You wouldn’t steal a car” here

                              https://www.youtube.com/watch?v=HmZm8vNHBSU

                              The words “theft” and “steal” are inaccurate when duplication is essentially free and nobody lost their copy.

                              1. 5

                                The correct word is probably impersonation. They are impersonating the author. In certain cases impersonation is indeed a crime.

                                1. 3

I think the better word is plagiarism. They are not pretending to be the author (good.com); they are trying to be another, better-ranked, ad-filled website (proxy.com) that happens to have the same content (it’s just plagiarized).

                            1. 7

                              Ever wanted to write x max= y while searching for some maximum value in some complicated loop? You can do that here. You can do it with literally any function.

                              Oh my gosh I love this.

                              We don’t have that problem because we don’t distinguish sets and dictionaries.

                              I don’t love this as much.

                              1. 4

                                Ever wanted to write x max= y while searching for some maximum value in some complicated loop? You can do that here. You can do it with literally any function

                                I do not understand what x max= y is supposed to do. Care to explain, maybe with an example?

                                1. 8

                                  x=max(x,y)

                                  1. 6

                                    Thanks for this! I now understand that max= is being understood the same way += is. Neat.
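Right, and a quick sketch of the desugaring in Go (the article’s language isn’t Go; this just shows the equivalent loop, using Go 1.21’s built-in max):

```go
package scan

// maxOf shows what repeatedly applying "x max= y" inside a loop works
// out to: the generalised augmented assignment desugars to x = max(x, y).
func maxOf(xs []int) int {
	best := xs[0]
	for _, y := range xs[1:] {
		best = max(best, y) // "best max= y"
	}
	return best
}
```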

                                2. 2

                                  I don’t love this as much.

                                  Why not? That’s how at least a few languages implement sets: hashtables/dictionaries with dummy values. Python, in particular, comes to mind.

                                  1. 3

                                    This may have been true in the past, but I don’t think that python does this currently. Python’s dict implementation now guarantees order, but the set definitely does not.

                                    1. 3

                                      While they don’t share the same implementation (anymore?), Python sets are absolutely still implemented using a hashtable: https://github.com/python/cpython/blob/main/Objects/setobject.c

                                    2. 1

                                      What’s the union/intersection/difference of two dictionaries?

                                      1. 2

                                        What’s the union/intersection/difference of two dictionaries?

                                        A dictionary containing union/intersection/difference of the keys of those two dictionaries?
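Exactly. A sketch in Go of what that looks like when a set is just a map with empty values (the names here are illustrative, not from any particular library):

```go
package setops

// Set is a dictionary whose keys are the members and whose values
// carry no information, i.e. the "dummy value" representation.
type Set map[string]struct{}

// Union returns a set with every key present in a or b.
func Union(a, b Set) Set {
	out := Set{}
	for k := range a {
		out[k] = struct{}{}
	}
	for k := range b {
		out[k] = struct{}{}
	}
	return out
}

// Intersection returns a set with the keys present in both a and b.
func Intersection(a, b Set) Set {
	out := Set{}
	for k := range a {
		if _, ok := b[k]; ok {
			out[k] = struct{}{}
		}
	}
	return out
}

// Difference returns a set with the keys present in a but not in b.
func Difference(a, b Set) Set {
	out := Set{}
	for k := range a {
		if _, ok := b[k]; !ok {
			out[k] = struct{}{}
		}
	}
	return out
}
```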

                                  1. 22

                                    I never really like “we need something as elegant as Maxwell’s equations” argument because as soon as you substitute anything into the equations you get a gigantic mess that’s so big everybody uses alternate approaches to solve stuff. PDEs are no joke.

                                    More relevant to your actual point, IMO the biggest barrier for little languages is tooling adoption. LSP/Treesitter plugins are 1) a lot of work to make, and 2) enormous productivity boosters. That pushes people to stay with big languages with extant tooling.

                                    EDIT: I really wouldn’t trust the STEPS program’s claims that they got feature-parity of a 44,000 LoC program with only 300 lines of Nile, not without carefully evaluating both programs. Alan Kay really likes to exaggerate.

                                    1. 3

                                      Yeah, tooling-wise you’re really swimming upstream as users won’t even have syntax highlighting from day one.

I guess the upside is that certain tools become much easier to create for little languages (e.g. Bret Victor’s graphical Nile debugger). For example, writing a simplified regular expression engine could realistically be an undergraduate homework project, but it’s still work that needs to be done, and it’s the kind of work that won’t pay anyone’s bills.

                                      1. 3

Regarding Nile vs. Cairo: yes, if you read the fine print the graphics rendering pipeline actually works out to be much more, as that doesn’t include some of the systems underlying Nile, like Gezira. Looking at the STEPS final report you can see that the total amount of graphics code ends up being somewhere closer to 2000-3000 lines. As I mentioned briefly in the article, part of that size reduction was probably due to replacing hand-rolled optimizations with JITBlt.

                                        However, if you think about it, JITBlt isn’t really connected to the whole “little language” concept at all; you could’ve probably achieved the same thing by adding a JIT compiler as a library in Cairo, or by using some really gnarly C++ template tricks, and end up with a significant size reduction of the Cairo code base instead.

                                        (There might be some other caveats as well that I’m not aware of—for example, there could be a lot of backwards compatibility code in Cairo that they could just skip implementing as Nile is a green-field project)

                                        1. 1

                                          EDIT: I really wouldn’t trust the STEPS program’s claims that they got feature-parity of a 44,000 LoC program with only 300 lines of Nile, not without carefully evaluating both programs. Alan Kay really likes to exaggerate.

The claim about Cairo really stood out to me. We run Cairo, nobody runs Nile, after all. If there is this amazing library that does all the same things in just 300 lines of code, then it would have been a no-brainer to adopt it when it came out… unless there are some details that we would rather not talk about.

                                          1. 1

                                            I see 3 such potential “details”:

• Saying that Nile’s performance “competes” with Cairo may mean it is comparable, within the same order of magnitude. Up to twice as slow. And even if it was only 5% slower people would probably refuse to switch.
                                            • The 300 lines don’t count the rest of the STEPS compilation toolchain, which I believe takes about 2K lines of code. They’re just not counted here because those 2K lines are used for the whole system, not just Nile.
                                            • As far as I know Nile doesn’t have a C API. This one is probably a deal breaker.
                                          2. 1

                                            The link is paywalled for me, is there an alternate source?

                                          1. 9

While the circumstances are tragic and I wish all the best for all involved, this is another example of why crucial services cannot be provided for free.

If they are, it is either based on self-exploitation or hidden interests – both of which are undesirable.

So please either pay for it (to an association or the like) or DIY – you are the one provider for your own cause that won’t go out of business unexpectedly.

                                            1. 13

I think there’s a middle ground between a lone person working for nothing and risking burnout and full corporate stewardship. A healthy community can “work for free” and share the duties and responsibilities. Civil society is full of these organizations. It does require formal stuff like a charter, maybe elections, but the main thing it provides is a way for the community to survive even if some of its leaders decide to step down.

                                              In fact this is mentioned in the post:

                                              Choosing and vetting such a new admin would be a lot of work, not to mention the messy process of transferring each piece of infrastructure to them.

                                              If I had started this process six months ago, this might be possible. But I missed that window.

                                              1. 4

                                                I agree. But it goes beyond just that. The fact that the software is free doesn’t mean the service needs to be. Or that if you pay for service, that you are giving over responsibility.

With these federated things it’s even easier: you pay someone like masto.host $10 a month and you’re fully served, and can easily transfer the ownership and responsibility even in urgent cases like this.

It’s no different than your website. You can host your own, you can pay WordPress to host it for you, and those are probably things you want to keep longer term, unlike tweets and Facebook statuses.

                                                But it’s also not for everybody. I’m just happy that I have the option.

                                                1. 2

I do consider the time put into organisations for elections or administration to also be a cost, even if volunteered.

I don’t call something free just because it costs no money but time.

                                                2. 2

                                                  No, this is much more the case of a single person starting a project with no fallback admin. If this was a team there could’ve been a plan in place to find a replacement if just one of N people vanishes.

                                                  That’s basically the reason I personally would not start such a project without a (small) team of people I trust, or at least a single other person to take over for a while. If this sounded like blaming the admin of that instance, it wasn’t intended - just a lesson I have learned. Also teams have different problems, but usually not this one.

                                                  1. 1

                                                    team or not doesn’t mean a thing – if you want things for free you have no claim to make.

                                                    1. 1

                                                      I didn’t make a claim. Just from the admin’s post I’d gather that providing a service for a fee could (I’m not saying would) have changed nothing. As long as there are no SLAs given they could still say “that’s it, server is shutting down”. A hypothetical different admin could even add “thanks for making me rich, suckers” - and it would not change a thing that the users would not have the instance they’re using anymore after date X.

My point was… something happened that made them stop the server. Material costs aside, this could have been fixed by having a team in place: “Choosing and vetting such a new admin would be a lot of work.”

                                                  2. 2

                                                    Or, simply don’t invest in mastodon instances that don’t have proper administration behind them… One guy is a recipe for disaster

                                                    1. 3

                                                      What instances do have proper administration?

                                                      1. 1

                                                        No idea, not a mastodon user, but, given the reasons in the article, it seems to me the operators of these instances should be transparent to their users about their operations structure

                                                        I’m assuming this guy was pretty transparent, and yet the users joined anyway… Maybe for smaller instances it’s fine to be run by one guy, but once it grows to a certain size, the community should decide on important matters like this

                                                        1. 4

Sure, if that community contributes in any way. Partially you’re also correct; even Ash (the instance administrator in question) says that he should have set this up earlier.

But a switch of trust to a committee might not be everybody’s idea of good practice. Personally I was okay with trusting Ash, but if he did transfer the admin function, I’m sure a lot of people could see that as a problem.

But none of it matters – you just migrate your account elsewhere and you’re done.

                                                        2. 1

                                                          e.g. https://digitalcourage.social which charges you a Euro per month.

                                                    1. 8

                                                      Is #pragma once not enough? There are no numbers to show how much faster it is, to decide if it’s worth the price.

                                                      1. 12

Every time I come across one of these pro-TDD articles, I always remember Ron Jeffries’ attempt to implement sudoku: https://ronjeffries.com/xprog/articles/oksudoku/ https://ronjeffries.com/xprog/articles/sudoku2/ https://ronjeffries.com/xprog/articles/sudokumusings/ https://ronjeffries.com/xprog/articles/sudoku4/ https://ronjeffries.com/xprog/articles/sudoku5/ TL;DR: one of the main pushers of TDD utterly fails at TDD in a series of articles that come across as satire of TDD.

                                                        1. 12

                                                          My main takeaway from that has always been “if I work on something publicly and then lose interest the Internet will hound me about it until the end of time.”

                                                          I flit between ideas all the time. One of these days I’ll flit away from an FM project and people will forever use it as proof that FM is stupid.

                                                          1. 2

                                                            What’s FM?

                                                            1. 1
                                                            2. 1

                                                              The problem isn’t that he didn’t complete it, the problem is that he set it up as an example of how TDD produces superior results to non-TDD. Even then simply losing interest wouldn’t have been a big deal.

The problem is that what he documented was ever-increasing complexity of implementation, and it didn’t even achieve basic functionality.

                                                              1. 1

                                                                He didn’t set it up that way at all:

                                                                A number of people on the tdd list have reported having a lot of fun TDD programming the game of Sudoku. I’ve not played the game, though of course I’ve tripped over the piles of books in the bookstores and at the airport. But discussion of the thing makes it sound like it might be fun to TDD on it, as people are saying. Let’s get started.

                                                                […]

                                                                I’m not saying this is good, or what you should do, or anything of the kind. I’m displaying what I do, faced with this problem, and how I explore what the computer and I can do in moving toward a Suduko solution.

                                                                1. 2

                                                                  I’m not saying this is good, or what you should do, or anything of the kind.

                                                                  The contextually reasonable interpretation of this statement was not him talking about how he was going about the development methodology, but about his actual approach to solving the sudoku, to which I say fair enough - often a first attempt at doing something turns out to be the wrong way (especially given his stated lack of familiarity with the problem space).

                                                                  I’m displaying what I do, faced with this problem, and how I explore what the computer and I can do in moving toward a Suduko solution.

                                                                  This is where we get to why people find this series so absurd. This is an (the?) expert on TDD showing how TDD influences development of software, and the result of his approach is an absurd level of complexity in the solution, regardless of whether the approach was actually “correct”.

                                                              2. 1

                                                                But Ron Jeffries is an original signer of the Agile Manifesto, and a consultant selling Agile practices, including TDD. So are you a recognized expert in FM and are selling it to organizations?

                                                                1. 6

                                                                  Yes, he is.

                                                                  1. 4

                                                                    He is and he does.

                                                                2. 5

                                                                  It really was an amazing series, and I agree I initially thought it was satire.

I always feel a lot of pro-TDD folk think the alternative to TDD is no tests. Of course I have test cases, but many more test cases come out of the implementation, not the design. The actual act of implementing a feature in a well-thought-out way makes you realize what things are more complex and require more tests.

The idea that any set of tests made ahead of time captures the actual nuance of real software development is farcical, and I’ve seen it again and again in pro-TDD posts.

                                                                  Then there’s also the super problematic appropriation of another culture for no real benefit.

                                                                  1. 2

                                                                    Then there’s also the super problematic appropriation of another culture for no real benefit.

                                                                    Can you link to a more verbose explanation of what you mean by this? Of all the problems I’ve seen cited with TDD, that’s the first time I’ve heard it called cultural appropriation.

                                                                    1. 7

As @telemachus said, it’s the appropriation of terms like katas, dojo, etc. by a fairly universally western group of developers. Some of these terms and concepts have non-trivial meaning in the originating cultures, and co-opting them for some fad development methodology is kind of gross, and totally unnecessary.

                                                                      Western culture has plenty of similar concepts that could be used instead, so adopting terms from a culture that you fundamentally lack any real involvement with comes across as considering the importance of terms from another culture as being less significant than western culture.

                                                                      There are places where we’ll refer to things like “the church of X” where X is some development ideology so western developers do understand that we can use our own cultural concepts, but the formal systems seem to always adopt features of East Asian culture. Part of this is the western fetishization of isolated chunks of East Asian culture, but the formalization of such behaviour is super irksome.

                                                                      1. 3

                                                                        Thanks for expanding a little. I had kind of zeroed in on the Jeffries series that the thread had drifted to, and “kata” and “dojo” were no longer front of mind when I read your appropriation comment.

                                                                        I have long objected to the two terms in the original article on the grounds that I dislike thinking about programming as some kind of martial arts discipline. This angle hadn’t really occurred to me.

                                                                        (And thanks @telemachus for adding the other examples, too.)

                                                                      2. 4

                                                                        I’m guessing that he means the use of “kata.” I can’t speak for the OP, but I’m often uncomfortable with the way that (some) programmers use East Asian culture. (E.g., Koans, faux Buddhist sayings, conversations between a “Master” and a “student” leading to wisdom, etc.)

                                                                        Specific examples:

                                                                        These sorts of things are easy to do without the appropriation, and they can still be fun and engaging.

                                                                        E.g.,

                                                                      1. 3

                                                                        There will always be those who invent a term for a particular hammer then tell everyone they should use it for all DIY jobs.

                                                                        I have plenty of experience with TDD, both good and bad, and in that experience I see a clear line dividing when it likely works and when it likely won’t. Beware anyone preaching TDD for all, or even many, situations, but also beware writing it off as never useful.

                                                                        1. 1

                                                                          This is my experience too. I switch back and forth among tests-first and tests-during and tests-after depending on context.

                                                                          It was useful to force myself to stick with pure, no-compromises TDD for the entirety of a nontrivial project. Making myself figure out how to solve problems with a tests-first development approach when it wasn’t obvious how to do so was the only way I was able to begin to distinguish between, “TDD is a bad fit for this kind of task,” and, “I am too inexperienced to see how to do this with TDD.” The latter is probably still true in some of the cases I shy away from TDD, but I like to think having tried the discipline out for real gives me better perspective.

                                                                          1. 4

I agree, a significant part of the problem with TDD is that it obsesses over tests to an extent that it gets in the way of actual development, and creates tests for single subcomponents to an extent that is far from necessary.

I recall in the WebKit project there was a period where a bunch of TDD folk started adding huge amounts of basic functionality tests for core data types that were already extensively tested through the tests of real functionality. These tests did not add any coverage, but in some cases they even added complexity to the implementations in order to be able to write tests of internal functionality. I recall that in at least one case they actually introduced regressions as part of introducing complexity to support testing (the regressions being caught by other tests that they didn’t run, because they had already written tests for the basic data type functionality, so running the full engine tests “wasn’t necessary”).

I think this applies to all the fad-driven-development methodologies: they take a reasonable piece of software development practice, take it to an extreme, and declare that that extreme is the only way to reliably develop software.

                                                                          2. 1

It’s something I use quite often. But not all the time, and not none of the time. I also sometimes write tests after an initial implementation, and then use tests-first for changes. But never stupid things like iterating between code and tests every 2 lines of change and returning hardcoded results. Hardcoded values are OK if you actually need to build another part first and are just fulfilling record requirements in a typed language; then it’s fine. But I’m criticising the silly tick-tocking method you see in videos.

                                                                            1. 1

                                                                              I think it’s acceptable to return hard coded data during development. Normally you are returning data in a specific shape, and will dynamically generate that shape later.

                                                                              1. 2

                                                                                I agree, but IMO you do it because it allows you to work on something else that is expecting data in that shape, not just so you can turn a useless test green.

                                                                        1. 2

                                                                          Maybe some bigram and trigram analysis would have helped with the frequency analysis, but the lack of spaces complicates things, so maybe not.
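For the curious, the counting step itself is tiny; a sketch in Go (just illustrative, not from the article):

```go
package ngrams

// bigramCounts tallies overlapping two-letter sequences in a ciphertext,
// which is the first step of the bigram frequency analysis mentioned above.
// Trigrams work the same way with a window of three.
func bigramCounts(text string) map[string]int {
	counts := map[string]int{}
	runes := []rune(text)
	for i := 0; i+1 < len(runes); i++ {
		counts[string(runes[i:i+2])]++
	}
	return counts
}
```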

                                                                          1. 0

                                                                            Wait, Windows has had automatic root updates since XP? And nobody else does? Why is everything worse than Windows XP?!

                                                                            1. 13

                                                                              I’m not sure what you mean by automatic updates, but ~every Linux distro has a ca-certificates package which gets updated all the time. (And even when it’s EOL, you can use it from newer versions since it doesn’t have dependencies)

                                                                              1. 4

                                                                                Except for Android, I guess, which doesn’t get them updated until you get a whole new image from the manufacturer.

                                                                                1. 2

But as stated in the official LE post, you can (and probably should) add the root cert to your app-local trust store (and probably even use cert pinning). That won’t help with browsers, though I’d guess you just install Firefox for Android (or something like that) and they ship their own complete TLS stack (and certs), because you can’t run any relevant TLS on Android 4.4, which you want to support for many apps.

                                                                                  1. 2

                                                                                    FYI: you can still order stuff from Amazon and use Google Search on Android 4.4. Mozilla’s own website works, too, even though they suggest yours shouldn’t.

                                                                                    If your own website, and your employer’s website, don’t work on Android 4.4 because TLSv1.0 iNsEcUrE — it sounds like you’ve been sold the snake oil!

                                                                              2. 0

                                                                                Windows has automatic root updates?

                                                                              1. 5

Finally! I mean, WhatsApp continues being a bad choice for privacy due to the enormous metadata collection and use for targeted advertisement; but this is going to make WhatsApp much better for existing users who can’t quit it. Telegram will shit their pants when this gets introduced, since their whole «business model» consists of blaming WhatsApp for not encrypting backups, and telling you you should trust Telegram to store everything instead of Google and Apple.

                                                                                1. 5

Telegram’s business model consists of having a better user experience than WhatsApp: good web and desktop clients, being able to join groups under a handle instead of everybody knowing your phone number, big pretty animated stickers, decent UX without sharing your phone’s contacts with the app, and support for bots on which moderation tools are built. The kind of people who care about security know Telegram isn’t great in that regard, but that’s not at all what users pick Telegram for.

                                                                                1. 5

                                                                                  Very unfortunate name, Català being the name of an actual spoken language. I would have hoped the authors didn’t miss this fact since they seem to be French and Catalan is also spoken in southern France. I wonder how they would feel reading a paper about the “Français programming language”.

                                                                                  1. 18

                                                                                    Being Polish I would have hoped the manufacturers of shoe polish and telephone poles didn’t miss this fact as well.

This language is named after the surname of a person involved.

                                                                                    1. 3

The verb to polish and the nouns that accompany it did not get invented in the 21st century, and do not refer to people. Yes, words with multiple meanings exist; this doesn’t mean we shouldn’t try to avoid confusion in newly named things. The Catala programming language and the Català (in English, Catalan) language are both languages.

                                                                                      1. 4

Are you familiar with the time Google thought for several months about changing the name of their programming language Go so as not to clobber the preexisting usage by the language Go!, but decided they didn’t care? I agree we should avoid name conflicts when possible, and I think this is a far more egregious example because they are both programming languages and there’s a clear abusive power dynamic.

                                                                                        1. 2

                                                                                          Yes, the famous issue #9. But I don’t expect Google to not be evil by now.

                                                                                      2. 6

                                                                                        I now want to make a programming language where the fundamental collection is the span (ie, a pointer and length). I think I’ll call it Span-ish.

                                                                                        1. 7

                                                                                          I’ll make a successor to ALGOL called ESPAGNOL.

                                                                                        2. 8

                                                                                          The French will not rest until the very last vestige of Occitan is extirpated.

                                                                                          1. 2

                                                                                            It’s the same for many other languages, right? The most popular one being https://en.m.wikipedia.org/wiki/Java

                                                                                            1. 1

                                                                                              As far as I know, Java is an island, not a language.

                                                                                              1. 2

                                                                                                The verb to polish and the nouns that accompany it did not get invented in the 21st century, and do not refer to people

                                                                                                I think you’re not being consistent with your complaint.

                                                                                                1. 2

                                                                                                  Can Shoe Polish create confusion with Polish the demonym? No.

                                                                                                  Can the Java language create confusion with Java the island? No.

                                                                                                  Can the Catala language create confusion with the Català language? Yes.

                                                                                                  Is my complaint clear enough now?

                                                                                                  1. 2

Is the French term for “computer language” “langue d’ordinateur” or similar? In other words, would the search term “Catala language” in French be the same for Catala/Català?

                                                                                                    Because in English you’d search for “Catala language” and “Catalan language”.

                                                                                          1. 7

                                                                                            Making any changes to the Libc triggers a mass rebuild. Modifying system headers causes massive rebuilds.

                                                                                            This point should be repeated whenever people discuss the pros and cons of statically linked executables.

                                                                                            1. 8

Although this was not because of statically linked executables but because of the way Nix works. For instance, if you change a license (or any comment line) in a header file, you normally don’t need to rebuild anything. But with Nix you do, because the input hashes are now different.

                                                                                            1. 6

I like Lisp, but macros should be a last-resort thing. Are they really needed in those cases, I wonder.

                                                                                              1. 18

                                                                                                I disagree. Macros, if anything, are easier to reason about than functions, because in the vast majority of cases their expansions are deterministic, and in every situation they can be expanded and inspected at compile-time, before any code has run. The vast majority of bugs that I’ve made have been in normal application logic, not my macros - it’s much more difficult to reason about things whose interesting behavior is at run-time than at compile-time.

                                                                                                Moreover, most macros are limited to simple tree structure processing, which is far more constrained than all of the things you can get up to in your application code.

                                                                                                Can you make difficult-to-understand code with macros? Absolutely. However, the vast majority of Common Lisp code that I see is written by programmers disciplined enough to not do that - when you write good macros, they make code more readable.
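
To make the compile-time inspection point concrete, here is a minimal Common Lisp sketch (my own illustration, not from this thread) of a small macro whose expansion can be examined before any code runs:

(defmacro with-timing (&body body)
  ;; Expands into code that times BODY; the expansion itself is deterministic.
  (let ((start (gensym "START")))
    `(let ((,start (get-internal-real-time)))
       (prog1 (progn ,@body)
         (format t "~&took ~a time units~%"
                 (- (get-internal-real-time) ,start))))))

;; Inspect the expansion at compile time, before running anything:
;; (macroexpand-1 '(with-timing (some-expensive-call)))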

                                                                                                1. 3

                                                                                                  “Macros, if anything, are easier to reason about than functions, because in the vast majority of cases their expansions are deterministic, and in every situation they can be expanded and inspected at compile-time, before any code has run. The vast majority of bugs that I’ve made have been in normal application logic”

                                                                                                  What you’ve just argued for are deterministic, simple functions whose behavior is understandable at compile time. They have the benefits you describe. Such code is common in real-time and safety/security-critical coding. An extra benefit is that static analysis, automated testing, and so on can easily flush bugs out in it. Tools that help optimize performance might also benefit from such code just due to easier analysis.

From there, there’s macros. The drawback of macros is that they might not be understood as instantly as a programmer will understand common language constructs. If done right (esp. names/docs), then this won’t be a problem. The next problem, which the author already notes, is that tooling breaks down on them. Although I didn’t prove it out, I hypothesized this process to make them reliable:

1. Write the code that the macros would output first, on a few variations of inputs. Simple, deterministic functions operating on data. Make sure it has pre/post conditions and invariants. Make sure these pass the above QA methods.

                                                                                                  2. Write the same code operating on code (or trees or whatever) in an environment that allows similar compile-time QA. Port pre/post conditions and invariants to code form. Make sure that passes QA.

3. Make the final macro a 1-to-1 mapping of that to the target language. This step can be eliminated where the target language already has excellent QA tooling and macro support. Idk if any do, though.

                                                                                                  4. Optionally, if the environment supports it, use an optimizing compiler on the macros integrated with the development environment so the code transformations run super-fast during development iterations. This was speculation on my part. I don’t know if any environment implements something like this. This could also be a preprocessing step.

                                                                                                  The resulting macros using 1-3 should be more reliable than most functions people would’ve used in their place.
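
A minimal Common Lisp sketch of steps 1-3, using hypothetical names of my own choosing rather than anything from the comment above:

;; Step 1: write the code the macro would emit as a plain function and test it.
(defun clamp (x lo hi)
  (assert (<= lo hi))                ; precondition
  (max lo (min hi x)))

;; Steps 2-3: only then wrap the already-exercised shape in a macro, so the
;; expansion is a 1-to-1 mapping onto code that has passed the same QA.
(defmacro define-clamped-accessor (name lo hi)
  `(defun ,name (x) (clamp x ,lo ,hi)))

;; (define-clamped-accessor percent 0 100)  then  (percent 150) => 100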

                                                                                                  1. 2

                                                                                                    What you’ve just argued for are deterministic, simple functions whose behavior is understandable at compile time.

                                                                                                    In a very local sense, I agree with you - a simple function is easier to understand than a complex function.

                                                                                                    However, that’s not a very interesting property.

                                                                                                    A more interesting question/property is “Is a large, complex system made out of small, simpler functions easier to manipulate than one made from larger, more complex functions?”

                                                                                                    My experience has been that, when I create lots of small, simple functions, the overall accidental complexity of the system increases. Ignoring that accidental complexity for the time being, all problems have some essential complexity to them. If you make smaller, simpler functions, you end up having to make more of them to implement your design in all of its essential complexity - which, in my experience, ends up adding far more accidental complexity due to indirection and abstraction than a smaller number of larger functions.

                                                                                                    That aside, I think that your process for making macros more reliable is interesting - is it meant to make them more reliable for humans or to integrate tools with them better?

                                                                                                    1. 1

“A more interesting question/property is ‘Is a large, complex system made out of small, simpler functions easier to manipulate than one made from larger, more complex functions?’”

                                                                                                      I think the question might be what is simple and what is complex? Another is simple for humans or machines? I liked the kinds of abstractions and generative techniques that let a human understand something that produced what was easy for a machine to work with. In general, I think the two often contradict.

                                                                                                      That leads to your next point where increasing the number of simple functions actually made it more complex for you. That happened in formally-verified systems, too, where simplifications for proof assistants made it ugly for humans. I guess it should be as simple as it can be without causing extra problems. I have no precise measurement of that. Plus, more R&D invested in generative techniques that connect high-level, human-readable representations to machine-analyzable ones. Quick examples to make it clear might be Python vs C’s looping, parallel for in non-parallel language, or per-module choices for memory management (eg GC’s).

                                                                                                      “is it meant to make them more reliable for humans or to integrate tools with them better?”

Just reliable in general: they do precisely what they’re specified to do. From there, humans or tools could use them. Humans will use them as they did before, except with precise behavioral information on them at the interface. Looking at contracts, tools already exist to generate tests or proof conditions from them.

                                                                                                      Another benefit might be integration with machine learning to spot refactoring opportunities, esp if it’s simple swaps. For example, there’s a library function that does something, a macro that generates an optimized-for-machine version (eg parallelism), and the tool swaps them out based on both function signature and info in specification.

                                                                                                2. 7

                                                                                                  Want to trade longer runtimes for longer compile times? There’s a tool for that. Need to execute a bit of code in the caller’s context, without forcing boilerplate on the developer? There’s a tool for that. Macros are a tool, not a last resort. I’m sure Grammarly’s code is no more of a monstrosity than you’d see at the equivalent Java shop, if the equivalent Java shop existed.
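
As one concrete (and purely illustrative) Common Lisp sketch of trading run time for compile time: a macro can do its work during expansion, so the call site compiles down to a literal constant.

(defmacro compile-time-factorial (n)
  ;; N must be a literal non-negative integer; the product is computed at
  ;; macroexpansion time, so no multiplication happens at run time.
  (check-type n (integer 0))
  (let ((acc 1))
    (dotimes (i n acc)
      (setf acc (* acc (1+ i))))))

;; (compile-time-factorial 10) expands to the literal 3628800.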

                                                                                                  1. 9

A Java shop would be using a bunch of annotations, dependency injection and similar compile-time tricks with codegen. So still macros, just much less convenient to write :)

                                                                                                    1. 1

                                                                                                      the equivalent Java shop

                                                                                                      I guess that would be Languagetool. How much of a monstrosity it is is left as an exercise to the reader, mostly because it’s free software and anybody can read it.

                                                                                                    2. 7

                                                                                                      This reminds me of when Paul Graham was bragging about how ViaWeb was like 25% macros and other lispers were kind of just looking on in horror trying to imagine what a headache it must be to debug.

                                                                                                      1. 6

                                                                                                        The source code of the Viaweb editor was probably about 20-25% macros. Macros are harder to write than ordinary Lisp functions, and it’s considered to be bad style to use them when they’re not necessary. So every macro in that code is there because it has to be. What that means is that at least 20-25% of the code in this program is doing things that you can’t easily do in any other language.

                                                                                                        It’s such a bizarre argument.

                                                                                                        1. 3

                                                                                                          I find it persuasive. If a choice is made by someone who knows better, that choice probably has a good justification.

                                                                                                          1. 11

                                                                                                            It’s a terrible argument; it jumps from “it’s considered to be bad style to use [macros] when they’re not necessary” straight to “therefore they must have been necessary” without even considering “therefore the code base exhibited bad style” which is far more likely. Typical pg arrogance and misdirection.

                                                                                                            1. 3

                                                                                                              I don’t have any insight into whether the macros are necessary; it’s the last statement I take issue with. For example: Haskell has a lot of complicated machinery for working with state and such that doesn’t exist in other languages, but that doesn’t mean those other languages can’t work with state. They just do it differently.

                                                                                                              Or to pick a more concrete example, the existence of the loop macro and the fact that it’s implemented as a macro doesn’t mean other languages can’t have powerful iteration capabilities.
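
For readers who haven’t seen it, a tiny sketch of the kind of iteration loop provides (my example, not tied to anything in the thread):

;; Collect the squares of the even elements.
(loop for x in '(1 2 3 4 5)
      when (evenp x)
        collect (* x x))   ; => (4 16)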

                                                                                                              1. 1

                                                                                                                One hopes.

                                                                                                        1. 10

                                                                                                          One of the common complaints about Lisp is that there are no libraries in the ecosystem. As you see, five libraries are used just in this example for such things as encoding, compression, getting Unix time, and socket connections.

                                                                                                          Wait are they really making an argument of “we used a library for getting the current time, and also for sockets” as if that’s a good thing?

                                                                                                          1. 16

                                                                                                            Lisp is older than network sockets. Maybe it intends to outlast them? ;)

                                                                                                            More seriously, Lisp is known for high-level abstraction and is perhaps even more general than what we usually call a general purpose language. I could see any concrete domain of data sources and effects as an optional addition.

                                                                                                            In the real world, physics constants are in the standard library. In mathematics, they’re a third party package.

                                                                                                            1. 12

                                                                                                              Lisp is older than network sockets.

                                                                                                              Older than time, too.

                                                                                                              1. 1

                                                                                                                Common Lisp is not older than network sockets, so the point is moot I think.

                                                                                                                1. 1

                                                                                                                  I don’t think so. It seems to me that it was far from obvious in 1994 that Berkeley sockets would win to such an extent and not be replaced by some superior abstraction. Not to mention that the standard had been in the works for a decade at that point.

                                                                                                              2. 5

Because when the next big thing comes out it’ll be implemented as just another library, and won’t result in ecosystem upheaval. I’m looking at you, Python, Perl, and Ruby.

                                                                                                                1. 4

                                                                                                                  Why should those things be in the stdlib?

                                                                                                                  1. 4

                                                                                                                    I think that there are reasons to not have a high-level library for manipulating time (since semantics of time are Complicated, and moving it out of stdlib and into a library means you can iterate faster). But I think sockets should be in the stdlib so all your code can have a common vocabulary.

                                                                                                                    1. 5

                                                                                                                      reasons to not have a high-level library for manipulating time

                                                                                                                      I actually agree with this; it’s extraordinarily difficult to do this correctly. You only have to look to Java for an example where you have the built-in Date class (absolute pants-on-head disaster), the built-in Calendar which was meant to replace it but was still very bad, then the 3rd-party Joda library which was quite good but not perfect, followed by the built-in Instant in Java 8 which was designed by the author of Joda and fixed the final few quirks in it.

                                                                                                                      However, “a function to get the number of seconds elapsed since epoch” is not at all high-level and does not require decades of iteration to get right.

                                                                                                                      1. 7

                                                                                                                        Common Lisp has (some) date and time support in the standard library. It just doesn’t use Unix time, so if you need to interact with things that use the Unix convention, you either need to do the conversion back and forth, or just use a library which implements the Unix convention. Unix date and time format is not at all universal, and it had its own share of problems back when the last version of the Common Lisp standard was published (1994).

                                                                                                                        It’s sort of the same thing with sockets. Just like, say, C or C++, there’s no support for Berkeley sockets in the standard library. There is some history to how and why the scope of the Common Lisp standard is the way that it is (it’s worth noting that, like C or C++ and unlike Python or Go, the Common Lisp standard was really meant to support independent implementation by vendors, rather than to formalize a reference implementation) but, besides the fact that sockets were arguably out of scope, it’s only one of the many networking abstractions that platforms on which Common Lisp runs support(ed).

                                                                                                                        We could argue that in 2021 it’s probably safe to say that BSD sockets and Unix timestamps have won and they might as well get imported in the standard library. But whether that’s a good idea or not, the sockets and Unix time libraries that already exist are really good enough even without the “standard library” seal of approval – which, considering that the last version of the standard is basically older than Spice Girls, doesn’t mean much anyway. Plus who’s going to publish another version of the Common Lisp standard?

                                                                                                                        To defend the author’s wording: their remark is worth putting into its own context – Common Lisp had a pretty difficult transition from large commercial packages to free, open source implementations like SBCL. Large Lisp vendors gave you a full on CL environment that was sort of on-par with a hosted version of a Lisp machine’s environment. So you got not just the interpreter and a fancy IDE and whatever, you also got a GUI toolkit and various glue layer libraries (like, say, socket libraries :-P). FOSS versions didn’t come with all these goodies and it took a while for FOSS alternatives to come up. But that was like 20+ years ago.

                                                                                                                        1. 2

                                                                                                                          However, “a function to get the number of seconds elapsed since epoch” is not at all high-level and does not require decades of iteration to get right.

                                                                                                                          GET-UNIVERSAL-TIME is in the standard. It returns a universal time, which is the number of seconds since midnight, 1 January 1900.
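
For illustration (my own sketch, not part of the standard): since the Unix epoch (1970-01-01) falls 2208988800 seconds after the Common Lisp epoch (1900-01-01), and both counts ignore leap seconds, converting is a single subtraction.

(defun unix-time ()
  ;; Universal time counts seconds from 1900-01-01; subtract the offset
  ;; between the two epochs to get a Unix timestamp.
  (- (get-universal-time) 2208988800))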

                                                                                                                          1. 2

                                                                                                                            Any language could ignore an existing standard and introduce their own version with its own flaws and quirks, but only Common Lispers would go so far as to call the result “universal”.

                                                                                                                          2. 1

                                                                                                                            However, “a function to get the number of seconds elapsed since epoch” is not at all high-level and does not require decades of iteration to get right.

                                                                                                                            Actually, it doesn’t support leap seconds so in that case the value repeats.

                                                                                                                          3. 1

                                                                                                                            Yeah but getting the current unix time is not Complicated, it’s just a call to the OS that returns a number.

                                                                                                                            1. 6

                                                                                                                              What if you’re not running on Unix? Or indeed, on a system that has a concept of epoch? Note that the CL standard has its own epoch, unrelated (AFAIK) to OS epoch.

                                                                                                                              Bear in mind that Common Lisp as a standard, and a language, is designed to be portable by better standards than “any flavour of Unix” or “every version of Windows since XP” ;-)

                                                                                                                              1. 1

                                                                                                                                Sure, but it’s possible they were using that library elsewhere for good reasons.

                                                                                                                            2. 3

                                                                                                                              In general, I really appreciate having a single known-good library promoted to stdlib (the way golang does). Of course, there’s the danger that you standardise something broken (I am also a ruby dev, and quite a bit of the ruby stdlib was full of footguns until more-recent versions).

                                                                                                                              1. 1

                                                                                                                                Effectively that’s what happened though. The libraries for threading, sockets etc converged to de facto standards.

                                                                                                                          1. 9

                                                                                                                            If anyone feels that some of the glyphs could be improved, please let me know! This is pretty much my first time fiddling with font design and I’m pretty sure there are plenty of unpolished bits right now.

                                                                                                                            1. 4

                                                                                                                              Looks pretty cool! If you’re looking for honest feedback I’d say the lowercase t could use some love. I find it a bit jarring, probably because it’s cut off at the top.

                                                                                                                              1. 4

                                                                                                                                That’s one of the main reasons this font is called cursed. :)

                                                                                                                                1. 2

                                                                                                                                  Speaking of, why did you choose to make it cursed?

                                                                                                                                  1. 4

                                                                                                                                    I was experimenting with different glyph designs for i, j, and t with an old version of cursed that was just an upscaled version of Chicago (see the bottom section of the site). My handwriting uses a t with a cutoff stem and undotted i/j, so I tried that for a while and really liked it.

                                                                                                                                    Later on I shared it in #lobsters. Someone commented that it “looked a bit cursed”, so I decided to call the font that :)

                                                                                                                              2. 2

I think having the underscores extend all the way to the edges of the box is detrimental to programming, because they can’t be counted as easily. __main__ is obviously 2 underscores on each side in many monospace fonts (not the one I see here in lobste.rs though), and I think that’s a feature.

                                                                                                                                1. 1

                                                                                                                                  because they can’t be as easily counted.

I find it more aesthetically pleasing, personally. I didn’t notice any difficulty in reading the number of underscores either.

                                                                                                                                  I’m considering adding an optional patch, though, which changes this back to normal.

                                                                                                                              1. 3

                                                                                                                                I’m happy to see latency is not neglected in the measurements.

                                                                                                                                1. 5

                                                                                                                                  Maclisp: (list . 1)

This is wrong, isn’t it? If you have a typical LISP list: (1 . (2 . (3 . (4 . nil)))), then (list . 5) would give you ((1 . (2 . (3 . (4 . nil)))) . 5) instead of the expected (1 . (2 . (3 . (4 . (5 . nil))))).

                                                                                                                                  1. 4

                                                                                                                                    Yep; you’re right; this won’t append to the end of a list. I think the author was probably thinking of (1 . lst) which will put it on the head rather than the tail; putting it on the tail requires (setcdr (last lst) '(4)), or just (append lst '(1)).

The point the article is making here is that this is trivial compared to C and Pascal, and while it’s a slight overstatement, it doesn’t seem to invalidate the point.

                                                                                                                                    1. 1

                                                                                                                                      I think so. It should be more like a push: (1 . list)
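
To spell out the operations being discussed, a small Common Lisp sketch of my own (the thread is about Maclisp, so treat this as an approximation): a non-destructive prepend and append, plus the destructive tail append that SETCDR-style code performs.

(let ((lst (list 1 2 3 4)))
  (cons 5 lst)                       ; prepend => (5 1 2 3 4), lst unchanged
  (append lst (list 5))              ; append  => (1 2 3 4 5), lst unchanged
  (setf (cdr (last lst)) (list 5))   ; destructive append onto the last cons
  lst)                               ; => (1 2 3 4 5)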