Threads for mk12


    @hwayne What are your top recommended resources or practices for quickly learning Excel?

    1. 9

      You Suck at Excel by Joel Spolsky is polarizing, but I thought it was hilarious.


        Wow, I wish I had watched this a long time ago. I’ve done my taxes in Google Sheets for 3 years and using named ranges would have made it so much easier. Some of the things he claims don’t work in Sheets seem to have been improved in the 7 years since the talk, but unfortunately looks like there’s still no proper table support.

      2. 7

        Microsoft has official tutorial sheets. I’d recommend going through the one on formulas, then walking through the ribbon and the list of formulas, and then looking at any buttons or formulas that seem interesting. That’s what I did!

      1. 5

        Looks interesting. This is exactly how Lektor works: a Python static site generator, but with a very minimal admin interface. I have been using it since 2016 and it is just great.

        1. 6

          Yes, I also came across Lektor in my search. If their Mac desktop app had been working I would have tried it. But it’s not supported right now, so I assume you have to start a local server in a terminal in order to access the admin panel. I wanted something easy to use for people who aren’t comfortable with a CLI. That’s good you’re enjoying it though, maybe I’ll give it another look.

          1. 1

            Oh yeah, I forgot about the Mac desktop client. I completely missed the fact that this is a desktop client. Though I like CLIs, I can see the appeal, having used Microsoft’s LiveWriter (now OpenLiveWriter) in the past.

        1. 17

          I like the idea, but this part really bothers me:

          you can craft a stylish modern site that will run faster than greased lightning even on mobile thanks to Google AMP technology

          How about saying no to Google’s attempts to monopolize the web?

          1. 6

            For what it’s worth, AMP support is off by default. The switch to enable it has this warning:

            AMP (accelerated mobile pages) creates mobile-optimized pages for your static content that render fast.

            Please note: when this option is enabled your website will load third-party scripts provided by Google’s AMP CDN.

            1. 16

              That first paragraph is still misleading, though. You know what else creates mobile-optimized pages that render fast? Publii with AMP turned off.

              Google’s AMP project is not benign. It sells itself as ‘faster pages’, but you don’t need AMP-the-technology for that; and it does all sorts of things that are tenuously connected to speed but clearly increase Google’s power over the web and its users. Especially: when a user goes from Google dot com to an AMP page, the site never sees that traffic because Google serves the page itself (all the better to track you with, my dear).

              So Google’s purpose for AMP is pretty clear, and dmbaturin makes a good point that it should not be voluntarily included in any project that cares about the web.

              1. 2

                I largely agree, but this is just a technicality. I wouldn’t say the Publii project cares about the web; they care about their users. And their users care about their own visibility. And someone told them they need AMP. So they got AMP.

                …or something close to that. Again, not saying that this is proper, just that this is how it probably goes down, for a lot of software projects.

            2. 1

              Why stop at Google AMP? I admire that Sourcehut Pages’ stated limitations include:

              Connections from CloudFlare’s reverse proxy are dropped. Do not help one private company expand its control over all internet traffic.

              1. 1

                What does this mean in practice? Can I be blocked from viewing content hosted on SH pages in some contexts?

                1. 2

                  No, the primary goal (I’m assuming) is to prevent people following ‘guru’ advice and putting Cloudflare as a caching proxy in front of their site. If Pages isn’t already a distributed CDN for static content, I’m sure the goal would be to at least have a couple of mirrors distributed globally if that level of performance is needed in the future, which makes Cloudflare redundant as well. Using Cloudflare in any capacity at this point centralizes the internet, because of how many people use it both because it’s ‘free’ and because SEO’d advice always recommends it. We saw Cloudflare go down not long ago, and a massive portion of the clearnet went down with it. Cloudflare also tends to throw up hCAPTCHAs for users on the Tor network, using VPNs, or just trying to use WiFi in a non-Western country, putting an unnecessary burden on users seeking privacy.

            1. 12

              Oh, we’re finally bringing back FrontPage and iWeb?

              1. 5

                Eh, almost. As far as I can tell, this imposes some file/directory structure constraints and has limited HTML, template and theme editing features. So I’d say we’re bringing half of WorldWideWeb back for now :-). It took us about five years to get from that to FrontPage, so adjusting for modern software boilerplate and maintenance requirements I’d say give it another… ten years or so :-).

                On the bright side the HTML code that Publii produces looks considerably less atrocious than anything FrontPage ever did so maybe it’s worth waiting these ten years or so!

                1. 5

                  Yeah I get that it feels full circle but I think this is a bit different. I’ve never used FrontPage but I remember iWeb feeling more focused on WYSIWYG web design. Publii feels more like a CMS with all the features you’d expect for a blog: posts, authors, tags, categories, excerpts, feeds, etc. The default theme looks nice, works on mobile, supports dark mode, and provides the exact right level of configurability for my use case (change colors, heading image, date format, pagination, etc.) without having to touch code.

                1. 12

                  I’m setting up a blog for my father and I wanted something simple and free. I don’t like WordPress: it’s too complicated, the free .com plan has ads, it has a history of security issues, and I’d rather something static that I can deploy on GitHub Pages or Netlify. If it were for me, I’d use an SSG like Zola since I’m comfortable editing Markdown by hand, using git, etc. But for my father I wanted a WYSIWYG editor where you can just drag and drop images and not worry about the filesystem. I tried Netlify CMS but gave up after wasting hours on it. I kept getting errors about git-gateway despite trying everything in this thread.

                  Then I found Publii and I was blown away by it. Does exactly what I want, and it’s free and GPL-licensed!

                  1. 1

                    My wife has been using Publii+Netlify for several sites, but now has complaints - it’s too rigid. I’ll have to start teaching her markdown and SSGs, I think :)

                    Personally I think Publii is almost good. I would use it myself if it didn’t have its internal database and had a simple way of versioning and forking (not the software itself, but the sites you create with it).

                  1. 3

                    I had never seen Wisp before. Maybe it takes some getting used to, but it seems confusing to me, especially the double colons and having to remember to use a period before anything you don’t want wrapped in parens. It makes a bit more sense after reading the presentation Why Wisp?.

                    1. 23

                      Tabs for indentation, spaces for alignment.

                      1. 7

                        Exactly. Variable-width characters at the start of a line are great. Variable-width characters in the middle of a line are annoying because they won’t line up with other fixed-width things. Recent versions of clang-format now support this style, so there’s no reason to use spaces anymore.

                        1. 4

                          I have to suffer through clang-format at work, and I can tell you it’s pretty bad. The worst aspect so far is that it does not let me choose where to put line breaks. It’s not enough to stay below the limit; we have to avoid unnecessary line breaks (where “unnecessary” is defined by clang-format).

                          Now to enforce line lengths, clang-format has to assume a tab width. At my workplace it assumes 4 columns.

                          Our coding style (Linux) explicitly assumes 8.

                          1. 2

                            You can tell clang-format what width to assume for tabs. If people choose to use a wider tabstop value, then it’s up to them to ensure that their editor window is wider. That remains their personal choice.
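
                            For reference, the relevant knobs do exist; a minimal .clang-format sketch for a Linux-style setup might look like this (the option names are real clang-format options, the values are just this example’s assumptions):

```yaml
# Use tabs for indentation only; alignment within a line uses spaces.
UseTab: ForIndentation
# Width clang-format assumes for a tab when checking the column limit.
TabWidth: 8
IndentWidth: 8
ColumnLimit: 80
```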

                            1. 1

                              I’ve found out that clang-format respects line comments:

                              void f( //
                                  void *aPtr, size_t aLen, //
                                  void *bPtr, size_t bLen //
                              );
                          2. 7

                            I think when people say this they imagine tabs will only occur at the start of the line. But what about code samples in comments? This is common for giving examples of how to use a function, or for doc-tests. It’s much harder to maintain tab discipline there because your formatter would have to parse Markdown (or whatever markup you use) to know whether to use tabs in the comment. And depending on the number of leading comment characters, the indentation can look strange due to rounding to the next tabstop. The same goes for commented-out sections of code.

                            1. 3

                              Go uses tabs for indentation and spaces for alignment. It works pretty well in practice. I can’t say that I’ve ever noticed anything misaligned because of it.

                              1. 4

                                If you wrote some example code in a // comment, would you indent with spaces or tabs? If tabs, would you write //<space><tab> since the rest of the comment has a single space, or just //<tab>? gofmt can’t fix it for you, so in a large Go codebase I expect you’ll end up with a mix of both styles. With spaces for indentation it’s a lot easier to be consistent: the tab character must not appear in source files at all.

                                1. 1

                                  I can’t say that I’ve ever written code in a comment, because I just write a ton of ExampleFunctions, which Go automatically adds to the documentation and tests. Those are just normal functions in a test file. I think what’s interesting about Go is that they don’t add all the features but the ones they do add tend to reinforce each other, like go fmt, go doc, and go test.
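
                                  A minimal sketch of what such an Example function looks like (the function Abs and its values are made up for illustration; in real code the Example lives in a _test.go file alongside the package’s tests):

```go
package main

import "fmt"

// Abs is a stand-in for a documented function under test.
func Abs(x int) int {
	if x < 0 {
		return -x
	}
	return x
}

// ExampleAbs: `go test` runs functions named Example* and compares
// their stdout against the "Output:" comment, and `go doc` renders
// them next to the documentation for Abs.
func ExampleAbs() {
	fmt.Println(Abs(-3))
	// Output: 3
}

func main() {
	ExampleAbs()
}
```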

                                2. 3

                                  Personally, I think it would have annoyed me if go fmt didn’t exist. Aligning code with spaces is annoying, and remembering to switch between them even more so.

                                  1. 1

                                    Yes, it’s only practical if a machine does it automatically.

                              2. 1

                                I said this elsewhere in the thread, but it’s worth reiterating here: I’d be with you 100% if it weren’t for Lisp, which simply can’t be idiomatically indented with tabs (short of elastic tabs) because it doesn’t align indentation with any regular tab-stops.

                              1. 16

                                I find the minified font for the logo to be a little overkill, or is it just me?

                                Wouldn’t it be more efficient to just use a traced (= convert text to path in Inkscape) SVG as a logo? If you do this, you can also manually adjust the kerning to make a really neat logo.

                                1. 3

                                  Having it be properly textual is kinda necessary for non-graphical users, although I guess that’s what the alt attribute is for.

                                  1. 4

                                    Making the text part of a logo actual text also makes it automatically adjust its color when the user switches from light mode to dark. I agree it’s not a worthwhile endeavor for every website, but for some it may be.

                                    1. 7

                                      You can get that with SVG too, using fill="currentColor".
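
                                      For instance, a minimal inline SVG sketch (the path data here is just a placeholder rectangle, not a real logo):

```html
<!-- fill="currentColor" makes the shape inherit the CSS color of its
     parent, so it follows a light/dark theme switch automatically. -->
<svg viewBox="0 0 100 20" width="100" height="20" role="img" aria-label="Example logo">
  <path fill="currentColor" d="M0 0h100v20H0z"/>
</svg>
```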

                                  2. 2

                                    I didn’t actually think about that, but that sounds like a good option to me! Maybe I’ll explore that at some point in the future.

                                  1. 2

                                    I used to be an 80-column purist but I don’t care about it so much anymore. At work we mostly limit code to 100 columns and wrap comments at 80 columns. For some languages the formatter enforces the 100-column limit (clang-format), but in others it’s up to the programmer (gofmt). I’ve grown to prefer the gofmt style of formatting, in part for reasons described in the recent clang-format discussion. VS Code makes it easy to have vertical rulers at 80 and 100 so I can keep an eye on it. But there are always exceptions, e.g. I think it’s silly to hard wrap a URL in a comment since then you can’t click it anymore.

                                    I admit I’m not very rational about these things though. The line length for comments (apart from URLs etc.) should be uncontroversial since it’s easy to have your editor reflow text. But I will absolutely waste time rewording a comment or a git commit message to avoid orphans when reflowing. I wish I could stop doing this.

                                    1. 2

                                      Another problem no one’s mentioned yet is performance. At work we use formatters on large generated files, and we’re having a lot of trouble with clang-format. On my MacBook Pro, it can format 1 MB/s, but the max RSS also scales linearly at about 150x the file size, so formatting a 1 MB file takes 150 MB of memory. The clang++ parser deals with these files without that memory blowup. My teammate @ianloic is trying to optimize some of clang-format’s data structures.

                                      (Why format generated code? Because people can jump-to-definition and read it. Why generate huge files? We’re also working on splitting or shrinking, but these sizes aren’t unusual compared to e.g. protobuf, thrift.)

                                      1. 3

                                        Rustfmt isn’t good on generated code either. The performance is in the same ballpark as what you quoted for clang-format: 2.8 MB/s. For formatting generated code I made a formatter based on a simpler algorithm, which does 60 MB/s and fixes other shortcomings of rustfmt that tend to occur in generated code.

                                        The same approach may be adaptable to C++, but I admit I’m not sure how it would accommodate preprocessor macros. Rust’s macros are much easier to format in comparison because the syntactic positions that they can be invoked in are strictly limited.

                                        1. 2

                                          I hope to have some simple patches that can land easily and some others that might take a little more convincing. The peak memory reduction is only about 20% though, IIRC.

                                        1. 4

                                          aspects of code formatting that contribute to write amplification – how big a change becomes in the resulting diff

                                          I love code formatters, but this problem is my pet peeve. When I choose a coding standard for a new project, my top priority is choosing a flavor that avoids write amplification. A standardized format is great, but not if it interferes with code review. I’ve encountered too many bugs hiding inside a diff hunk that was 90% auto-formatting noise. (I’ve had no luck with diffs that hide white space changes.)

                                          in some ways better than any autoformatter could ever come up with, because the human knows best

                                          I agree that a conscientious developer can format code better than a machine, at the margin. However, I’ve not found the marginal improvement to be worth the marginal cost in developer time/attention. A machine can format code 90% as well as a human in 1% of the time. Something like reformatter for emacs automatically reformats the buffer when I save it, so I spend almost no time on layout at all. I enjoy the way it lets me stay in flow.

                                          It’s also the case that not all programmers are human. Maybe it’s a fancy refactoring tool or maybe it’s a lazy Perl script to munge away a recently discovered anti-pattern. Either way, they’re terrible at formatting. Having a human reformat this code manually gets expensive. Of course, non-conscientious developers who format at random also border on non-human :-)

                                          1. 3

                                            I’ve encountered too many bugs hiding inside a diff hunk that was 90% auto-formatting noise. (I’ve had no luck with diffs that hide white space changes.)

                                            Have you tried difftastic? Seems like it’s designed to address exactly this problem.

                                            1. 1

                                              A standardized format is great, but not if it interferes with code review. I’ve encountered too many bugs hiding inside a diff hunk that was 90% auto-formatting noise.

                                              I think the real mistake here is that it sounds like you’re reviewing diffs? That’s always the wrong move – you want to review the resulting file, not the minimal diff going into it.

                                              All a diff can tell you is that someone inserted a new method into a file. If you just review the diff you might walk away thinking it’s a well-written change. If you review the resulting file after applying the diff, you might catch the fact that the added method is duplicative of other methods, that this has become a real issue in the file, and recommend the submitter abstract out the common logic and leave things better than they found them rather than trying to do the bare minimum and making the underlying mess worse.

                                              Etc. The diff just does not contain enough information to do a proper code review. Worrying about whitespace in the diff is getting hung up on the wrong details.

                                              1. 3

                                                A typical LLVM PR changes a hundred lines of code in 3-4 files, each of which is thousands of lines long. Telling people that they should review the entire file when reviewing a PR rather than the changes is the same as telling them that they should not bother doing code review: you’re advocating for something that is completely infeasible.

                                                1. 1

                                                  A typical LLVM PR changes a hundred lines of code in 3-4 files, each of which is thousands of lines long. Telling people that they should review the entire file when reviewing a PR rather than the changes is the same as telling them that they should not bother doing code review: you’re advocating for something that is completely infeasible.

                                                  a) Most things aren’t LLVM (indeed, nothing in the person I was replying to’s bio or GitHub seems to indicate that they’re an LLVM dev – and believe it or not, I do not frame everything I write always in terms of you personally and the unique challenges you face, stranger). “This doesn’t work in the most extreme case, so it’s not a good idea in any case” is just letting the perfect be the enemy of the good.

                                                  b) In most projects, the solution to “I can’t review 4 ten thousand line-long files” is “don’t let your files become 10s of thousands of lines long in the first place”. That’s really a gigantic, phenomenally industrial strength C++ codebase problem, which isn’t most codebases. It shouldn’t be surprising that LLVM is a pretty extreme outlier with fairly atypical challenges! We’ve got linters that scream bloody murder when files hit 500 lines, and something in the 200-1000 line range is typical. “That this doesn’t work in the extremes means this is always a bad idea, to me” is, again, just letting the perfect be the enemy of the good.

                                                  c) There is such a thing as using your head. Don’t review just the diff, because it does not have enough context; also don’t exhaustively review 14,000 lines that aren’t changing in the 30 line diff. Skim it. Get the gist of the file or at least the things that are near it and which it touches. Try your best to find a reasonable balance. Code reviews are only as valuable as the effort you put into them.

                                              2. 0

                                                When I choose a coding standard for a new project, my top priority is choosing a flavor that avoids write amplification.

                                                I guess the argument is that “the coding standard” in the code formatting sense shouldn’t be something that’s different from project to project, it should be (more or less) a universal property of the language.

                                                1. 1

                                                  Then the “formatting” should be enforced at the compiler level. Why leave it up to another tool?

                                                  1. 1

                                                    I agree!

                                              1. 2

                                                I’ve been using Racket as an R6RS target. It seems like a really cool language, and I find the “language-oriented programming” idea from Beautiful Racket intriguing. But the installation process and tooling is a buzzkill for me. On macOS, there are two main options:

                                                1. brew install --cask racket. This pulls in the whole kitchen sink, littering my Applications directory with a host of apps I’m never going to use, including DrRacket but also launchers for documentation, slideshows (?). It reminds me (in a bad way) of installing MacTeX. DrRacket feels like an educational toy, like the Processing/Arduino IDE: not something I’d use for a large project. For all other languages I’m happily living in the VS Code era with high quality LSP-based extensions.

                                                2. brew install minimal-racket. Just the barebones CLI, this is what I wanted! Oh, but it’s really minimal, so I need to raco pkg install r6rs. It prompts me for every single dependency unless I pass --auto … strange. And why on earth is it downloading scribble and dozens of other things? Oh, maybe I need --no-docs? Nope, it still wants to download them all. Turns out I want r6rs-lib. Then to compile a project I need to use raco make, but that subcommand doesn’t exist until I raco pkg install make. This time I can’t figure out a way to stop it from smuggling in all these GUI packages I clearly don’t need. So I wait for several minutes on my M1 Pro machine until it finishes. The CLI for racket/raco reminds me (in a bad way) of autotools or cpan.

                                                Maybe I’m just spoiled by Go/Rust/etc. but polishing the tooling would go a long way for me.

                                                1. 6
                                                  1. I use DrRacket for professional development on large projects. Of course, you can do it differently but it’s widely used in the Racket community.
                                                  2. You can press a to answer yes to all subsequent installation prompts.
                                                  3. raco pkg install r6rs installs documentation because the r6rs package depends on the r6rs-doc package. The --no-docs flag doesn’t build the documentation, but does install it.
                                                  4. The raco make command is in compiler-lib, not in the make package (which is a build system).

                                                  In general, the things you have run into are not about the tooling per se, but about what is in what package, and the fact that Racket provides extensive local documentation for everything.
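
                                                  For anyone else going the minimal-racket route, the points above translate to roughly this workflow (an untested sketch; the package names are as given in this comment):

```shell
# --auto answers yes to dependency prompts automatically.
raco pkg install --auto compiler-lib   # provides the `raco make` subcommand
raco pkg install --auto r6rs-lib       # R6RS libraries without the separate docs
raco make main.rkt                     # byte-compile main.rkt into compiled/
```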

                                                  1. 3

                                                    Thanks, that’s good to know about compiler-lib. Is there some way I could have discovered that on my own? It would be nice if it were mentioned in the documentation.

                                                    1. 2

                                                      I agree, I think including that on those documentation pages would be a good idea.

                                                  2. 3

                                                    The official installers for macOS are on the Racket download page.

                                                    ad 1. The official installer gives you a single folder in /Applications/Racket version-number/, so with the official installer there are no problems with littering Applications.

                                                    ad 2. The --no-docs situation is a bit tricky. Some packages have separate collections for the main library and for the documentation. In that case --no-docs works as expected. However, some packages use only one collection for both. That is, there is no way to install such a package without automatically installing its documentation. If you see this, the best thing to do is to gently ask the developer to split the package in two.

                                                    I recommend getting the full official installer, unless we are talking about deployment on, say, a web server.

                                                    1. 2

                                                      Homebrew just wraps the official installers. It puts them in that folder, but in Launchpad they’re all spread out and I have to hide them. The whole “install many little GUI apps for various tasks” approach gives me vibes of old software not tailored to macOS. That’s what pushed me to minimal-racket, the same way I prefer brew install python rather than running the official .pkg, which assumes I want IDLE etc.

                                                      I see, thanks for explaining about --no-docs. I guess it’s nice if Racket encourages thoughtful documentation (unlike stereotypical Haskell for instance). However, as a newcomer, having raco pkg install take forever because of documentation feels like a strange problem that I’ve never encountered in any other system. If it’s typical for documentation to need so much machinery to build, it seems there ought to be a separate set of “doc dependencies” similar to how most package managers have separate “dev dependencies”.

                                                      1. 2

                                                        I’ll be honest - I never use Launchpad, so I have no idea what the “standard practice” is here. Bring it up and see whether the community agrees.

                                                    2. 3

                                                      I don’t know who does the brew installers. AFAIK they are not part of the release testing process.

                                                      I never have any trouble with either the official installers or building from source on macOS or Linux. They are clean and easy to uninstall, putting everything into a single folder.

                                                      If you are working through SICP then I’d suggest you use #lang sicp - you do need to install it with raco pkg install sicp, but it provides a variant of R5RS that matches SICP. (It also includes the picture language from SICP.)

                                                      FWIW R5RS and R6RS are included when you use the full Racket installer (not minimal), and a community member has also released R7RS. I don’t use any of the Scheme implementations as I’m more interested in the Racket language and Typed Racket.

                                                      If you don’t want to use DrRacket you don’t have to; you can use VS Code. Try the ‘Magic Racket’ VS Code extension.

                                                      My personal opinion is that while DrRacket has capabilities other environments don’t have, if you are learning you will probably be happier using an editor you already know.

                                                      If you want the official ‘minimal’ install, that is available too.

                                                      • The minimal installers include “just enough of Racket that you can use raco pkg to install more”, but my recommendation would be to use the full official installer until you are more familiar.

                                                      It is big but you get a lot. Like MacTeX, I think it is worth it. 😀

                                                      Racket isn’t Go or Rust. Judge it for what it is, not what it isn’t.

                                                      If you are new to Racket and have questions or need a hand getting started: Discourse and Discord are the most active places.

                                                      We welcome questions 😃

                                                      1. 1

                                                        I appreciate your enthusiasm, and I apologize for being so ranty (wasn’t really thinking from the perspective of a Racket contributor). I think the bar for CLI tooling quality has gone up quite a bit in recent years. Maybe some of it is just changing fads, but there is definitely a trend towards more friendly, consistent, predictable, easy to use interfaces. These first impressions are important the same way having a nice looking website is. My gripe is mainly with raco. Compared to all the other package managers I’m used to — cargo, go, pip, gem, npm, yarn, elm — I found it disappointing. Why does raco pkg install make download 100+ packages, most of which seem to be for documentation or GUI, despite --no-docs? And why is the default to prompt y/n ~100 times?

                                                        Racket may not be Go or Rust, but as general-purpose programming languages I think it’s fair to judge them against one another. And Racket isn’t unique here. I think Haskell’s package management is far worse, even with stack. And Clojure’s tooling and error messages leave a lot to be desired. All three are great languages but these things hold them back, at least for me. I want a language to be cool on paper and have all the boring stuff (installing packages, running tests, …) work delightfully. I know that’s a lot to ask 🙂

                                                        As for #lang sicp, I’m trying to target as many R6RS compliant Schemes as possible, so Racket is just one of them. I chose R6RS because I was really impressed with Chez Scheme’s speed and wanted to primarily use it. Of course, Racket is now Racket CS, so there’s less variety than I had before. I also support Guile.

                                                        1. 1

                                                          Racket may not be Go or Rust, but as general-purpose programming languages I think it’s fair to judge them against one another.

                                                          Sorry forgot to respond to this.

I think a fair comparison is to use them as intended. This varies from language to language, and with Racket I think it is fair to say you are intended to use the full (official) install.

                                                          1. 1

Why does raco pkg install make download 100+ packages, most of which seem to be for documentation or GUI, despite --no-docs? And why is the default to prompt y/n ~100 times?

                                                            Sorry, I don’t know what dependencies #lang r6rs has so I’d suggest asking on the discourse server.

                                                            As for #lang sicp, I’m trying to target as many R6RS compliant Schemes as possible, so Racket is just one of them. I chose R6RS because I was really impressed with Chez Scheme’s speed

When you are using #lang r6rs you are using neither Racket nor Chez. Racket’s r6rs is built on top of Racket, which is in turn built on top of Chez.

I think you can access the Chez implementation from the command line by identifying the binary, but if you want to support Chez I’d use the definitive version, as I believe Racket uses a fork. Chez is a superset of R6RS.

Racket the language isn’t an R6RS implementation. I think they had to change the name (11 years ago?) because calling it a Scheme had already become a problem by that point.

                                                            That said - thank you for making and sharing your R6RS SICP work.

                                                            Will you eventually extend to include Common Lisp?

I suppose the recently released SICP JavaScript edition opens the door to a similar exercise in other languages: C++, Java. I wonder how an SICP in Pharo, OCaml, Forth, or Erlang would look.

                                                            Bw Stephen

                                                            (Edit - added SICP to ‘sharing your R6RS work’ to clarify meaning)

                                                            1. 2

                                                              You’re right, I’m not really using Racket the language. I wanted to avoid tying myself to a particular Scheme implementation by targeting one of the RNRS standards. And the best way to ensure I’m really sticking to the standard is to test with multiple independent implementations. Also I wanted to be able to use modules portably, so R5RS was out, and I couldn’t find enough cross-platform R7RS implementations. So R6RS it was.

                                                              I think I’m operating in the spirit of Racket’s language-oriented programming though, in that my solutions are not written directly in R6RS but in a custom language that I implemented with macros.

                                                      1. 13

                                                        I’m quite skeptical of the real world value of 24bit color in a terminal at all, but the biggest problem I have with most terminal colors is they don’t know what the background is. So they really must be fully user configurable - not just turn on/off, but also select your own foreground/background pairs - and this is easier to do with a more limited palette anyway.

                                                        I kinda wish that instead of terminal emulators going down the 24 bit path, they instead actually defined some kind of more generic yet standardized semantic palette entries for applications to use and for users to configure once and done to get across all applications.


                                                        1. 4

                                                          I’m quite skeptical of the real world value of 24bit color in a terminal at all

                                                          I have similar misgivings, but I admit to liking the result of 24-bit colour. It’s useful! I just don’t like how it gets there.

                                                          Something that is a never-ending source of problems with the addition of terminal colours in the output of utilities these days is that in almost every case they are optimized for dark mode. I don’t use, nor can I stand, dark mode. It is horrible to read. But as a result, the colour output from the tools is unreadable. bat is the most recent one I tried. I ran it on a small C file and I literally couldn’t read most of the output.

                                                          Yes, you can configure them but when they are useless out-of-the-box, the incentive is very low to want to configure everything. And then, I could just… not configure them and use the standard ones that are still just fine.

                                                          Terminal colours are really useful. I find 24-bit colour Emacs in a terminal pretty nice. It’s the exception. Most other modern terminal tools that produce colour output don’t work for me because they can’t take into account my current setup.

Having standard colour palettes that the tools could access would be much better.

                                                          1. 4

I’ve started polling my small sample size of students and they almost unanimously prefer dark mode. I suspect this is most people’s preference, which is why it’s the default of most tools.

                                                            Personally I prefer dark because I have a lot of floaters in my eyes that are distracting with light backgrounds. For many years I had to change the defaults to dark.

                                                            That said, I like to be able to toggle back and forth between light and dark. When I’m outside in the sun, or using a projector, light mode is critical. This is made difficult by every tool using their own color palette rather than the terminal’s. Some tools can be configured to do so, and maybe that should be their default.

                                                            1. 5

                                                              I suspect this is most people’s preferred which is why it’s the default of most tools.

                                                              Back when I was in undergrad (~25 years ago), light mode was what everyone used. Then again, it was always on a CRT monitor and was the default for xterms everywhere. If you got a dark theme happening, it attracted some attention because you knew what you were doing. People did it to show off a bit. (I did it too!)

Then I got older and found dark backgrounds remarkably difficult to read from. I haven’t used them for well over 15 years. I simply cannot read comfortably on such colour schemes, which is why I have to use reader view or the zap colours bookmarklet all the time.

                                                              I’m not saying dark mode is bad, but I am saying it’s probably trendy. I suspect things will swing in a different direction eventually, especially as the eyes of those who love it now get older. (They inevitably get worse! Be ready for it.) So the default will likely change. In which case, maybe we should really consider not hard-baking colour schemes into tools and move the colour schemes to somewhere else, as you mention. This is the better way to go. As I mention elsewhere in the thread, configuring bat, rg, exa, and all these modern tools individually is just obnoxious. Factor the colour schemes out of the tools somehow. It’s a better solution in the long run.

                                                              1. 1

                                                                I too find light displays easier to read.

                                                                From memory, the first time I heard of TCO-approved screens was when Fujitsu(?) introduced a CRT screen with high resolution, a white screen, and crisp black text. This was considered more legible and more ergonomic.

                                                                (TCO is Tjänstemännens Centralorganisation, the main coordinating body of Swedish white-collar unions. Ensuring a good working environment for their members is a core mission.)

                                                                1. 2

                                                                  What I find helps the most is reducing the blue light levels - stuff like f.lux works well.

                                                                  I’m also looking into e-ink monitors, but damn, they’re pricey.

                                                            2. 3

Yeah, I’m a fan of light mode (specifically white backgrounds) on screen most of the time too, and I actually found colors so bad that it’s a big reason why I wrote my own terminal emulator. Just changing the palette by itself wasn’t enough; I wanted it to adjust based on the dynamic background too. Say an application tries to print blue on black: my terminal will choose a different “blue” than if it were blue on white. Having the terminal emulator itself do this means it applies to all applications without reconfiguration, it applies if I do screen -d -r from a white screen to a black screen (since the emulator knows the background, unlike the applications!), and it applies even if the application specifically printed blue on black, since that just drives me nuts and I see no need to respect an application that doesn’t respect my eyes.

A little thing, but it has brought me great joy. Even stock ls would print a green I found so hard to read on white. And now my thing adjusts green and yellow on white too!

                                                              Whenever I see someone else advertising their new terminal emulator, I don’t look for yet another GPU render. I look to see what they did with colors and scrollback controls.

                                                              1. 2

                                                                I got fed up with this and decided to do something about it, so after what felt like endless fiddling and colorspace conversions, I have a color scheme that pretty much succeeds at making everything legible, in both light and dark mode. It achieves this by

                                                                • Deriving color values from the L*C*h* color space to maximize the human-perceived color difference.
                                                                • Assigning tuned color values as a function of logical color (0-15), whether it’s used for foreground or background, and whether it’s for dark or light mode.
                                                                • Assigning the default fg/bg colors explicitly as a 17th logical color, distinguished from the 16 colors assignable by escape sequences.

                                                                As a result, I can even read black-on-black and white-on-white text with some difficulty.
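For anyone curious about the first bullet, deriving sRGB values from L*C*h* coordinates can be sketched roughly like this. This is a generic illustration (standard D65 white point and sRGB matrix constants), not the parent comment’s actual scheme:

```python
import math

# Sketch: pick a palette entry by perceptual lightness (L), chroma (C),
# and hue angle (h), then convert LCh -> Lab -> XYZ -> sRGB.

def lch_to_srgb(L, C, h_deg):
    # LCh -> Lab: chroma and hue are polar coordinates over a*, b*
    h = math.radians(h_deg)
    a, b = C * math.cos(h), C * math.sin(h)
    # Lab -> XYZ, relative to the D65 reference white
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    def f_inv(t):
        return t**3 if t > 6/29 else 3 * (6/29)**2 * (t - 4/29)
    X, Y, Z = 0.95047 * f_inv(fx), 1.0 * f_inv(fy), 1.08883 * f_inv(fz)
    # XYZ -> linear sRGB
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b2 = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    # Clamp to gamut, then gamma-encode
    def encode(c):
        c = min(max(c, 0.0), 1.0)
        return 12.92 * c if c <= 0.0031308 else 1.055 * c**(1/2.4) - 0.055
    return tuple(encode(c) for c in (r, g, b2))

def to_hex(rgb):
    return "#" + "".join(f"{round(255 * c):02x}" for c in rgb)
```

With something like this you can sweep hue at fixed L and C to get palette entries with equal perceived lightness, e.g. `to_hex(lch_to_srgb(50, 40, 30))` for a mid-lightness red-ish.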

                                                                Here it is:

                                                                1. 2

                                                                  I had the same problem with bat so I contributed 8-bit color schemes for it: ansi, base16, and base16-256. The ansi one is limited to the 8 basic ANSI colors (well really 6, since it uses the default foreground instead of black/white so that it works on dark and light terminals), while the base16 ones follow the base16 palette.

                                                                  Put export BAT_THEME=ansi in your .profile and bat should look okay in any terminal theme.

                                                                  1. 2

                                                                    As I said, I could set the theme, but my point was that I don’t want to be setting themes for all these things. That’s maintenance work I don’t need.

                                                                    1. 1

                                                                      I definitely agree that defaulting to 24 bit colour is a terrible choice for command line tools, but when it’s a single environment variable to fix, I do think some (bat) are worth the minor, one-off inconvenience.

                                                                2. 3

                                                                  I agree 100%. I think the closest thing we have to a standardized semantic palette is the base16 palette. It’s a bit confusing because it’s designed for GUI software too, not just terminals, so there are two levels of indirection, e.g. base16 0x8 = ANSI 1 = red-ish. It works great for the first eight ANSI colors:

                                                                  base16  ANSI  meaning
                                                                  ======  ====  ==========
                                                                  0x0     0     background
                                                                  0x8     1     red-ish/error
                                                                  0xb     2     green-ish/success
                                                                  0xa     3     yellow-ish
                                                                  0xd     4     blue-ish
                                                                  0xe     5     violet-ish
                                                                  0xc     6     cyan-ish
                                                                  0x5     7     foreground

                                                                  The other 8 colors are mostly monochrome shades. You need these for lighter text (e.g. comments), background highlights (e.g. selections), and other things. The regular base16 themes place these in ANSI slots 8-15, which are supposed to be the bright colors, which breaks programs that assume those slots have the bright colors.

The base16-256 variants copy slots 1-6 into 9-14 (i.e. bright colors look the same as non-bright, which is at least readable), and then put the other base16 colors into 16-21. It recommends doing this maneuver with base16-shell, which IMO defeats the purpose of base16. base16-shell is just a hack to get around the fact that most terminal emulators don’t let you configure all the palette slots directly; kitty does, so I use my own base16-kitty theme to do that, and use base16-256 for vim, bat, fish, etc. without base16-shell.
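One low-tech way to see what a terminal actually puts in those slots (a generic sketch, not part of base16) is to print each of the 16 basic slots via the standard SGR escape sequences and eyeball the result:

```python
# SGR codes 30-37 select the 8 normal foreground colors,
# 90-97 the 8 "bright" ones; \x1b[0m resets attributes.
def swatch_line(slot):
    code = 30 + slot if slot < 8 else 90 + (slot - 8)
    return f"\x1b[{code}mslot {slot:2d}\x1b[0m"

def palette_report():
    return "\n".join(swatch_line(s) for s in range(16))

if __name__ == "__main__":
    print(palette_report())
```

Running it in a terminal with a base16-256 theme should show slots 9-14 matching 1-6, which is an easy way to verify the remapping took effect.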

                                                                1. 1

I have a fzf setup in fish that started out as a function and then moved into a plugin. It mostly works with a single shortcut that aims to “do what I mean”.

                                                                  • Ctrl-O: fzf files
                                                                    • Already started typing a path? Start from there.
                                                                    • Extra stuff at end of that path? Start with that query.
                                                                    • Has keybindings to switch files/directories, navigate up/down directories, show hidden files, etc.
                                                                    • Previews files with bat and directories with exa, or falls back to cat and ls.
                                                                  • After selection:
                                                                    • Command line not empty, or more than 1 selected? Insert on command line.
                                                                    • Selected 1 file? Open with $EDITOR.
                                                                    • Selected 1 directory? cd to it.
                                                                  1. 2

                                                                    Ha, just yesterday I was checking the website to see if there was anything new, then shortly after saw this :)

                                                                    At first it seemed strange that case_False and case_True are just global names that could be used for other things. Am I understanding right that “stack identifier deshadowing” is what allows you to consider these private in some sense to _False and _True? Also, that reminds me a lot of alpha-renaming in the lambda calculus. Are these essentially the same thing?

                                                                    1. 2

                                                                      The only time a stack identifier can be considered private is when a term doesn’t touch anything that was already on that stack and does not leave any values on the stack. For example, swap can be considered to have two private stacks, because it pushes on to them and then pops off of them exactly what it pushed on.

This becomes clearer when you start writing out the types for terms. The stack-polymorphic type for (s'|(s|swap)) would be something like ∀a b c . <s|a b c -> a c b>. Note that despite pushing and popping off of s and s', neither of those stack identifiers appears in swap’s type. So they can be considered private.

                                                                      Stack identifier deshadowing might appear to be equivalent to alpha-renaming in the lambda calculus, and there are certainly similarities, but it’s quite different for the following reason: given any valid program, as a user, you can trivially hoist out new terms or inline existing terms without affecting the semantics and without needing to rename any stacks. This is what makes this calculus considerably easier to refactor than the lambda calculus.

                                                                    1. 9

                                                                      I discovered many years ago and occasionally used it for quick throwaway sessions, e.g. double checking the behavior of something in Python. It seems like they’re aiming to be something much bigger now, based on all the features advertised on their homepage. I can’t help but feel that Microsoft will eat their lunch with and GitHub Codespaces, though. Curious what others think.

                                                                      1. 10

                                                                        After reading about the design of LSP I get the feeling that the people doing dev tooling at Microsoft don’t even understand what a repl is.

                                                                        1. 4

Think it depends on who has the best reproducibility story. Nix is a good foundation, but more is needed. vscode and GitHub Copilot together are interesting in theory. But GitHub data isn’t the best to machine-learn off of - a start, though. Getting jacked into a user environment with reproducibility opens up much better machine learning potential, imo, for code completion than just repo commits. They both miss the boat on how to rank code based on how canonical it is. Let’s see what happens

                                                                          1. 3

Indeed. And with people making a lot of examples of code that can be analyzed on the fly to see if it works, what it does, etc., if they implemented a predictive model, it could be smarter (and more useful) than Copilot.

                                                                            1. 3

I would say that probably 80% or more of the developer population don’t care about reproducibility to any great degree. Some of those will care in another 5-10 years, perhaps after getting bit by the 100th issue caused by not having reproducibility, but the vast number of them will never care. Which is too bad, because our industry gets a lot better in the long run if they do care.

                                                                              1. 2

                                                                                Well then we got a fat developer arbitrage opportunity to bridge this until they converge

                                                                          1. 2

I have three security keys: A, B, and C. Key A is on my keyring, which is always in my pocket. Key B is in a folder of important documents in my house. Key C is offsite, at my parents’ house in another country. B and C have a set of printed backup codes attached to them. I also register MacBook and iPhone Touch ID where possible, so 5 keys in total. Few services actually support this ideal setup (e.g. some have max 2 keys, USB key only, no backup codes, force SMS backup, proprietary OTP, or other limitations), so I have a spreadsheet that tracks the status for each service.

                                                                            I don’t feel the need for a safe. Phishing is a much more likely attack, which security keys prevent by design. The backup codes are only for peace of mind in case the security keys stop working. If someone steals a key or the backup codes, they are useless without passwords. For each possible attack scenario (master password guessed, phone stolen, etc.) and loss scenario (master password forgotten, key lost, etc.) I’ve written down a list of actions, e.g. unregistering keys, remotely deauthorizing sessions, etc.

                                                                            1. 5

                                                                              What’s the reason for avoiding Yoda conditions? I see that there’s a way to get errors on assignments when a condition was intended, but I’m still going to be cold dead hands about them without knowing why.

                                                                              1. 8

                                                                                Why write in an awkward, contorted style to prevent a class of errors that the compiler/interpreter is perfectly capable of preventing? In Perl you just need to enable warnings, and in clang/gcc you can use -Wparentheses.
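To illustrate the point with Python (my choice of language here, just for illustration): the accidental-assignment bug that Yoda conditions guard against doesn’t even parse, so the interpreter catches it before the style ever matters:

```python
# `if x = 5:` is the classic typo a Yoda condition (`if 5 == x:`) is
# meant to guard against. In Python it is rejected by the parser, so
# the contorted ordering buys nothing.
def accidental_assignment_is_rejected():
    try:
        compile("if x = 5:\n    pass\n", "<example>", "exec")
    except SyntaxError:
        return True
    return False

print(accidental_assignment_is_rejected())  # True
```

C and Perl accept the assignment form, which is why they need -Wparentheses or `use warnings` to get the equivalent safety net.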

                                                                                1. 3

                                                                                  Because the technique is applicable in a huge range of programming languages and environments, many of which don’t have tooling support to catch the error another way. That, and it’s only awkward and contorted the first time you do it.

                                                                                  1. 4

                                                                                    Even if you personally get used to it, it is contorted for anyone trying to read your code and slows the process down. And most other languages also warn or prevent it. In modern C, you need to put assignments in extra parentheses to avoid warnings.

It’s a pity the Pascal approach of := for assignment and = for equality isn’t more pervasive.

                                                                                    1. 1

                                                                                      I don’t think the contortion issue is that big a deal. I got used to it fast and I can’t see why anyone else couldn’t too. It’s not a new idea and it’s out there in loads of code bases already. There’s nothing inherently correct about putting the variable on the LHS, it just seems more common. This technique costs basically nothing and can prevent bugs. If someone reading code does a double-take when they spot a construction like that, good! It means they’re paying attention.

                                                                                2. 4

                                                                                  This isn’t just a piece of Perl advice; anything that allows an assignment to occur in a conditional can potentially have the problem mentioned in the article, which is that you accidentally type an assignment operator in the conditional expression instead of an equality operator, and get a conditional that doesn’t actually implement the logic you wanted to implement.

                                                                                  I’ve seen the same advice for working in C and other languages which allow conditional expressions on assignment. Having the left-hand side of the expression be something you can’t legally assign to (as the article recommends) is often the preferred style in such languages, in order to avoid the possibility of accidental assignment due to typo.

                                                                                  1. 6

It is silly, though. People go out of their way and write funny-looking code to defend against a class of errors that they are fully aware of. If you are aware enough to write conditions in Yoda style, you are aware enough to look there when debugging.

It’s also silly to combat a class of errors by reminding yourself about it every single time. How is that better than checking them all before compiling? At the point you think “I write this condition in this style because…”, why not make that same thought exercise and put two equals signs instead of one?

Quite frankly, I suspect people do such silly things because it gives them a feeling of being “advanced” programmers. It’s a display of experience. I personally never felt any gain from this technique, nor have I found anyone who came up with it themselves out of their own interest, rather than having learned it from someone else.

                                                                                    1. 4

                                                                                      languages which allow conditional expressions on assignment

                                                                                      … and which make the assignment operator a prefix of the comparison operator.

                                                                                      1. 2

                                                                                        Yes, I know why to use a Yoda condition, I’m curious why “avoid Yoda conditions […] you should.”

                                                                                        1. 4

                                                                                          As far as I can tell, the reasoning in full totality is “they look weird”.

                                                                                          1. 3

                                                                                            The unstated assumption (which I think is uncontroversial) is that the common, standard way to write conditionals in most every language is if $var == val, so by using the yoda style you are just going against the grain for no good reason, making your code idiosyncratic. It’s not “bad” in any absolute sense…. it would just be like making weird but legal formatting choices, or using camelCase in a language whose standard style was snake_case, etc…

An invalid (imo) answer to your question is that the non-yoda style is more “natural” in some absolute sense…. which is like saying SVO is more natural than SOV word ordering.

                                                                                            1. 4

                                                                                              I don’t believe there are any absolutes at play here, like you mention, but it’s funny you bring up word order, because I think it’s precisely due to the predominance of subject-first languages that the variable-first order in conditionals is more intuitive, because I would guess that most people, in referencing a variable that way, model it as the subject of the conditional expression at hand in their heads.

                                                                                              Anyway, if it’s not something you already do intuitively, I don’t know why you’d adopt the practice when static analysis can easily guard against this case and others besides.

                                                                                              1. 2

                                                                                                I think it’s precisely due to the predominance of subject-first languages that the variable-first order in conditionals is more intuitive

                                                                                                Absolutely. Which implies that “more intuitive” here means “what you’re used to”. Which is also the reason to follow community standards, when they exist, which was essentially my answer to the question. It just makes it easier for everyone, on average. You can say, “But the accepted way is not more intuitive to me,” and choose to die on that hill, but as a general policy when working with shared code I think it’s wiser to give in, and in a few weeks the accepted way will probably start to look natural. The point is, all these things are arbitrary and habit-driven.

                                                                                                1. 2

In most natural languages (all?) the subject comes first when stating a comparison verbally. The subject is the fixed mental reference that is to be tested against a value, and therefore we start with that. We start with what’s known. The value, on the other hand, is naturally placed by the question signal (a question mark or descending melody) because it is the part of the language that is subject to that doubt.

                                                                                                  Note that, unlike latin languages, Germanic languages such as English often have qualifiers (adjectives) in front of subjects. However, not when asking a question. In that case the subject comes first and the “value” comes next by the question marker.

                                                                                                  Yoda conditionals look weird because they are objectively conter intuitive. Sure, one can get used to it as with anything else.

                                                                                                  1. 1

                                                                                                    This got me curious. According to Wikipedia, 87% of languages are SOV (45%) or SVO (42%), and 9% are VSO. I was surprised to find SVO wasn’t the most common.

                                                                                        1. 1

                                                                                          Ah, that’s what these are called! I referenced this a month ago when someone was suggesting a similarly misguided (IMO) programming style to avoid forgetting break; in switch statements, in C++.

                                                                                          1. 1

                                                                                            I really like where Zig is going. My main worry design-wise at this point is anytype duck typing. This relates to the recently discussed “allocgate” because it’s another way to do polymorphism. It’ll be great if there’s a standard way to do runtime polymorphism, as planned in ziglang/zig#10184. But for compile-time, you have to use (foo: anytype) or (T: type, foo: T). Either way, zls can’t provide completion or anything else because it knows nothing about T. This is just like C++ template duck typing, which C++20 Concepts is fixing.

                                                                                            I asked about this in the Discord and people seemed to think that {Reader,Writer} is the only case where this pattern is used pervasively. But I have a hard time imagining it will be restricted to that as people start to write lots of libraries in Zig. Imagine reading/editing Rust code with all trait bounds hidden (to you and to rls): it would be a nightmare. I’m happy to be proven wrong!

                                                                                            1. 2

                                                                                              Either way, zls can’t provide completion or anything else because it knows nothing about T. This is just like C++ template duck typing, which C++20 Concepts is fixing.

                                                                                              We plan to basically upstream zls into the self-hosted compiler and have the compiler provide compile-time “understanding” to zls using the same --watch mechanism that should enable incremental compilation.

                                                                                              1. 2

                                                                                                I want to clarify that I have not looked at the ZLS source code and cannot vouch for its quality or whether or not we will literally upstream it. Also, the protocol that the compiler will support will be our own language-specific protocol which is more powerful and performant than LSP. There will need to be a third party proxy/adapter server to convert between what e.g. VSCode supports and what the Zig compiler provides.

                                                                                                1. 1

                                                                                                  ZLS is a bit of a red herring, I didn’t actually mean to focus on it. Consider C++ and Rust here:

                                                                                                  // C++: runtime polymorphism
                                                                                                  void foo(MyReader* r) { ... }
                                                                                                  // C++: compile-time polymorphism
                                                                                                  template <typename R> void foo(R r) { ... }
                                                                                                  // C++: compile-time polymorphism with concepts
                                                                                                  template <typename R> requires MyReader<R> void foo(R r) { ... }
                                                                                                  // Rust: runtime polymorphism
                                                                                                  fn foo(r: &mut dyn MyReader) { ... }
                                                                                                  // Rust: compile-time polymorphism
                                                                                                  fn foo<R: MyReader>(r: &mut R) { ... }

                                                                                                  C++ templates are a lot more powerful than Rust traits. You can do all kinds of things with SFINAE, static assertions, etc. In Rust you can’t express even simple logic like negative trait bounds. However, pre-C++20 templates suck because you have no idea what R is. It’s duck typed: if it compiles, it compiles. This means:

                                                                                                  • You have to rely on non-machine-readable comments, much like types in dynamic languages before TypeScript/Mypy/Sorbet/etc. These can be inaccurate or outdated.

                                                                                                  • Editor tooling is hamstrung: no jumping to the definition of MyReader, no autocompletion after typing “r.” in the body of foo, no way to find all uses of MyReader (it’s just in a comment).

                                                                                                  Zig feels similar to C++ without concepts in this respect. If you want to change from dynamic dispatch (e.g. Allocator) to static (e.g. reader/writer), you sacrifice a lot. If you take reader: anytype, you have a Turing-complete language at your disposal to determine what types are allowed. It seems to me that in order to get the benefits of a restricted system, e.g. “any type that implements MyReader”, you need language support.

                                                                                                  I suppose there could be a convention of calling some compile-time check (e.g.) at the beginning of methods. This would give nice error messages, at least. But it feels a bit hacky for something like ZLS to hardcode recognizing such a convention.

                                                                                                2. 1

                                                                                                  This sounds cool, but can you elaborate on how that helps the situation? Will it get extra “understanding” from all the call sites? It seems to me that by design, we’re stuck with English prose describing what sort of types T are expected, and it won’t be possible to (e.g.) command-click into the Reader interface or get autocompletion for its methods. I know Zig’s comptime programming is a lot more powerful than Rust traits, but I wonder if there’s a way to get the best of both worlds.

                                                                                                  1. 2

                                                                                                    I can’t really give you a good answer until we start working on the thing, but in general: ZLS right now can only look at the AST to reason about types, while the compiler also implements semantic analysis, so ideally some of that machinery can be used to provide suggestions etc.