Threads for akkartik

    1. 2

      I really like the idea presented in this article. And I especially like how the author’s main argument is presented visually: it’s undeniable that the second screenshot has much more information condensed into it, and would be a much better default when navigating to modules/struct definitions.

      I also think the arguments in favor of this feature over the “Outline” view are understated. Simply put: we programmers are used to reading source code in our editors, so presenting the outline of a module in the same format just makes sense. It has the same syntax highlighting we’re used to. We can do all the things we normally do, like using a shortcut when the cursor is over a type (e.g. a function argument) to go to its definition, or just edit something there right away, because it is the source code! Things that the separate and custom “Outline” view of IDEs usually don’t provide.

      1. 4

        I also think the arguments in favor of this feature over the “Outline” view are understated.

        Thanks for noticing this <3

        Though, getting programmers to re-discover text files in between text streams and GUIs is not a mission I am quite ready to push right now. Can’t stop thinking about it anyway:

        1. 1

          Come check out BABLR!! You’re not the only one diving down into what it looks like to build a new narrow waist for code editors: https://github.com/bablr-lang/ https://discord.gg/NfMNyYN6cX

          1. 1

            From the first link:

            Unlike full HTML, the text is not nested, and logically is a 2 dimensional grid of characters.

            Do you care about proportional fonts, or do you mean ‘grid’ literally here?

            1. 1

              Literally: y coordinate is a line, x coordinate is a visual character in the line.

              This is mostly orthogonal to proportional fonts: Emacs is mostly monospaced, but can do proportional in a pinch.

              1. 1

                I think I disagree with you on this. There’s nothing about the emacs model of text + ranges mapping to attributes that requires fixed width or a grid. Your posts hang together equally well if one assumes proportional fonts.

                This is not a big deal (I don’t care that much about proportional fonts), just adds some noise to a nice mental model. But perhaps I’m missing something.

                1. 2

                  Oh, I think I’ve made sense of what you’re saying.

                  The 2D grid isn’t how the text looks on screen. Even if a line is wrapping across multiple screen lines it’s still conceptually a single line. The width of characters in pixels doesn’t matter.

                  I agree with this.
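
                  To make the model concrete, here’s a minimal Go sketch (the names and numbers are mine, not from anyone’s editor): a logical position is (line, column), and wrapping only changes how that position maps onto screen rows.

                  ```go
                  package main

                  import "fmt"

                  // Pos is a logical position in a text buffer: Line indexes lines,
                  // Col indexes characters within the line. Neither coordinate
                  // depends on how the text is rendered.
                  type Pos struct{ Line, Col int }

                  // screenRow maps a logical position to a screen row, given how many
                  // characters fit on one screen line. A logical line occupies
                  // ceil(len/width) screen rows, but it stays one logical line.
                  func screenRow(lines []string, p Pos, width int) int {
                      row := 0
                      for i := 0; i < p.Line; i++ {
                          rows := (len(lines[i]) + width - 1) / width
                          if rows == 0 {
                              rows = 1 // an empty line still takes up a row
                          }
                          row += rows
                      }
                      return row + p.Col/width
                  }

                  func main() {
                      lines := []string{"short", "a line long enough to wrap twice on a narrow screen"}
                      fmt.Println(screenRow(lines, Pos{Line: 1, Col: 0}, 20))  // prints 1
                      fmt.Println(screenRow(lines, Pos{Line: 1, Col: 45}, 20)) // prints 3
                  }
                  ```

                  Only the width argument comes from rendering; everything else is pure buffer logic.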

        2. 5

          I never know, but some rough ideas:

          1. 2

            There is significant overlap between the values underlying akkartik’s concept of freewheeling apps and my own work with Decker: without the runtime, decks are very small, portable, trustworthy by construction, and highly plastic, encouraging users to remix and customize to suit their preferences. The key difference, of course, is that I’m building my own substrate layer instead of using Love2D as-is.

            I believe one must strike a careful balance between the costs and benefits of different methods of testing. Tests that catch meaningful regressions, are stable over time, and are easy to write are valuable assets. Tests that rarely catch regressions, require frequent maintenance and redesign, and are difficult to write are liabilities.

            I maintain 3 interpreters for Decker’s Lil scripting language: one in c, one in js, and one in awk (just for fun). Having a fairly comprehensive test suite that touches only the external interface of the language has been easy to write and maintain, and the proportional payoff has been high, identifying defects when I rework parts of each interpreter and keeping the behavior of each implementation in sync. It’s true that these tests ossify the design of the language to an extent, but for my purposes this provides a valuable counter-pressure to breaking changes: Lil has a modest-but-nonzero community of users and library of existing software to support.
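
            For anyone wanting to copy the strategy, here’s the shape such a suite can take, sketched in Go with stand-in evaluators (this is my illustration, not Decker’s actual harness): one table of (source, expected output) pairs, run against every implementation through the same external interface.

            ```go
            package main

            import "fmt"

            // Evaluator is the external interface every implementation exposes:
            // source in, printed output back.
            type Evaluator func(src string) string

            // Stand-in evaluators. In a real harness these would shell out to
            // the C, JS and awk interpreters as subprocesses.
            func evalA(src string) string { return src + "\n" }
            func evalB(src string) string { return src + "\n" }

            // goldenTests touches only inputs and outputs, never interpreter
            // internals, so the suite survives rewrites of any implementation.
            var goldenTests = []struct{ src, want string }{
                {"hello", "hello\n"},
                {"world", "world\n"},
            }

            // check runs the shared suite against one implementation and
            // returns the number of mismatches.
            func check(name string, eval Evaluator) int {
                bad := 0
                for _, tt := range goldenTests {
                    if got := eval(tt.src); got != tt.want {
                        fmt.Printf("%s: %q: got %q, want %q\n", name, tt.src, got, tt.want)
                        bad++
                    }
                }
                return bad
            }

            func main() {
                fmt.Println(check("impl-a", evalA) + check("impl-b", evalB)) // prints 0
            }
            ```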

            I maintain 2 implementations of Decker’s runtime: one in c, one in js. I have a moderate collection of API-level tests for these runtimes. They aid in catching some regressions (and validating my designs initially), but they are also more tedious to write than the Lil tests, and more tedious to maintain. Accordingly, I cull these regularly when they aren’t helpful and write fewer overall. Interactive, UI-level testing for Decker itself would be even more tedious and brittle, so I test such behaviors manually as I make changes.

            For “modules” (reusable libraries written in Lil), I’ve found the best testing strategy is to build interactive documentation that explores and explains all of the features of a module with practical demonstrations. In addition to the communicative benefits of documentation, when I make a change which affects some module, I can quickly flip through the demos and validate that they’re behaving properly; the effort I put in pulls double-duty.

            1. 2

              I just want to say I love what you’re doing with Decker! I spent a few years bearish on the web, but I seem to be finding my way back to it. Just plain html so far, but Decker is definitely on my radar.

              1. 1

                Thanks!

                Browsers can be a very frustrating platform, and I find Chrome’s slow creep toward complete monopolization of the space deeply disturbing, but the convenience, portability, and hackability of single-file .html “applications” is compelling. The only component we’re truly missing is universal adoption of filesystem IO APIs.

                1. 2

                  Chrome’s slow creep is/was exactly my fear, especially coupled with whatever trajectory Firefox is on.

                  https://merveilles.town/@akkartik/107687413458195682

                  But lately I think alternative browsers will emerge at least for the subsets we care about.

              2. 2

                To respond to the substance of your comment, yes I agree:

                • The tests in https://git.sr.ht/~akkartik/lines.love are indeed useful, not 100% liability. I think I’d currently assess them as 80% benefit, 20% cost.
                • Tests are more useful in some domains than others. Programming languages are easy (which is why I didn’t have this sort of experience when I worked on Mu), games are hard. Graphical text editors as a domain feel somewhere in between: you get a lot of benefit from tests, but you also get some uncomfortable coupling with tiny pixel-specific attributes of a specific font.
                • I do care as well about not causing regressions for my modest users. lines2.love gets a separate repo, but I’ve been using it exclusively with the goal that I should never have to think about the differences between lines.love and lines2.love.
              3. 4

                I don’t necessarily think I agree with a lot of these ideas, but I found this an interesting article to see the perspective of someone writing software very differently to how I write software.

                Particularly with the comments about testing, where the author sees it as being good in very small amounts: I kind of get where they’re coming from (you can definitely write too many tests), but the idea of pushing for as few tests as possible feels very strange to me.

                1. 3

                  Me too! To give you some context on how strange a journey it’s been for me, I liked tests to the extent of spending 7 or so years building a computing stack from machine code up, with the express purpose of making every OS interface testable.

                  1. 2

                    From one extreme to the other, almost! The form that most appealed to me (at least in terms of testing) was the transitional form you mentioned for Freewheeling Apps, with a tested core and an untested shell - this is how I get on best with tests, using them where they can be most useful, and using them outside of that only if I’m confident that the test will really be useful.

                    I’m intrigued what made you want to delete all those tests this year - were even the testable core tests getting in the way?

                    1. 2

                      I had to think about it a bit. I think there are 3 reasons:

                      • The path-dependent reason is that the tests weighed on my brain and made it hard to “import everything into my brain at once” as I describe at level 8 in OP. By this reasoning, perhaps I could bring them back now after the exercise.

                      • The logistical reason is that the tests were/are a bit brittle, so I suspect there’d be spurious failures if I tried to bring them back. The key thing they’re for is the word-wrap algorithm, which depends on pixel-level details of the font.

                      • I think the principled reason is that the tests kept me from getting this far for a year or more, so I’m not convinced they’re worthwhile. On the other hand, I might end up with different, better tests if I test-drive this knowing what I know now. Might be worth trying.
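
                      One way to loosen that pixel-level coupling (my sketch, not code from lines.love) is to inject the width measurement, so tests can use a deterministic fake font while the app passes the real one:

                      ```go
                      package main

                      import (
                          "fmt"
                          "strings"
                      )

                      // wrap breaks text into lines no wider than limit, asking measure
                      // for each word's width. The app would pass the font's real pixel
                      // measurement; tests can pass a deterministic fake.
                      func wrap(text string, limit int, measure func(string) int) []string {
                          var lines []string
                          line, lineW := "", 0
                          space := measure(" ")
                          for _, word := range strings.Fields(text) {
                              w := measure(word)
                              if line != "" && lineW+space+w > limit {
                                  lines = append(lines, line)
                                  line, lineW = "", 0
                              }
                              if line == "" {
                                  line, lineW = word, w
                              } else {
                                  line += " " + word
                                  lineW += space + w
                              }
                          }
                          if line != "" {
                              lines = append(lines, line)
                          }
                          return lines
                      }

                      func main() {
                          mono := func(s string) int { return len(s) } // fake font: 1 unit per character
                          fmt.Println(strings.Join(wrap("the quick brown fox jumps over the lazy dog", 15, mono), " / "))
                          // prints: the quick brown / fox jumps over / the lazy dog
                      }
                      ```

                      Tests can then assert on wrapped strings instead of pixel coordinates.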

                      1. 1

                        I think the principled reason is that the tests kept me from getting this far for a year or more, so I’m not convinced they’re worthwhile. On the other hand, I might end up with different, better tests if I test-drive this knowing what I know now. Might be worth trying.

                        Right; obviously it’s true that deleting those tests helped you move faster and get to where you wanted to be, but it’s a bit of a logical leap to assert that this is a problem with testing-in-general rather than a problem with that particular test suite.

                        Often I find that the test suite is how I import everything into my brain. “How does this work? Does it do this for a good reason?” (changes it, observes a test failure -> ah yes, turns out it does!)

                        1. 3

                          Yeah, for sure. You’re preaching to the choir there. I’m not convinced tests are worthwhile in this context that I’m writing about. The key thing here is that the code has gotten to a state where:

                          • I’m unlikely to ever need to change it.
                          • The code most of my tests were exercising lives in a few self-evident lines.
                          • This is starting to get into graphics where I find it difficult to write self-evident tests. “Test failed because something is drawn at x=57 rather than x=58, why do I care?”
                          1. 2

                            This is starting to get into graphics where I find it difficult to write self-evident tests. “Test failed because something is drawn at x=57 rather than x=58, why do I care?”

                            I think the article could probably be improved by including some examples like this; I’m sure other readers would be asking the same questions.

                            1. 2

                              The article really needs to convey I’m not dispensing perfect wise opinions about tests or versions, I’m reflecting on my own imperfections of the past and present. You’ve probably had a closer look at this evolution than most since you follow me on Mastodon, so definitely tell me if you have ideas now how I can better convey that to random strangers on the internet. I don’t immediately think adding more details about tests is the right direction.

                              1. 3

                                Sure, but even in the context of “what worked for me on this program in particular” it could be clearer.

                                In my case, tests and versions actively hindered getting to the end of this evolution. Tests let me forget concerns. […] It took a major reorientation to let go of them.

                                I don’t really understand what “letting you forget concerns” means here. Actually I’m still not sure whether the example you gave above is an example of forgetting concerns or an example of something else. It seems like in this thread you’re not talking about forgetting but instead about tests that are bad because they’re over-specific.

                                Putting it in my own words, it seems like the bad tests are for those cases where “I’ll know it’s right when I see it” and it’s almost impossible to algorithmically describe the desired end goal, so your gut feeling that “everything must have a test” ends up being channeled into tests that make assertions about unimportant minutia and rely on implementation details, because that’s the best you can do.

                                So I conclude that if I notice myself writing a test like that, I should stop and try to find another way to test it. If I can’t find one, maybe I need to just accept that the requirements of this particular piece of functionality dictate that it be validated by a human brain and not a program.

                                1. 2

                                  That seems like a good thought!

                                  The only thing I’d add is there’s something about all the tests of a program in aggregate that isn’t conveyed by looking at individual tests in isolation. I was able to fit the program in my head, and do things to it I had hitherto been unable to, because I got rid of all the tests; and I did that out of a gestalt sense that they were a net negative. Going through the tests one by one to identify good vs bad tests would have bled more of my motivation, I think.

                                  Now that I’m through on the other side, I have the option to bring back some tests. And I might go do the exercise of separating good and bad tests if I run into a new bug that suggests my attempt isn’t quite as great as I think it is, and that the tests had more of a positive ROI than I credited.

                                  More likely, though, I’ll first rethink things from scratch and test-drive some aspect that seems to need them. Then I might go back over the old tests to see if there are ideas for me to steal.

                                  1. 2

                                    What I mean by “letting me forget concerns” is something you alluded to above:

                                    Often I find that the test suite is how I import everything into my brain. “How does this work? Does it do this for a good reason?” (changes it, observes a test failure -> ah yes, turns out it does!)

                                    The rut I was in was partly that I was habitually importing only parts into my brain, because tests were always on hand to answer questions like this on demand. Tests help me be lazy. But in this situation I really needed to import everything.

                    2. 2

                      the idea of pushing for as few tests as possible feels very strange to me.

                      Not to me. Test code is still code, and I still need to maintain it. The less I have, the better… except for that tiny little issue of, well, correctness. In the end, this is how few tests I could get away with, for a sub-2K SLOC project.

                      Sometimes, “as few tests as possible” is still quite a few.

                    3. 2

                      I’ve been doing something similar, but:

                      • It’s only a subset of my posts, about a narrower topic than my entire account.
                      • It also includes some stuff I post outside Mastodon.
                      • It’s much more manual.

                      https://akkartik.name/freewheeling-apps#devlog

                      1. 1

                        The thing that still keeps me from using Go, all these years later, is the uncompromising unused-variable errors. Help a debug-by-printing brother out and give me a commandline flag to make that a warning, that I can use just while debugging. I pinky-swear to my nanny I won’t check in unused variables, even in private repositories that will only ever run on computers I own, behind a door labeled, “beware of leopard.”

                        1. 2

                          I hated this too when I started, but nowadays I just run <SPACE>l, a shortcut in my Neovim configuration that triggers an LSP code action to remove unused imports. For variables I generally just rename them to _.

                          Go is not the only language to do this, by the way; Kotlin has the same behaviour (and that is the language I am using at $CURRENT_JOB). So I kind of got used to it (by force).

                          1. 1

                            Thanks. Yeah the unused imports error feels useful and doesn’t bother me. It’s just extrapolating it to variables that I feel goes too far.
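
                            Absent such a flag, the usual escape hatch while print-debugging is the blank identifier; a throwaway sketch:

                            ```go
                            package main

                            import "fmt"

                            // compute stands in for whatever you're debugging.
                            func compute() (int, error) { return 42, nil }

                            func main() {
                                result, err := compute()
                                // Mid-debugging you may not use `result` yet. Go rejects the
                                // unused variable outright ("declared and not used"), so park
                                // it on the blank identifier until the real code lands:
                                _ = result
                                if err != nil {
                                    fmt.Println("error:", err)
                                }
                                fmt.Println("still compiles") // prints: still compiles
                            }
                            ```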

                        2. 44

                          My favorite microfeature: have a page which lists titles of all posts with dates in reverse chronological order. This makes it so much easier to read the entire blog, rather than just the hit piece of the week. Different ways this feature is typically messed up:

                          • paginate by year or, worse, month, such that the median page has 0 entries and the largest has three.
                          • include the first pageful of the post, so that you can’t actually see a compact list of titles, and have to use pagination
                          • only allow viewing posts with a particular tag (with a median tag tagging a single post)
                          • infinite-scroll pagination which loads a handful of titles at a time.
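
                          The whole microfeature is small enough to sketch. Here’s a hypothetical Go generator (the Post type and fields are my invention) that emits one flat reverse-chronological list with no pagination:

                          ```go
                          package main

                          import (
                              "fmt"
                              "html/template"
                              "os"
                              "sort"
                              "time"
                          )

                          // Post is a minimal stand-in for however a generator models posts.
                          type Post struct {
                              Title string
                              URL   string
                              Date  time.Time
                          }

                          // archiveTmpl renders every title with its date as one flat list,
                          // no pagination, so Ctrl-F works across the whole blog.
                          var archiveTmpl = template.Must(template.New("archive").Parse(
                              `<ul>{{range .}}
                          <li>{{.Date.Format "2006-01-02"}} <a href="{{.URL}}">{{.Title}}</a></li>{{end}}
                          </ul>
                          `))

                          func renderArchive(posts []Post) error {
                              // Newest first.
                              sort.Slice(posts, func(i, j int) bool { return posts[i].Date.After(posts[j].Date) })
                              return archiveTmpl.Execute(os.Stdout, posts)
                          }

                          func main() {
                              err := renderArchive([]Post{
                                  {"Older post", "/older", time.Date(2020, 1, 1, 0, 0, 0, 0, time.UTC)},
                                  {"Newer post", "/newer", time.Date(2024, 6, 1, 0, 0, 0, 0, time.UTC)},
                              })
                              if err != nil {
                                  fmt.Fprintln(os.Stderr, err)
                              }
                          }
                          ```
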
                          1. 27

                            infinite-scroll pagination which loads a handful titles at a time.

                            Wow, did you hit a nerve here for me. I don’t know who came up with the idea that you can only have 10 or 25 things on a page, but it’s so frustrating, especially if whatever pagination strategy isn’t stable (e.g. always resets to page 1 on page load). A long list of items or a long-scroll table is just fine, thank you, and means that Ctrl-F works to find things in the page.

                            1. 4

                              I don’t know if that would scale or not. I currently have 5,547 blog posts across 24 years. I do have an archive page and clicking on a month will show that month’s post, although I might be able to do something different? I never did figure out a decent “show the archive” format.

                              1. 8

                                I don’t see what wouldn’t scale about it. 5k blog posts times ~300 bytes each (title with hyperlink, plus date) is 15kb, which is the size of the CSS file that loaded when I opened this page. (Your blog is commendably light, so it would still be a larger page than the resources you’re already loading, but it’s still not very much.)

                                1. 3

                                  Math is off, that would be more like 1.5MB which is still plenty fine.

                                  Here is a comparable example: the Dinosaur Comics archive with over 4K entries. 220KB transferred, loads under a second.
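
                                  Those numbers are easy to sanity-check by generating a synthetic archive of roughly that size and gzipping it (a sketch with made-up entries; real HTML will compress somewhat differently):

                                  ```go
                                  package main

                                  import (
                                      "bytes"
                                      "compress/gzip"
                                      "fmt"
                                  )

                                  func main() {
                                      // Build a synthetic archive page: thousands of repetitive
                                      // <li> entries, much like a real list of titles and dates.
                                      var page bytes.Buffer
                                      for i := 0; i < 5000; i++ {
                                          fmt.Fprintf(&page,
                                              `<li>2024-01-02 <a href="/posts/%04d-some-reasonably-long-slug">Post number %d with a plausible title of ordinary length</a></li>`+"\n",
                                              i, i)
                                      }

                                      // gzip it, as a server with content-encoding: gzip would.
                                      var zipped bytes.Buffer
                                      zw := gzip.NewWriter(&zipped)
                                      zw.Write(page.Bytes())
                                      zw.Close()

                                      // Repetitive markup compresses very well; exact numbers vary.
                                      fmt.Printf("raw: %d KB, gzipped: %d KB\n", page.Len()/1024, zipped.Len()/1024)
                                  }
                                  ```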

                                  1. 1

                                    Derp, yes, 1500kb. There must be some gzip compression in your numbers, because if I curl | wc that page I get 765kchar. So accounting for that, we’re probably still under a megabyte for GP’s blog archive.

                                    1. 1

                                      Yep content-encoding: gzip, that’s why I wrote “transferred” :)

                                2. 2

                                  Here’s an example: https://akkartik.name/archives/foc

                                  Try clicking around. The links go to lists of all the top level chat messages for each channel.

                                  1. 1

                                    Not the person you replied to but I would bet they would accept a pagination after 100 entries as the second-best choice ;)

                                    1. 1

                                      My blog just has a list of dates and titles of all posts back to the dawn of time, though I am not as prolific as you! On my link log you can bop a year to get all that year’s links, which is generally about 1000 links.

                                      1. 1

                                        As long as it’s not a log file/log lines that get paginated at 100 that’s good enough for me.

                                      2. 1

                                        sounds like a fun code golf (html golf?) problem to me!

                                      3. 3

                                        Yes, agreed. I implement this on: https://amontalenti.com/archive

                                        I only choose not to list posts before 2009 because the “feel” of my blog was different back then. It was less an essay archive and more “quick updates,” which then got replaced (for me) by things like Twitter/X. Plus, 2009 is already a long time ago.

                                        For those of you using WordPress, the simplest way to do this is the “Display Posts” and “Display Posts Date View” mini-plugins.

                                        https://wordpress.org/plugins/display-posts-date-view/

                                        These are tiny (in terms of source code), focused, & auditable plugins. They simply add a WordPress shortcode [display-posts] that will generate a listing of all your posts, grouped / ordered / limited however you like. You then just use that shortcode on an “/archive” page. The original plugin is 800 lines of code and the date view extension to that plugin is another 150 lines of code.

                                        (Why do I use WordPress? I’m a long-time blogger. Explained more in this comment on the orange site.)

                                        1. 2

                                          Sounds like an RSS feed, no?

                                          I guess these days Firefox makes you install RSSPreview to have the rendering functionality that used to be included.

                                          1. 2

                                            Don’t RSS feeds typically include at least some of the content of posts and not necessarily include all posts throughout time?

                                            It sounds more like a sitemap to me.

                                            1. 2

                                              Typically, yes, but I do follow at least one RSS feed that has 1500 entries and 0 context.

                                              1. 2

                                                Hmm I guess that’s true, but it’s up to the webmaster. Podcast RSS feeds at least tend to have all the posts.

                                            2. 2

                                              I have something like this on my posts page, though I’m not sure if it fits your definition of “compact”. Do you think it’s more valuable to list titles only?

                                              1. 1

                                                It’s borderline for me :) I wouldn’t say it fully commits mistake number 2, as I still get more than one post on the screen at a time. So it is usable. That said, I think keeping just the titles (and maybe the word count) would be better, as that optimizes for the primary use-case — looking at the entire list to see if anything catches your eye. If a title is interesting, you can always click it to read the first paragraph or two!

                                                1. 1

                                                  I do this on my archive page. Below the posts I list the other pages on the site: my “about me” page, my feeds page, and so on. It’s a sitemap, just with a less-technical-sounding name.

                                                  1. 1

                                                    You’ll love https://elv.sh/blog/, although that’s because I didn’t bother implementing anything more fancy :)

                                                    1. 1

                                                      I’ve been idly considering making something that would only show the posts from the last year on my main blog index, with a per-year archive list, but you’ve just convinced me to leave it alone as one big index of fun.

                                                    2. 23
                                                      • Lua
                                                      • LÖVE

                                                      Why

                                                      Secondary alternatives:

                                                      1. 9

                                                        Wow that “why” page is so inspiring!!

                                                        To encourage others to click:

                                                        • The article starts out encouraging folks to use less common languages,
                                                        • It wanders into some demos for a homegrown text/graphics editor
                                                        • It then quickly escalates into some really compelling and novel demos for text-editor features and spatial layouts

                                                        I loved this – thank you for sharing

                                                      2. 5

                                                        Maybe back up and give up on ppx_rapper? That feels like scope creep.

                                                        1. 10

                                                          Maybe I’m in the wrong, but commit messages have never been very useful for me in any situation. The amount of effort required to write excellent commit messages seems far too much for the returns it gives. At least in my experience (10+ years writing software with other people), I can only remember one time a commit message was at all useful to me, and it was probably a 5 minute time-save.

                                                          1. 35

                                                            Maybe if the commit messages were better they would have been more useful? Oftentimes I see context about the changes being spread out through several tools (Slack, GH issues, Jira, pull requests). If all of that was in the commit message, maybe it would have saved us much more time.

                                                            1. 18

                                                              Also my experience, and why I improved my messages over time and appreciate when colleagues put in good commit messages in turn.

                                                              • good commit messages provide direct information when blame diving, and allow quickly discarding irrelevant commits
                                                              • good commit messages explain the reasoning and edge cases of the original, and allow evaluating whether those choices hold, or whether the original author missed something, or whether I missed something (the latter being a reasonably common occurrence)
                                                              • IME commit messages preserve a lot better across tool migrations, because the commit message is just… there. It only takes your ticket provider folding, or someone deciding to stop paying for the old Jira nobody uses anymore, or decommissioning the old Trac or Mantis server because “it’s useless”, and you’re left bereft.
                                                              1. 3

                                                                After 2 months of trying the approach outlined in this blogpost I must say I’m convinced now. It hasn’t been that useful for others I believe, but it has been very useful for myself to sort the ideas of the patch and to review what makes sense in that patch and what doesn’t. And if someone ever has to go back to the patches I write they should have a better idea of what is going on. Both due to the improved messages and because of the patch cleanliness that comes from my increased awareness.

                                                                jujutsu has also been instrumental in making writing and rewriting commit messages a painless endeavour.

                                                                1. 2

                                                                  It could be, however I’ve worked with people that organized their commits very well and spent a good effort writing good commit messages and it wasn’t useful to me when dealing with their bugs anyway (it may be to them! or maybe they weren’t very good either). Just wanted to express my thoughts on something that I know is considered a good practice but that I’ve never found any value from.

                                                                  I’ll try to apply some of what the post says to see if I change my mind tho.

                                                                  In my experience, if they had spent that time writing more tests it would have been better (of course this is a bit of a false dichotomy, but if you have some time to do a task, as it often happens in an office setting, sometimes you need to choose what you spend your time on).

                                                                2. 9

                                                                  And I’ve found it useful to refer to commit messages for context (e.g. found from git blame) on multiple occasions 🤷‍♂️

                                                                  1. 1

                                                                  Fair enough, just my experience. With git blame I usually find what I want without caring for commit messages, and if not, git bisect does the rest.

                                                                    1. 12

                                                                      git bisect helps find the commit that does something particular. A good commit message then helps me understand why that commit did what it did. Were they considering the bug I’m chasing right now? Were they considering some other scenario that’s not in my head at the moment? Good commit messages can often automate away most of the Chesterton’s fence operation: figuring out what something does before you decide whether to modify it.

                                                                      1. 3

                                                                        I prefer to encode the assumptions in a test or write it next to the code as a NOTE comment. The commit message just seems like the worst option, far away from the code that makes the assumption and depends on users reading the specific commit message that introduced it.

                                                                        1. 8

                                                                          The single most valuable thing I’ve seen in commit messages is LLVM’s convention of NFC for things that are refactorings but should have no functionality change. When I see a bug in a commit with this tag, I know that the change in behaviour was an accident and not a side effect of some other (desired) change. That TLA alone has saved me a load of debugging time.

                                                                          You can’t capture that in a comment, both because it makes no sense for a comment in the code to say ‘this behaves the same way it used to’ and because there often isn’t a single point in the source code at which to capture the lack of change.
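                                                                          A minimal sketch of how such a tag pays off later (repo, file, and messages are invented; the “(NFC)” suffix follows the LLVM convention described above): because the tag lives in the commit subject, you can filter the history for refactor-only changes while hunting a behaviour change.

                                                                          ```shell
                                                                          set -eu
                                                                          repo=$(mktemp -d); cd "$repo"
                                                                          git init -q
                                                                          git config user.email dev@example.com
                                                                          git config user.name dev

                                                                          printf 'def f(): return 1\n' > mod.py
                                                                          git add mod.py
                                                                          git commit -qm 'Add f()'

                                                                          # A refactor with no intended behaviour change, tagged NFC.
                                                                          printf 'def f():\n    return 1\n' > mod.py
                                                                          git commit -qam 'Reformat f to multi-line (NFC)'

                                                                          # A genuine behaviour change, untagged.
                                                                          printf 'def f(): return 2\n' > mod.py
                                                                          git commit -qam 'Make f return 2'

                                                                          # While debugging: which commits claimed "no functional change"?
                                                                          git log --format=%s --grep='NFC'
                                                                          # -> Reformat f to multi-line (NFC)
                                                                          ```

                                                                          Any behaviour difference traced to a listed commit is then known to be accidental, which is exactly the debugging shortcut described above.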

                                                                          1. 7

                                                                            Comments are by far the worst option as far as I’m concerned: people don’t care, so they get lost; they drift away as new code is inserted; they don’t get updated when the code is; and there’s very much such a thing as commenting too much, whereas you can ramble on in commit messages as long as you organise and summarise them. Also, commit messages work for changes which are spread out. Comments literally only work for spot information.

                                                                            Rare comments are valuable, because they’re noticeable and denote something that’s so important it has to live next to the code.

                                                                            1. 3

                                                                              I mean, yeah, you can ramble as much as possible. I can also ramble as much as I want in a Notion page or a markdown file in the /docs directory. Doesn’t mean anyone will read it, because it’s disconnected from the code people are reading. Unless you are chasing a particular bug that was introduced in a particular commit and that assumption is encoded in the commit message (which already makes it harder, since you also have to note where the assumption happens and on which interfaces), no one will ever check those for a Chesterton’s fence. I almost never write comments, but when I make an assumption that is necessary I tend to write a // NOTE(Marce): We assume X Y Z. No one has ever deleted one of those in any codebase I’ve added them to. I still prefer tests of course.

                                                                              1. 8

                                                                                Tests, comments and commit messages all have their place.

                                                                                Comments are good for recording the why of a system as a static thing. Some comments can be replaced by tests, but not all. Tests and comments work well together.

                                                                                Commit messages are good for recording the why of a change. If you put the reason for each commit in code comments the comments would swamp the code.

                                                                                The fact that commit messages are far from the code is precisely the point. I only want to see them when I’m interested in precisely why a line of code is written just so. As a bonus, the fact that they come with a timestamp gives me added context on how much to trust the documentation.

                                                                    2. 7

                                                                      For me, git history / commit messages are only really useful on very large codebases. But by the time I need them at all, they are VERY useful. The biggest benefit is mostly scoping down the codebase to likely-relevant parts, or getting the perspective of “they were thinking about this” when they changed some struct that’s probably relied on in hundreds of different places with different concerns.

                                                                      Small codebases simply don’t have the same organizational drift, half-done refactors, and countless pieces of public but untamed state. If you can’t hold the project in your head anymore, then having something that points to what parts the committer decided to hold in THEIR head can be invaluable. It’s good for figuring out what they didn’t think of, rather than identifying some bug sitting in the code that they actually committed.

                                                                      1. 3

                                                                        I agree that this is mostly faff, because:

                                                                        • The people who write good/reasonable commit messages don’t really need this
                                                                        • The people who don’t won’t really read a guide like this anyway

                                                                        There’s a mythical person in between who does not write good commit messages, wants to get better and has the prerequisites to actually be able to get better for which guides like this may be useful.

                                                                        1. 5

                                                                          I frequently benefit from my own commit messages for the purposes of identifying and fixing bugs, understanding the correct commit to revert, acting as a self-review of my code, and for clarifying my thoughts while I’m in the midst of building the commit (I’ve gone back and changed the content of my commit while writing the commit message). It’s absolutely beneficial to you - the writer - even if you’re the only person doing it.

                                                                        2. 2

                                                                          i think they can be useful when they’re new, and the relevant context is still fresh in everyone’s head. you can glance at the recent commits, and see who is doing what, roughly. or the commits in a PR, to get someone’s thought process

                                                                          once they’re ~a week old, i’d be fine if they were all git commit -m changes. and when i’m working solo, i just do that

                                                                          if i’m searching through old commits, i’m always looking for something in the code, not something in the message

                                                                          a further hot take here: if something would be useful in a commit message, it’s probably better as a code comment, and even better as a test or an assert

                                                                          1. 2

                                                                            yeah, this is my view as well, also completely agree on your last point! I think I mention it elsewhere in this comment section as well

                                                                        3. 14

                                                                          I think there’s extra nuance here. Consider the central example, implementing a JIT for an interpreted language. This example is weird: if you pick an interpreted language, you fundamentally don’t care about the performance. The alternative implementation tries to compete on a property that is uninteresting to the bulk of the target audience. The main competition for PyPy is not CPython, but Go and Rust, because those two are a more principled and long-term solution to the “Python is slow” problem.

                                                                          Competing on something that is your audience’s primary value generally has better prospects. For example, clang is a pretty successful gcc competitor, because more permissive licensing (with more composable software downstream) is something the audience cares about. Or TypeScript, which improves developer velocity (something you use an interpreted language for).

                                                                          Another interesting example is Scala and Kotlin. They both compete with Java. Scala’s value is the most advanced type system since Haskell, Kotlin’s value is developer speed through conciseness and pragmatism. Kotlin has won (in a sense of becoming a platform’s language) against both Java and Scala. Kotlin solves problems Java programmers care about. Scala solves problems Haskell programmers care about.

                                                                          That’s the dilemma here: it’s easy to compete on the value of low priority for the canonical implementation — there’s a lot of low-hanging fruit to be picked. But to actually win the market, you gotta compete on the central values!

                                                                          Finally, there’s also the extra weird place where you can get stuck if there’s a value split. Where both the primary and the alternative implementation address important values, and both end up being used.

                                                                          An example that is near and dear to my heart is rust-analyzer. RA solves dev experience. rustc solves everything else. But dev experience is a darn important value for Rust, as it prides itself on productivity. So we are stuck with two alternative implementations for the foreseeable future!

                                                                          1. 15

                                                                            if you pick an interpreted language, you fundamentally don’t care about the performance.

                                                                            You mean like Java? :)

                                                                            When adopting a language people care about one thing. After they’ve accumulated a decade worth of code, they care about new things. So what people using a language care about is nuanced.

                                                                            1. 3

                                                                              You mean like Java? :)

                                                                              How is Java “interpreted”? It has a JIT, yes, but it’s not reparsing your source code every time you run the program, which is what I take “interpreted” to mean.

                                                                              1. 11

                                                                                We can argue what terms mean, but it seems uncontroversial that when Java first came out, the people who picked it “fundamentally didn’t care about performance”. Java’s reputation as performant was built past its first decade.

                                                                                1. 4

                                                                                  I agree about that, and I think that’s an interesting point.

                                                                                2. 7

                                                                                  A counterexample to your definition is .pyc files or Lua bytecode files :-) So another common meaning for “interpreted” is “compiles to a bytecode (instead of native code)”. Though that definition is not clear-cut either because there are bytecodes designed to be compiled rather than interpreted, e.g. LLVM IR, TenDRA TDF, ANDF, UNCOL.

                                                                                  1. 2

                                                                                    Fair point- the line is a little fuzzy.

                                                                                  2. 6

                                                                                    Pedantically speaking, “interpreted” is indeed the wrong word to use here. The correct thing to say would be something along the lines of “languages dynamic enough to require RTTI to resolve all method calls and field accesses”, but “interpreted” is sure a more concise way to get the idea across!

                                                                                    1. 1

                                                                                      Fair enough. I wasn’t intending to be pedantic, and based on their response to me, it seems more like the point is simply that Java used to be known to be slow (rather than being interpreted/dynamic/whatever). So, 20 years ago one could’ve been found saying “If you pick Java, you fundamentally don’t care about the performance”.

                                                                                      I’m still not convinced it’s a relevant point against your comment, unless there was a “Fast Java” project that became successful. Otherwise, your point about nobody caring enough about performance for such a project to be successful still stands.

                                                                                      1. 1

                                                                                        If you’re willing to pull out all the stops, Java is plenty fast:

                                                                                        https://github.com/gunnarmorling/1brc?tab=readme-ov-file#results

                                                                                        In comparison, here’s the first result for “1 billion row challenge c”, same ballpark in the end:

                                                                                        https://www.dannyvankooten.com/blog/2024/1brc/

                                                                                        1. 2

                                                                                          I wasn’t assessing the veracity of the claim. I was just interpreting what the other commenter was saying, which was that Java had a reputation of being slow 20 years ago.

                                                                                          Even if you were to argue that claim, a benchmark from 2024 with GraalVM, which didn’t exist 20 years ago, wouldn’t do it.

                                                                                3. 7

                                                                                  if you pick an interpreted language, you fundamentally don’t care about the performance.

                                                                                  I do not think this is sound reasoning.

                                                                                  Yes, if you pick a dynamic language, it suggests performance isn’t the most important criterion in your decision.

                                                                                  But that doesn’t mean you don’t care how fast it is at all, nor that you are not interested in making it faster if you can.

                                                                                  If I need to move a big package around, I’ll pick a truck because enough storage space is the most important characteristic to me for that task. Doesn’t mean I totally don’t care about how fast my truck is going. If you come to me with a new truck that retains all the other characteristics but is faster, I’m certainly interested.

                                                                                  1. 7

                                                                                    if you pick an interpreted language, you fundamentally don’t care about the performance.

                                                                                    I think this contains an element of truth (and the rest of your post expands on it well), but I don’t think this phrases it well. What I think is accurate is:

                                                                                    when you picked the interpreted language, you found the performance it offered at the time acceptable given the other tradeoffs.

                                                                                    But if it were 10x slower, you might not have made the same decision. And once your situation changes, you may regret your choice. You may gain more users, you may add more functionality, your app may start slowing down. Or as you reach $10 million in compute spend, performance wins start to look more appealing. Then you start to care.

                                                                                    But for the users who don’t reach those circumstances, they remain relatively insensitive to performance improvements.

                                                                                    1. 1

                                                                                      Don’t forget inertia. There will always be a massive amount of organizations who will not introduce new technology just because they (rightly or wrongfully) assume their developers can not or will not accept or learn a new language.

                                                                                    2. 5

                                                                                      Good points, but I’d like to add some nuance to your nuance. ;)

                                                                                      As much as I hate to admit it, a disturbingly large contributor to the “success” of a programming language (implementation) is simply luck of being in the right place at the right time.

                                                                                      I think the example of Kotlin really shows this more than anything else. Kotlin had some huge tailwinds to becoming successful. First, Java had been very stagnant for a long time, and the general programmer zeitgeist was moving away from the 90s style of programming that Java was stuck in. Second, there was real world (yuck!) legal stuff going on with Google and Oracle with respect to Android and Java, and while I’m fuzzy on the details now, it seemed that there was some reason that Android was stuck on older JVM APIs, so even as old and stale as Java was, “Android Java” was even worse! Then, there were probably legal reasons that Google promoting Kotlin to “first class” for Android app development might’ve been helpful to their business/ecosystem.

                                                                                      Obviously I don’t have a crystal ball, but if we teleported back to 2014-ish (whenever Kotlin started getting popular amongst Android devs) with the current, 2024, versions of Kotlin and Java, I’d be willing to bet that most Android apps today would still be written in Java and that Kotlin would be pretty niche if it still existed at all.

                                                                                      As for Scala, I think that’s an example of being in the right place at the wrong time (and a little bit of bad project management). Scala was too far ahead of its time when it first came out. I’ll wait for the laughter and eye-rolling to die down… But, seriously, if Scala came out around the same time as Rust, Swift, Kotlin, TypeScript, etc, I’m sure it would be very popular. But, programmers in 2004 were absolutely NOT interested in strong static type systems and fancy ideas like “type classes” or immutable data or “sum types”. It’s probably also fair to say that computers were too slow back then for a good DX with such an advanced type system (e.g., imagine doing Rust dev on a computer from 2004- it’s pretty rough doing it on a computer from 2024…).

                                                                                      1. 1

                                                                                        Scala was just a mess. When I tried to learn it (must have been before 2010) no two code bases looked even remotely similar. It was the Perl of the JVM. Nearly Java, only functional, some mix of everything? Yes.

                                                                                        Plus the compilation times; and I think the community criticism only came later?

                                                                                        I’m not particularly trying to hate on Scala, but I’ve talked to many people for whom it didn’t click. Interestingly people from all backgrounds, be it functional or imperative, hobbyists or corporate.

                                                                                        1. 3

                                                                                          I mean, that’s fine and fair. It’s honestly not at all my favorite language either. I just keep going to bat for it because I think it gets more flak than it deserves.

                                                                                          Everyone is entitled to their opinions and tastes, and maybe the hybrid Java+Haskell syntax is truly too big of a barrier for it to be popular. But, are you the kind of person I was addressing in that comment? E.g., are you a fan of Rust for its sum types (enums), type classes (traits), and Result return type idiom?

                                                                                          Based on how many people seem to enjoy Rust-style enums and Result types, and how often they are reinvented by programmers working in languages like Kotlin and TypeScript, I can’t help but suspect that many of them would actually like Scala if it weren’t for its bad reputation– especially those who like Kotlin, which “borrowed” almost all of its defining features (besides coroutines) from Scala–except that oftentimes they did it worse (see: when vs match and “context receivers” vs traits and implicits).

                                                                                          I just recently had a back-and-forth on the Kotlin subreddit where someone asserted that Scala sucked because it was too flexible and had too many features. My response was to list a bunch of features of Kotlin while asserting that Kotlin probably has more individual features than Scala, and to point out how everyone seems to love Kotlin’s DSL-writing capabilities.

                                                                                          To your point, it’s definitely a real thing that two Scala code bases can look totally different, but I’ll assert that if two Kotlin code bases look similar, it’s only because of the culture of its practitioners, because the Kotlin language is every bit as syntactically flexible as Scala.

                                                                                          1. 2

                                                                                            I’ve not looked at enough Kotlin code bases so I’ll take your word for it. But yeah, I didn’t say it was all Scala’s fault but from my outside view, at the time, they touted this flexibility as an awesome feature, whereas Kotlin seems to push it more for DSLs. I dunno, it just feels more like a logical continuation of Java (and now recent Java looks more like Kotlin than ever) and Scala somehow didn’t.

                                                                                            Or maybe the times have just changed. Let’s not forget that LSP did not exist, most IDEs were still a bit dumb compared to today and many convenience features we take for granted (especially AST-based autoformatting according to a standard on a single keypress) were rare or pipe dreams, depending on your tooling. So I agree on that part that it was too early, but for the widespread use of tooling and maybe one or two “blessed” coding standards. C++ kinda has the same problem I think, code can look completely different, but no one ever mentioned that as a plus.

                                                                                            Also not sure what person I am, I think I was mostly looking for something that can do what C/C++ can do but without the major downsides, while not being bound to the JVM. I don’t think I was searching for the perfect language, but Rust checks a lot of boxes. :)

                                                                                            1. 3

                                                                                              I dunno, it just feels more like a logical continuation of Java (and now recent Java looks more like Kotlin than ever) and Scala somehow didn’t.

                                                                                              It definitely is, and Scala isn’t. But, I think that’s pretty much in line with the goals of both languages. Kotlin wants to just be Java with nicer syntax and null safety. Scala wanted to be some new thing that was both FP and OOP, and had a much more advanced type system than Java.

                                                                                              Or maybe the times have just changed. Let’s not forget that LSP did not exist, most IDEs were still a bit dumb compared to today and many convenience features we take for granted (especially AST-based autoformatting according to a standard on a single keypress) were rare or pipe dreams, depending on your tooling. So I agree on that part that it was too early, but for the widespread use of tooling and maybe one or two “blessed” coding standards. C++ kinda has the same problem I think, code can look completely different, but no one ever mentioned that as a plus.

                                                                                              I agree that this was always a big part of it. Scala was too complex for the tooling to keep up with and for the computers of the day to keep up with. Even today, look at how long it takes to compile a medium sized Kotlin project compared to a Java project- Kotlin is way slower. It would have been unacceptably slow for pre-2010, just like Scala was.

                                                                                              To be fair, there are plenty of other reasons that Scala earned the bad reputation, too: there were big breaking changes from “minor” version bumps, SBT was always a giant pain in the ass, the syntax is/was pretty different from C/Java. My only contention is that I think people have cargo-culted a meme that Scala’s type system is too complex and weird, when it was really only complex and weird compared to the languages of 15+ years ago–it probably blends right in today with Rust, Kotlin, Swift, TypeScript, etc. Remember that a lot of people used to enjoy languages without any static typing at all back then: PHP, Perl, Python, JavaScript, because any static typing was “too much hassle”. I’d say that “times have changed” is a very valid takeaway.

                                                                                              Also not sure what person I am, I think I was mostly looking for something that can do what C/C++ can do but without the major downsides, while not being bound to the JVM. I don’t think I was searching for the perfect language, but Rust checks a lot of boxes. :)

                                                                                              I was in the same boat, although I will say that I was and am still looking for the perfect language. Rust isn’t it, but it’s the closest I’ve found so far. :)

                                                                                              1. 4

                                                                                                Speaking as someone who gave Scala a several-hour look once somewhere between a decade and two ago and then dropped it like a rock, with otherwise fairly incidental JVM context in general, mostly in Clojure—circumstances recently have conspired in it becoming the main language for my “fun” projects (because Chisel is written in it), so I picked it up in earnest and have really been putting in effort to use and learn it thoroughly. (Realising at one point that writing an SBT plugin was probably the correct way to do something (since I’m writing build tools, among other things) and so learning that thoroughly was certainly a trip.)

                                                                                                I’d probably not have gotten anywhere without LSP, and so that point is very germane — Metals in VSCode is more consistent and performant than IntelliJ, which seems so unfortunate! — but also how well the functional stuff really does carry through at so many levels. It’s a very minor example, but realising in real time that I could use appendedAll with an Option because Option extends Iterable¹ was very much a moment of my Haskell brain waking up. For something I was sure I was going to (have to) dislike again, it’s been super pleasant.

                                                                                                ¹IterableOnce, but who’s counting? Pun intended.

                                                                                    3. 2

                                                                                      Looking at the landscape of viable Common Lisp implementations, I wonder how much of this is down to a lack of a capital-s Standard for those languages.

                                                                                      1. 5

                                                                                        The Common Lisp standard happened after at least a dozen alternative implementations. That suggests about the level of pain needed before a standard happens.

                                                                                        1. 3

                                                                                          There is a capital-s Standard for Standard ML and those languages are all losers!

                                                                                          Disclaimer: I love Standard ML and personally know maintainers of two implementations that I hold in high regard.

                                                                                        2. 9

                                                                                          Looks nice but wait, is requiring Go and Rust the new requiring Python and Ruby?!

                                                                                          1. 2

                                                                                            I assume it requires Rust to build but only requires Go to run?

                                                                                            1. 4

                                                                              Go is compiled as well. The toolchain you need in your dev environment includes Rust and Go in order to build a Borgo program. You get a static binary that does not need any interpreter.

                                                                                              1. 8

                                                                                                once borgo has binaries out you’ll not need a Rust toolchain just to build borgo programs, though.

                                                                                          2. 3

                                                                                            An OS with very little code that works on all common hardware.

                                                                                            I know device drivers are hard. But I’m willing to throw away performance. I wish there was a timeless set of low-performance device drivers that anyone could consistently run.

                                                                                            (I spent some time struggling with this but it’s not my skillset.)

                                                                                            1. 6

                                                                                              “very little code” seems to be fundamentally at odds with “all common hardware”

                                                                                              I think the problem is that hardware is software now. You can’t write a little code to interface with a lot of code! If the hardware is complex, then the software that talks to it will be complex.

                                                                                              There is no hardware abstraction layer any more … If you search for what Timothy Roscoe has been doing recently, he explains the problem well.

                                                                                              https://www.usenix.org/conference/osdi21/presentation/fri-keynote

                                                                                              I also think it relates to what Oxide computer is doing – they “discovered” all this crappy, opaque, proprietary vendor firmware you have to interface with.

                                                                                              The rabbit hole goes VERY deep


                                                                                              On the other end, Unix solved that problem in the early ’70s :-) xv6 runs on QEMU, and supports a terminal

                                                                                              https://pdos.csail.mit.edu/6.828/2012/xv6.html

                                                                                              I guess you want something in between that and Linux, but I think defining the problem is the problem. “all common hardware” is a bit of a vague problem


                                                                                              Personally I still believe in abstraction by shared nothing processes for comprehensibility. I don’t necessarily want “very little code”, but I do want “very few moving parts” and “very simple interfaces”.

                                                                                              e.g. relates to Ousterhout’s narrow vs. deep interfaces. I use the Unix file system because it has a simple interface, but I don’t have to know exactly how ext4 is implemented.

                                                                                              I’m glad they have all sorts of performance optimizations.

                                                                                              Likewise, I am comfortable using git – it’s a lot of complex code, but it works well and it has a relatively simple immutable, shared nothing interface.

                                                                                              1. 1

                                                                                                Yes, that Roscoe talk was eye opening at the time it came out. Remember, this is a white whale :)

                                                                                                Arguably the OS is now just another abstraction for comprehensibility. Not a very good one anymore. What do you think of the UEFI suggestion in the other replies?

                                                                                                By “all common hardware” I’m alluding to the likelihood of being able to run a new computer I buy from the store (or thrift store). At some speed. Remember, I’m willing to trade almost any level of performance in exchange for something timeless I don’t have to constantly update.

                                                                                                In your terms I think I’m asking for all software to become hardware. Which, yes, is probably rowing against the direction the world is going. That is why this is a white whale :)

                                                                                              2. 4

                                                                                                Running under a hypervisor or some kind of abstract machine is a time-honoured tradition. virtio would provide framebuffer, ethernet, etc. in exchange for a thin, perhaps hardware-specific, hypervisor shim.

                                                                                                That, or perhaps target EFI Boot Services and get GOP et al., running as an EFI executable.

                                                                                                1. 2

                                                                                                  I’d love to see a hypervisor or abstract machine that provided framebuffer and ethernet with very little code.

                                                                                                  EFI might be the approach to try. What is GOP?

                                                                                                  1. 2

                                                                                                    GOP is the framebuffer protocol in EFI.

                                                                                                    1. 1

                                                                                                      Build it out of existing parts (a half-dozen emulated 68ks or something), sell it as a semi-retro dev system.

                                                                                                  2. 2

                                                                                                    I wish there was a timeless set of low-performance device drivers that anyone could consistently run.

                                                                                                    Are you talking of all hardware of a given class supporting, in addition to their native programming interface, a simplified generic interface?

                                                                                                    1. 2

                                                                                                      That would be one way to do it. A more expansive version of the BIOS standard that also had support for hard disks and networking would be wonderful.

                                                                                                      1. 4

                                                                                                        UEFI does define a standard protocol for Ethernet so as long as you wrote your system such that it could run without exiting boot services, you could use it.

                                                                                                        1. 2

                                                                                                          Sounds a bit like OpenFirmware, which was used on a few different computer architectures (though I’ve only used it on my OLPC XO-1). If I understand correctly, the idea was to offer a slight abstraction so manufacturers could write one driver (in Forth) and have it work across different hardware, operating systems, etc.

                                                                                                      2. 2

                                                                                                        An OS with very little code that works on all common hardware

                                                                                                        This is the most impossible white whale out there :)

                                                                                                        1. 1

                                                                                                          Thanks! Remember the second paragraph, though. I’m trying hard to make it plausible.

                                                                                                        2. 8

                                                                                                          Story time: I’m online friends with dang who moderates Hacker News, and I once sent him an email about something else where I offhandedly said something like, I wouldn’t write this as a HN comment because it wouldn’t do my karma average any good. As I recall, dang said something mild about how I shouldn’t worry about that – and then a few months later it was just gone from HN profiles. So, prior art FWIW.

                                                                                                          1. 13

                                                                                                            This felt very valuable but was also dense and hard to read, so I deleted a bunch of stuff and reordered it for my notes as I made sense of all the forces at play. Here are my notes in case someone else finds them useful. Since this is just my private notes, I don’t feel compelled to credit or precisely quote speakers :)

                                                                                                            The problem: maintainer overwork and stress

                                                                                                            Maintainers end up with all the tasks nobody else wants to take on:

                                                                                                            • patch review
                                                                                                            • release engineering
                                                                                                            • testing
                                                                                                            • responding to security reports

                                                                                                            The list grows over time. (See Darrick Wong for a complete list.)

                                                                                                            The kernel’s culture can be off-putting and not inclusive, making people fight to get their changes in. There is no arbiter in the community; Torvalds wants developers to figure things out for themselves, so disagreements over changes often end up as big battles. Desire for a more encouraging tenor.

                                                                                                            Becoming a maintainer is often seen as a promotion for developers who do good work. People often see maintainers as some sort of “super developer”, but they are really just managers.

                                                                                                            Contributor and maintainer roles should be separated. Incoming contributors are largely working at the behest of employers. Maintainers tend to be more intrinsically motivated.

                                                                                                            People tend to hold onto power for dear life.

                                                                                                            Part of the pay for reviewing work is autonomy within a subsystem, but the community doesn’t actually provide that autonomy. Instead, maintainers hold onto all of the decision power.

                                                                                                            If there are 100 people sending patches, there may be five who can be convinced to help maintain the subsystem.

                                                                                                            People are sending too many patches.

                                                                                                            Some subsystems are setting their requirements for contributors too high, making it hard for new developers to come in.

                                                                                                            Build up a structure with people who are able to take on the various tasks needed. The filesystem layer is more important than graphics; why doesn’t it have more people than the DRM layer? Building up that structure is hard; one developer simply couldn’t do the work, another was unable to bring things to a conclusion. The problem space can be complicated, raising the bar for insiders at various levels.

                                                                                                            Part of the problem may be documentation; unclear how things work. Write down problems as they are encountered. Documentation can be hard to understand. Particularly if the problem space is complicated. Pointing contributors to documentation can come across as an impersonal brush-off.

                                                                                                            Leave voids for others to fill. But the generic interrupt subsystem currently has one person maintaining it, even though if it breaks, the whole world breaks.

                                                                                                            Code review

                                                                                                            Reviewing is boring, so it is unsurprising that people don’t want to do it. Find ways to get away from the email patch model, which is not really working anymore.

                                                                                                            There are tools out there that make the tasks we hate easier. GitHub is good at showing the work that is outstanding, so he can see what has been languishing. Adopting another tool is unlikely to solve the problem. The real solution is to teach managers proper engineering [??].

                                                                                                            Never know how long a patch submitter will be around, so it’s never clear whether time spent to educate them will be worthwhile.

                                                                                                            Don’t ask submitters to fix existing technical debt, but don’t let them add more debt. A developer trying to contribute code is often the best opportunity to get some cleanup done. The Btrfs community has been guiding developers that way for a long time and has learned how to do it well. Lessons? Maintainers should, he said, take a bigger role in teaching others.

                                                                                                            I like reviewing, but am not supported [by employer?] in doing it. Developers tend not to understand just how much social capital they can get from doing good reviews.

                                                                                                            This is all taking an overly simple view of the reviewing task; many developers hesitate to do reviews because they don’t want to be seen as having missed something if a bug turns up. Nobody should feel expected to catch everything. Unclear how to communicate that to the community, though. Maintainers make fools of themselves every other day.

                                                                                                            A Reviewed-by tag mainly means that the reviewer will be copied on any bug reports; developers should add those tags liberally. But it can take a few reviews to feel comfortable adding a Reviewed-by tag. Bare Reviewed-by tags offered without any discussion can be a sign of somebody trying to game the system and get into the statistics. If the maintainer does not know the reviewer, their Reviewed-by tag means nothing.

                                                                                                            1. 2

                                                                                                              I think we’re talking about three different things:

                                                                                              • The original blog post was comparing Mastodon clients in terms of UX; same contents, different presentations
                                                                                                              • This follow-up seems to focus on websites, which by essence present different contents
                                                                                                              • The title of this post is about UX and open-source

                                                                                                              I personally agree with your take on websites: if the content is useful to me, I don’t mind an imperfect UX or an austere UI, as I know how difficult web design is, and I know my browser can help me. And of course, if the content sucks, the best UX in the world and the most shiny UI in the world won’t compensate.

                                                                                              But the original blog post is talking about different apps for browsing the same contents, and all it’s saying is: when comparing Mastodon clients, iOS ones seem to have put more effort into UX and more attention to detail. It may be related to the very choosy App Store validation process, which started with the best intentions of “high-end UX for everyone” but turned out to feel like a dictatorship IMHO. And it doesn’t seem to be related to the open-sourceness of some of the apps, as even closed-source Android apps don’t seem acceptable to the author.

                                                                                              Finally, if you wanna talk about UX of open-source apps, clearly there’s a lot to say, but I believe it can be put this way: many open-source communities could be better at asking for/encouraging/integrating non-code contributions (including design), and many UX designers could be better at offering their help instead of criticizing open-source apps’ UX. Decent UX design is making the product easily usable; great UX design is making the product enjoyable to use (sometimes so enjoyable that you want to use it more, which can lead to dark patterns to capture your attention), and this stuff is really hard (way more difficult than the tech stuff if you ask me). And one of the many beauties of open-source is: if you think it’s useful but could be better, your help is more than welcome!

                                                                                                              1. 1

                                                                                                                For this conversation I don’t see a big difference between websites and apps, and I think we can think similarly about UX vs open source with either of them.

                                                                                                                But the original blog post is talking about different apps for browsing the same contents.

                                                                                                                We all used to think of the web browser as an app for “browsing the same contents”, but today web browsers increasingly prioritize ads. So it’s not the same contents anymore. This is the same trajectory websites went through a few years earlier.

                                                                                                                In the end apps and websites are both code. Those who control the code get a lot of influence over the UX.

                                                                                                                I absolutely agree with you that open source projects could be more open to designers.

                                                                                                              2. 9

                                                                                                                Hah, that reminds me of my goodware radar on the internet. If a website is really plain-looking, minimalist and rough around the edges, you know you’re downloading good software written by someone who cares. The more polish there is in the web UX, especially if there’s loads of whitespace, the more the software behaves like malware. Works pretty reliably. If you have excess budget that goes into UX, you’re almost always either neglecting core functionality or taking advantage of your users somehow. I wish it wasn’t the case, but that’s just the way it is.

                                                                                                                Examples:

                                                                                                                1. 10

                                                                                                                  Counterpoint:

                                                                                                                  I think that your theory is really only potentially relevant to enthusiast / nerd software so I limited my examples to folks producing high quality nerdware.

                                                                                                                  1. 4

                                                                                                                    It’s a shame other platforms have nothing like the Mac shareware ecosystem. Very much the last reflex of commercial software like that.

                                                                                                                    1. 3

                                                                                                      I have in the past (I don’t bother now) tried to explain to folks who rip on Macs that having fewer software options is a feature.

                                                                                                                      1. 3

                                                                                                                        I think nobody quite sat me down and explained to me that there’s a stable community (neither growing[1] nor shrinking) that buys software for money.

                                                                                                                        [1] Not growing (roughly) is important to keep out investors that don’t share the subculture’s values.

                                                                                                                        I’m still skeptical this can last. It feels like mom and pop stores until they get acquired by large chains. But what do I know. I’ll cheer it on and hope it lasts.

                                                                                                                        1. 2

                                                                                                                          It certainly helps that Mac users have, by buying a Mac, demonstrated that they’re happy to spend good money for a high quality product.

                                                                                                                          1. 2

                                                                                                                            I’ve heard that part before, but it hasn’t been as persuasive:

                                                                                                                            • M1 notwithstanding, Macs feel less premium today than they did 10 years ago.
                                                                                                                            • The market of Apple users has certainly exploded over time, which feels like a red flag as I said above.
                                                                                                            • Just spending money isn’t enough these days to get a premium experience. For example, I see a lot more ads now in situations where I’m a paying customer. And these are not situations I’ve been acculturated to, like movie theaters or planes. Apple increasingly getting into ads reinforces this sense.

                                                                                                                            Here, the key new argument I’m hearing (correct me if I’m wrong) is that people who support Mac shareware are the secret sauce, and yes the community probably got bootstrapped by Apple’s premium aura at some point in time. But it seems to have a quasi-independent (modulo App store) existence at this point.

                                                                                                                            I used Macs from 2008 to 2022 (exclusively at the start and tapering off halfway through to return to Linux), but I never quite made it into this community. I only saw scattered glimpses of it on landing pages here and there that I never integrated into a coherent view of a subculture.

                                                                                                                      2. 1

                                                                                                                        I see, this might be what I’m missing.

                                                                                                                      3. 1

                                                                                                                        I have to say, I am with @benjaminri here.

                                                                                                        His list shows some websites which to me say “here are some professional tools that may be of use to you if you want to do something and don’t want to mess around and waste time.”

                                                                                                        Your list shows 2 sites with cartoonish, pretty but almost childish, ways to mess around and waste time – which are therefore not really of interest to me – and a website that says “hi, we made this bloated inefficient tool in the past and here is a new tool we made and we have absolutely zero mention of it being less bloated or less inefficient” (subtext: it’s not).

                                                                                                                        It’s like GNOME vs Xfce.

                                                                                                        I keep reading GNOME fans who say “it looks great, it’s so slick and efficient, it’s got a great keyboard UI and it gets out of my way.”

                                                                                                                        What I see:

                                                                                                                        • it does look great but I don’t care about looks, I care about functionality much more.

                                                                                                                        • it’s extremely wasteful of screen space and only looks efficient if you don’t know how to use space efficiently.

                                                                                                                        • it has a passable keyboard UI that breaks 30Y of prior art in keyboard UIs, so it’s only good if you didn’t know how to use your older tools. Compare: ribbon-based MS Office versus previous 20Y of MS Office.

                                                                                                                        • it’s not efficient: it’s written in blasted Javascript, FFS, and in its preferred Wayland implementation it’s a giant single thread and if that dies, you lose your entire desktop and all apps, whereas with X11 you could just alt-F2, xterm, metacity or whatever and be back up and running.

                                                                                                                        It looks nice but it doesn’t work very well unless you never appreciated how well the previous generation of tools work.

                                                                                                        I use Xfce on several machines and the design of that says to me, like these basic functional websites: “we care more about it working than about the cosmetics, so if you want to slap skins on it, go ahead”. So like the post you’re responding to: I value minimal cosmetics over the bling you prefer.

                                                                                                                        I’m not saying you are wrong. You do you. But what you like says to me “values form over function”. I value function over form, and if it looks nice that’s a bonus. Your post doesn’t refute the one you’re replying to: it reinforces its message.

                                                                                                                        1. 3

                                                                                                                          My comment was responding to this portion of their comment

                                                                                                                          The more polish there is in the web UX, especially if there’s loads of whitespace, the more the software behaves like malware. Works pretty reliably. If you have excess budget that goes into UX, you’re almost always either neglecting core functionality or taking advantage of your users somehow.

                                                                                                                          Panic Transmit has been the Mac workhorse FTP and remote file management Swiss Army knife since 1998. The Rogue Amoeba line is highly specific, focused utilities targeting audio engineers, music producers, and content creators.

                                                                                                                          They’re all native apps, written for Macs only, no Electron. They are consistent with the OS UI and functional metaphors. They fulfill specific tasks well.

                                                                                                                          They’re not diamonds in the rough. They’re particularly good examples, but there are many small software producers making Mac specific software that is concerned with solving problems in a way thoughtful of UX, both in marketing communication and the app itself, because that’s what their customers value.

                                                                                                                          When looking for CLI FOSS tools, the users value other stuff. Pure black text on a pure white background. Blue links. Bulleted lists with big indents. It’s an aesthetic.

                                                                                                                          The bottom half of your comment I don’t really know what to do with.

                                                                                                                          1. 1

                                                                                                                            Yeah, if you actually used any of these Mac apps, you’d realize they’re often far more intricate and featureful than they look at first glance, more so than GUI apps on other platforms. Things like gestures and the contents of the menu bar will make this obvious when you actually use them. Good design makes them easy to get started with and progressively discloses further functionality.

                                                                                                                            1. 1

                                                                                                                              Well, yes: you listed 3 examples, and I mentioned 2 of them. Rogue Amoeba was the one I wasn’t talking about, because it’s [a] not a games company [b] doesn’t do Electron apps AFAIK.

                                                                                                                              And @benjaminri’s examples weren’t CLI apps, so while you do have a point, I don’t think it applies to them very well.

                                                                                                                              As for the rest: well, OK then, never mind. I am aware I am something of an odd-one-out here but I don’t think I’m entirely alone.

                                                                                                                        2. 7

                                                                                                                          I purchased a printer from a major manufacturer yesterday and had to download drivers. Let me tell you, the webpage was janky AF but I’m not sure that translates into good software…

                                                                                                                        3. 5

                                                                                                                          is it respectful of my attention

                                                                                                                          for me, good design and UX are part of this

                                                                                                                          if a tool makes me squint or mess around with my window or zoom or fonts or styling, that’s effort spent on something besides actually using it

                                                                                                                          1. 4

                                                                                                                            Absolutely agreed! But there are degrees to this. If I adjust the fonts once, that’s some cost. If the design keeps changing and moving things around in ways I don’t care about, that’s a larger cost.

                                                                                                                            Good design is a contingent thing you can gain or lose. When Gruber compares a proprietary Mastodon app with an open source one, that’s a perfect place for the caveat that, you know, this one’s great now but if it degrades because they took out some additional VC funding, you’re outta luck. With this second app they know someone else might make a fork so they’ll be more careful. They gave up some leverage early, and that deserves credit.

                                                                                                                          2. 1

                                                                                                                            But they never do.

                                                                                                                            I’ve often wished I’d had https://en.wikipedia.org/wiki/How_to_Design_Programs for my textbook. I wonder if universities still use it.