Threads for adam_d_ruppe

  1. 14

    Almost all the people starting UI toolkits or GUI libraries and posting them here either don’t mention accessibility at all, or say it is something they hope to get to eventually… but never actually do. It is complicated even on Windows, where there is a pretty established system API for it, and on Linux it is hard to even find a reasonable interface to use when you do make the effort.

    1. 6

      I thought this kind of thing was the whole reason Wayland existed. (Nevermind the fact that multi-monitor fractional scaling already works in X and has for quite some time, all it was really lacking is a cross-toolkit spec to unify the applications).

      1. 3

        Doesn’t different scaling on different monitors still require separate surfaces in X? As in, you can’t drag a window from one to the other? I remember that was a limitation some time ago.

        1. 3

          Doesn’t different scaling on different monitors still require separate surfaces in X?

          No, and it hasn’t for a very long time (xrandr combines the monitors into one virtual canvas, you’d have to go back to the different screens feature in the original core spec which has all the different monitor settings but indeed ties windows to a specific monitor).

          But even with the relatively new xrandr techniques, since the X protocol, ICCCM, and freedesktop specs don’t actually specify scaling behavior, each toolkit and/or compositor can do its own thing. So your results will vary a lot with gtk vs qt etc. And xrandr’s built-in bitmap scaling is pretty poor (it is just integer scaling onto the canvas… basically the same as Wayland has…), so you’d not want to use that unless you’re desperate. But, to be fair, even that basic facility still lets you drag things between monitors. It is only the old multiple-screen facility that tied windows to a monitor, since it didn’t combine the monitors into one coordinate-space canvas.
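          For the curious, that built-in bitmap scaling is xrandr’s --scale option; a sketch (the output name here is made up and varies per machine, see `xrandr --query`):

          ```
          # Render HDMI-1 scaled 1.4x onto the single virtual canvas.
          # This is the poor bitmap scaling mentioned above, not
          # toolkit-aware fractional scaling.
          xrandr --output HDMI-1 --scale 1.4x1.4
          ```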

      1. 19

        Scrollbars on Linux and Windows 11 won’t take space by default.

        Can we please stop this (ever-continuing) trend? I originally thought auto-hiding scrollbars were a cool design trick until I realized just how much a scrollbar adds to UX: it’s a permanently visible representation of how big a document is, and how far along in it I am.

        Another release, yet more Firefox UI/UX changes seemingly just for the sake of change (which I guess is also the state of the modern web, in many ways, so it’s somewhat fitting).

        1. 11

          What I find interesting is they admit it harms accessibility - that’s why you can turn it back on under accessibility options.

          Why do we find gratuitous inaccessibility by default acceptable, instead of vice versa?

          1. 1

            they admit it harms accessibility

            Here, “they” is Windows, not Firefox. It is Windows that categorized scrollbar visibility as an accessibility option.

            On Windows, Firefox follows the system setting (System Settings > Accessibility > Visual Effects > Always show scrollbars).

          2. 6

            I’m not sure about Windows/Linux, but on macOS you can just rest two fingers on the trackpad to make the scrollbar in the current app visible. You don’t have to scroll the app, just rest your fingers on the trackpad.

            And when you connect a non-Apple mouse to a Mac, the scrollbars become permanently visible by default.

            Because of this, I haven’t found auto-hiding scrollbars to be a usability issue at all.

            1. 5

              On Mac you can also toggle scroll bars back on for all apps in the system preferences, which is what I do.
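              For what it’s worth, the same setting can be flipped from a terminal with the long-documented global default (takes effect for newly launched apps):

              ```
              # macOS: always show scrollbars, for all apps.
              # Valid values: "Automatic", "WhenScrolling", "Always"
              defaults write NSGlobalDomain AppleShowScrollBars -string "Always"
              ```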

              1. 2

                You don’t have to scroll the app, just rest your fingers on the trackpad.

                Seems to be app specific, because this works in Firefox 100 but not in Chrome 101.

            1. 2

              There ought to be a standardized “CGI 2” that works similar to a mashup of CGI + Lambda – a single connection per process, but processes can be reused, with the application server starting up new processes to handle connection concurrency.

              I’d use a simple framing protocol over stdin/stdout that falls back to CGI 1 if the right CGI_VERSION environment variable is not set.

              Conceptually doing something like this:

              > {"request":"GET","path":"/index.html","headers":{"Content-Type":"text/plain"}}\n
              < {"code":200,"message":"OK","length":13}\n
              < Hello world.\n\n
              > {"request":"GET","path":"/page.html","headers":{"Content-Type":"text/plain"}}\n
              > {"code":403,"message":"OK","length":0}\n
              1. 3

                Isn’t that just FastCGI?

                1. 2

                  FastCGI is more typically implemented as just a transport, so you need to arrange for a daemon to be listening on a given socket. That breaks the magical “drop files in a directory” type of workflow — and in most cases you may as well just deploy nested HTTP.

                  All of this is about developer experience, so I think it’s important how the typical application server implements it.


                  I guess maybe I just want actual FastCGI support in the servers that I use? Hmm 🤔

                  1. 3

                    FastCGI on Apache works the same as dropping files in (at least in my config; I guess it could be different for other people): it manages the worker processes for you, starting and stopping them as needed.
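                    With mod_fcgid, for instance, the “drop a file in” flow is roughly this kind of config (directory path and extension are illustrative):

                    ```
                    # Apache + mod_fcgid: any .fcgi file under this directory
                    # becomes a supervised FastCGI app; Apache spawns and reaps
                    # worker processes as needed.
                    <Directory "/var/www/app">
                        Options +ExecCGI
                        AddHandler fcgid-script .fcgi
                    </Directory>
                    ```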

                    1. 2

                      I think uwsgi’s fastcgi interface would do basically what you’re talking about, but I guess fair enough that it’s probably not the typical application server.

                  2. 1

                    SCGI and its ilk have been a thing for quite some time. That’s never been CGI’s real problem.

                  1. 17

                    On the one hand, I totally get the value of a lack of a build step. Build steps are annoying. On the other hand, authoring directly in HTML is something I am perfectly happy to do as little of as possible. It’s just not a pleasant language to write in for any extended amount of time!

                    1. 20

                      I’m pretty convinced that Markdown is the local maximum for the “low effort, nice looking content” market.

                      1. 10

                        Agreed. ASCIIDoc, reStructuredText, LaTeX, and other more-robust-than-Markdown syntaxes all have significantly more power but also require a great deal more from you as a result. For just putting words out, Markdown is impressively “good enough”.

                        1. 4

                          I can never remember Markdown syntax (or any other wiki syntax for that matter), while I’m fairly fluent in HTML, and I’m not even a frontend dev. HTML also has the advantage that if some sort of exotic markup is necessary, you know it’s expressible, given time and effort.

                          1. 7

                            That’s fine, because Markdown allows embedded HTML [1].

                            About the only thing that’s a bit obtuse is the link syntax, and I’ve gladly learned that to not have to manually enclose every damn list with <ul> or <li> tags.

                            [1] at least Gruber’s OG Markdown allowed it by default, and I recently learned CommonMark has an “unsafe” mode to allow it too.

                            1. 11

                              The trick to remember how to do links in Markdown is to remember that there are brackets and parentheses involved, then think what syntax would make sense, then do the opposite.

                              1. 4

                                For reference: a Markdown [link](url)

                                Elaboration on the mnemonic you describe

                                I thought like you when I first started learning Markdown:

                                • Parentheses () are normal English punctuation, so you would intuitively expect them to surround the text, but they don’t.
                                • Square brackets [] are technical symbols, so you would intuitively expect them to surround the URL, but they don’t.

                                However, I find “don’t do this” mnemonics easy to accidentally negate, so I don’t recommend trying to remember the order that way.

                                Another mnemonic

                                I think Markdown’s order of brackets and parentheses is easier to remember once one recognizes the following benefit:

                                When you read the first character in […](…), it’s clear that you’re reading a link. ‘[’ is a technical symbol, so you know you’re not reading a parenthetical, which would start with ‘(’. Demonstration:

                                In this Markdown, parentheticals (which are everywhere) and
                                [links like these](url) can quickly be told
                                apart when reading from left to right.
                                Why not URL first?

                                Since you wrote that Markdown does “the opposite”, I wonder if you also intuitively expect the syntax to put the URL before the text, like MediaWiki’s external-link syntax does ([url link text]). I never found that order intuitive, but I can explain why I prefer text first:

                                When trying to read only the text and skip over the URLs, it’s easier to skip URLs if they come between grammatical phrases of the text (here), rather than interrupting a (here) phrase. And links are usually written at the end of phrases, rather than at the beginning.

                                1. 2

                                  Well I’ll be damned. That completely makes sense.

                                  I do, however, wonder whether this is a post-hoc rationalization and the real reason for the syntax is much dumber.

                                2. 3

                                  Hah. The mnemonic I use is everyone gets the ) on the end of their wiki URLs fucked up by markdown… because the () goes around the URL. Therefore it is []().

                                  1. 2

                                    This is exactly what I do. Parens are for humans, square brackets are for computers, so obviously it’s the other way around in markdown.

                                  2. 3

                                    A wiki also implies a social contract about editability. If my fellow editors have expressed that they’re uncomfortable with HTML, it’s not very polite of me to use it whenever I find Markdown inconvenient.

                                    1. 1

                                      Of course. I was replying in context of someone writing for themselves.

                                  3. 3

                                    This is interesting: I’ve heard that same experience report from a number of people over the years so I believe it’s a real phenomenon (the sibling comment about links especially being the most common) but Markdown clicked instantly for me so I always find it a little surprising!

                                    I have hypothesized that it’s a case of (a) not doing it in a sustained way, which of course is the baseline, and (b) something like syntactical cross-talk from having multiple markup languages floating around; I took longer to learn Confluence’s wiki markup both because it’s worse than Markdown but also because I already had Markdown, rST, and Textile floating around in my head.

                                    I’m curious if either or both of those ring true, or if you think there are other reasons those kinds of markup languages don’t stick for you while HTML has?

                                    1. 2

                                      I’m not Michiel, but for me, it’s because HTML is consistent (even if it’s tedious). In my opinion, Gruber developed Markdown to make it easier for him to write HTML, and to use conventions that made sense to him for some shortcuts (the fact that you could include HTML in his Markdown says to me that he wasn’t looking to replace HTML). Markdown was to avoid having to type common tags like <P> or <EM>.

                                      For years I hand-wrote the HTML for my blog (and for the record, I still have to click the “Markdown formatting available” link to see how to make links here). A few years ago I implemented my own markup language [1] that suits me. [2] My entries are still stored as HTML. That is a deliberate decision so I don’t get stuck with a subpar markup syntax I later come to hate. I can change the markup language (I’ve done it a few times already) and if I need to edit past entries, I can deal with the HTML.

                                      [1] Sample input file

                                      [2] For instance, a section for quoting email, which I do quite often. Or to include pictures in my own particular way. Or tabular data with a very light syntax and some smarts to generate the right class on <TD> elements consisting of numeric data (so they’re flush right). Stuff like that.

                                      1. 2

                                        Yeah, with markdown, I often accidentally trigger some of its weird syntax. It needs a bunch of arbitrary escapes, whereas in HTML you can get away with just using &lt;. Otherwise, it is primarily just those <p> tags that get you; the rest are simple or infrequent enough to not worry about.

                                        whereas again, with the markdown, it is too easy to accidentally write something it thinks is syntax and break your whole thing.

                                        1. 1

                                          Yes, I’ve found that with mine as well.

                                        2. 1

                                          I don’t mean this as an offense, but I took a quick look at your custom markup sample and I hated pretty much everything about it.

                                          Since we’re all commenting under a post from someone that is handwriting HTML, I think it goes without saying that personal preferences can vary enormously.

                                          Updated: I don’t hate the tables syntax, and, although I don’t particularly like the quote syntax, having a specific syntax for it is cool and a good idea.

                                          1. 1

                                            Don’t worry about hating it—even I hate parts of it. It started out as a mash-up of Markdown and Org mode. The current version I’m using replaces the #+BEGIN_blah #+END_blah with #+blah #-blah. I’m still working on the quote syntax. But that’s the thing—I can change the syntax of the markup, because I don’t store the posts in said markup format.

                                        3. 2

                                          You’re absolutely right, and so is spc476; HTML has a regular syntax. Even if I’ve never seen the <aside> tag, I can reason about what it does. Escaping rules are known and well-defined. If you want to read the text, you know you can just ignore anything inside the angle brackets.

                                          Quick: in Markdown, if I want to use a backtick in a fixed-width span, do I have to escape it? How about an italic block?

                                          This would all be excusable if Markdown was a WYSIWYG plain-text format (as per Gruber’s later rationalisation in the CommonMark debate). Then I could mix Markdown, Mediawiki, rST and email syntax freely, because it’s intended for humans to read, and humans tend to be very flexible.

                                          But people do expect to render it to HTML, and then the ambiguities and flexibility become weaknesses, rather than strengths.
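                                          For what it’s worth, CommonMark did eventually pin these down: backslash escapes do not work inside code spans at all, so a literal backtick needs a longer backtick run as the delimiter, while emphasis does take a backslash escape. A sketch:

                                          ```markdown
                                          Literal backtick in a code span: `` a ` b `` (longer delimiter, no escaping inside).
                                          Literal asterisk in emphasis: *a \* b* (backslash escape works here).
                                          ```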

                                      2. 2


                                        While I agree about the others, I fairly strongly disagree about AsciiDoc (in asciidoctor dialect). When I converted my blog from md to adoc, the only frequent change was the syntax of links (in adoc, URL goes first). Otherwise, markdown is pretty much valid asciidoc.

                                        Going in the opposite direction would be hard though — adoc has a bunch of stuff inexpressible in markdown.

                                        I am fairly certain in my opinion that, purely as a language, adoc is far superior for authoring html-shaped documents. But it does have some quality of implementation issues. I am hopeful that, after it gets a standard, things on that front would improve.

                                        1. 1

                                          That’s helpful feedback! It’s lumped with the others in my head because I had such an unhappy time trying to use it when working with a publisher[1] a few years back; it’s possible the problem was the clumsiness of the tools more than the syntax. I’ll have to give it another look at some point!

                                          [1] on a contract they ultimately dropped after an editor change, alas

                                      3. 4

                                        Agree, I’ve been using it a ton since 2016 and it has served me well. I think it’s very “Huffman coded” by people who have written a lot. In other words, the common constructs are short, and the rare constructs are possible with embedded HTML.

                                        However I have to add that I started with the original (written ~2004) and it had some serious bugs.

                                        Now I’m using the CommonMark reference implementation and it is a lot better.

                                        CommonMark is a Useful, High-Quality Project (2018)

                                        It has additionally standardized markdown with HTML within markdown, which is useful, e.g.

                                        <div class="">

                                        this is *markdown*

                                        </div>

                                        I’ve used both ASCIIDoc and reStructuredText and prefer markdown + embedded HTML.

                                        1. 3

                                          I tend to agree, but there’s a very sharp usability cliff in Markdown if you go beyond the core syntax. With GitHub-flavoured Markdown, I can specify the language for a code block, but if I write `virtual` inline then there’s no consistent syntax to specify that it’s a C++ code snippet and not something else where the word ‘virtual’ is an identifier and not a keyword. I end up falling back to things like Liquid or plain HTML. In contrast, in LaTeX I’d write \cxx{virtual} and define a macro elsewhere.

                                          I wish Markdown had some kind of generic macro definition syntax like this, which I could use to provide inline domain-specific semantic markup that was easier to type (and use) than <cxx>virtual</cxx> and an XSLT to convert it into <code style="cxx">virtual</code> or whatever.
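                                          For reference, such a LaTeX macro could be as simple as the following (the \cxx name comes from the comment above; the listings-based variant is just one possible setup):

                                          ```latex
                                          % Plainest version: typeset the keyword in monospace.
                                          \newcommand{\cxx}[1]{\texttt{#1}}

                                          % Or, assuming the listings package, with C++ lexing:
                                          % \usepackage{listings}
                                          % \renewcommand{\cxx}[1]{\lstinline[language=C++]{#1}}
                                          ```

                                          The payoff is the usual macro one: change the definition in one place and every marked-up keyword in the document follows.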

                                          1. 3

                                            I agree. What sometimes makes me a bit sad is that markdown has a feature many alternatives lack: you can write it so that it reads as a nice-looking plain-text document as well, one you might just output on the terminal, for example.

                                            It kind of has that nicely formatted plain-text email style, especially with the alternative (underlined) syntax for headings.

                                            Yet when looking at READMEs in many projects it is really ugly and hard to read for various reasons.

                                            1. 4

                                              The biggest contributor there in my experience (and I’m certainly “guilty” here!) is unwrapped lines. That has other upsides in that editing it doesn’t produce horrible diffs when rewrapping, but that in turn highlights how poor most of our code-oriented tools are at working with text. Some people work around the poor diff experience by doing a hard break after every sentence so that diffs are constrained and that makes reading as plain text even worse.

                                              A place I do wrap carefully while using Markdown is git commit messages, which are basically a perfect use case for the plain text syntax of Markdown.

                                              1. 1

                                                I honestly don’t care that much about the diffs? I always wrap at around 88/90 (Python’s black’s default max line length), and diffs be damned.

                                                I also pretty much NEVER have auto-wrap enabled, especially for code. I’d rather suffer the horizontal scroll than have the editor lie about where the newlines are.

                                          2. 4

                                            It’s not just that they’re annoying, computing has largely been about coping with annoyances ever since the Amiga became a vintage computer :-). But in the context of maintaining a support site, which is what the article is about, you also have to deal with keeping up with whatever’s building the static websites, the kind of website that easily sits around for like 10-15 years. The technology that powers many popular static site generators today is… remarkably fluid. Unless you want to write your own static site generator using tools you trust to stay sane, there’s a good chance that you’re signing up for a bunch of tech churn that you really don’t want to deal with for a support site.

                                            Support sites tend to be built by migrating a bunch of old pages in the first two weeks, writing a bunch of new ones for the first two months, and then infrequently editing existing pages and writing maybe two new pages each year for another fifteen years. With most tools today, after those first two or three honeymoon years, you end up spending more time just keeping the stupid thing buildable than actually writing the support pages.

                                            Not that writing HTML is fun, mind you :(.

                                            (Please don’t take this as a “back in my day” lament. A static site generator that lasts 10 years is doable today and really not bad at all – how many tools written in 1992 could you still use in 2002, with good results, not as an exercise in retrocomputing? It’s not really a case of “kids these days ruined it” – it’s just that time scales are like that ¯\_(ツ)_/¯ )

                                            1. 1

                                              Heh. I was using an editor written in 1981 in 2002! [1] But more seriously, I wrote a static site generator in 2002 that I’m still using (I had to update it once in 2009 due to a language change). On the down side, the 22 year old codebase requires the site to be stored in XML, and uses XSLT (via xsltproc) to convert it to HTML. On the plus side, it generates all the cross-site links automatically.

                                              [1] Okay, it was to edit text files on MS-DOS/Windows.
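                                              The xsltproc step described amounts to something like this (file names invented for illustration):

                                              ```
                                              # Apply the XSLT stylesheet to one XML source page,
                                              # producing the final HTML.
                                              xsltproc --output page.html site.xsl page.xml
                                              ```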

                                            2. 2

                                              I find that writing and editing XML or HTML isn’t so much of a pain if you use some kind of structural editor. I use tagedit in Emacs along with a few snippets / templates and imo it’s pretty nice once you get used to it.

                                            1. 1

                                              Not to disparage the effort, but I’m curious why the author has chosen to implement an X11 compatibility layer and not a Wayland one, since X is rapidly approaching obsolescence in the Linux space.

                                              1. 23

                                                since X is rapidly approaching obsolescence in the Linux space

                                                This is said a lot, but it isn’t really true.

                                                1. 3

                                                  This is said a lot, but it isn’t really true.

                                                  But it should be. X is… old. It should be resting.

                                                  1. 17

                                                    Linux is only about seven years younger than X. I guess it must be approaching obsolescence too.

                                                    For that matter, I was released in 1986….. oh dear, I don’t think my retirement account is ready for that :(

                                                    1. 13

                                                      Personally I can’t believe the Internet is still served primarily via TCP and UDP. They are 42 years old and should be put to rest.


                                                      1. 1

                                                        You’re right. We should be using SCTP instead.

                                                  2. 11

                                                    The author explained this here.

                                                    1. 9

                                                      Even if Wayland does finally replace Xorg for Linux users, it doesn’t necessarily mean people will stop wanting to run X11 applications.

                                                      1. 7

                                                        X was obsolete, full stop, a decade or two ago. Whether or not a thing is obsolete has little to do with how ubiquitous or useful it is.

                                                      1. 1

                                                        I have to port some VBA in an Access database to another language for work. I haven’t touched VBA for like 12 years.

                                                        1. 2

                                                          I remember my first multitasking experience.

                                                          On a basic A500 (68000, 512KB “Chip RAM”), I opened tens of clocks (Workbench 1.3’s :utilities/clock), and they were all running without issue.

                                                          I was impressed with the level of bloat Windows 95 must have had, to not be able to update a single clock in the taskbar once per second on an actual 386, with its 32-bit ALU and higher IPC.

                                                          These days, with a multicore 64bit GHz+ machine, I enjoy i3status and its clock updating every 5 seconds.

                                                          1. 9

                                                            Impressed with the level of bloat Windows 95 must have had

                                                            You should try to understand things instead of insulting things. It easily could do it, but (as the linked article describes) it came with a cost, and that cost meant that in certain circumstances, it’d harm performance on something the user actually cared about in the name of something that was expendable.

                                                            Could you open tens of clocks on your old system while running another benchmark task that actually needed all the available memory without affecting anything?

                                                             With the one-minute update, the taskbar needed to be paged in only once a minute, then could be swapped back out to give the running application the memory back. A few kilobytes can make a difference when you’re thrashing to the hard drive and back every second.

                                                            1. 4

                                                              it’d harm performance on something the user actually cared about in the name of something that was expendable.

                                                              AmigaOS isn’t just preemptive, it also has hard priorities. If a higher priority task becomes runnable, the current task will be instantly preempted.

                                                              Could you open tens of clocks on your old system while running another benchmark task that actually needed all the available memory without affecting anything?

                                                              “All the available memory” means there’s no memory for clocks or multitasking to begin with.

                                                              Taking over the system was as easy as calling exec.library’s Disable(), which disables interrupts, then doing whatever you wanted with the system. This is how e.g. Minix 1.5 would take over the A500.

                                                              Alternatively, it is possible to disable preemption while still allowing interrupts to be serviced, with Forbid().

                                                              With the one minute update, the taskbar needed only be paged in once a minute

                                                              Why does the taskbar use so much ram that this would even matter, in the first place?

                                                              1. 2

                                                                “All the available memory” means there’s no memory for clocks or multitasking to begin with.

                                                                Windows 95 supported virtual memory and page file swapping. There’s a significant performance drop off when you cross the boundary into that being required, and the more times you cross it, the worse it gets.

                                                                Why does the taskbar use so much ram that this would even matter, in the first place?

                                                                They were squeezing benchmarks. Even a small number affects it. Maybe it was more marketing than anything else, but still, the benefits of showing seconds were dubious, so they decided it wasn’t worth it anyway.

                                                            2. 3

                                                              I’d imagine context switching is much faster on Amiga OS, since there’s only a single address space and no memory protection.

                                                              1. 2

                                                                 The 68000 has very low and quite consistent interrupt latency, and AmigaOS indeed did not support/use an MMU, but I don’t see how this is relevant considering how much faster and higher-clocked the 80386s that Win95 requires are.

                                                                1. 3

                                                                  I think maybe you give the 80386 too much credit. I don’t think the x86 processors of the day were really that much faster than their m68k equivalents, and the ones with higher clock speeds were generally saddled with a system bus that ran at half or less than the speed of the chip. Add on the cost of maintaining separate sets of page tables per process, and the invalidation of the wee little bit of cache such a chip might have when switching between them, and doing all of this on a register-starved and generally awkward architecture.

                                                            1. 23

                                                              The thing is that systemd is not just an init system, given that it wants to cover a lot of areas and “seeps” into the userspace. There is understandably a big concern about this, and not just one of a political nature. Many have seen the problems the pulseaudio monoculture has brought, which is a comparable case. This goes without saying that ALSA has its problems, but pulseaudio is very bloated, and other programs do a much better job (sndio, pipewire (!)); those now have a lot of problems gaining more traction (and even outright have to camouflage as PulseAudio to be picked up by existing software).

                                                              Runit, sinit, etc. have shown that you can rethink an init system without turning it into a monoculture.

                                                              1. 4

                                                                In theory, having all (or at least most) Linux distros on a single audio subsystem seems like a good idea. Bugs should get fixed faster, compatibility should be better, it should be easier for developers to target the platform. But I also see a lot of negativity toward PulseAudio and people seem to feel “stuck” with it now.

                                                                So where’s the line between undesirable monoculture and undesirable fragmentation?

                                                                1. 21

                                                                  The Linux ecosystem is happy with some monocultures, the most obvious one is the Linux kernel. Debian has dropped support for other kernels entirely, most other distros never tried. Similarly, with a few exceptions such as Alpine, most are happy with the GNU libc and coreutils. The important thing is quality and long-term maintenance. PulseAudio was worse than some of the alternatives but was pushed on the ecosystem because Poettering’s employer wanted to control more of the stack. It’s now finally being replaced by PipeWire, which seems to be a much better design and implementation. Systemd followed the same path: an overengineered design, a poor implementation (seriously, who in the 2010s, thought that writing a huge pile of new C code to run in the TCB for your system was a good idea?) and, again, pushed because Poettering’s employer wanted to control more of the ecosystem. The fact that the problems it identifies with existing service management systems are real does not mean that it is a good solution, yet all technical criticism is overridden and discounted as coming from ‘haters’.

                                                                  1. 5

                                                                    seriously, who in the 2010s, thought that writing a huge pile of new C code to run in the TCB for your system was a good idea?

                                                                    I really want to agree with you here, but looking back at 2010, what other choice did he realistically have? Now it’s easy: everyone will just shout Rust. But according to Wikipedia, Rust didn’t have its first release until June, while systemd had its first release in March.

                                                                    There were obviously other languages that were much safer than C/C++ around then, but I can’t think of any that people would have been okay with. If he had picked D, for example, people would have flipped over the garbage collection. Using a language like Python probably wasn’t a realistic option either. C was, and still is, ubiquitous, just like he wanted systemd to be.

                                                                    1. 3

                                                                      I really want to agree with you here, but looking back at 2010 what other choice did he realistically have?

                                                                      C++11 was a year away (though was mostly supported by clang and gcc in 2010), but honestly my choice for something like this would be 90% Lua, 10% modern C++. Use C++ to provide some useful abstractions over OS functionality (process creation, monitoring) and write everything else in Lua. Nothing in something like systemd is even remotely performance critical and so there’s no reason that it can’t be written in a fully garbage collected language. Lua coroutines are a great abstraction for writing a service monitor.

                                                                      Rust wouldn’t even be on my radar for something like this. It’s a mixture of things that can’t be written in safe Rust (so C++ is a better option because the static analysis tools are better than they are for the unsafe dialect of Rust) and all of the bits that can could be written more easily in a GC’d language (and don’t need the performance of a systems language). I might have been tempted to use DukTape’s JavaScript interpreter instead of Lua but I’d have picked an interpreted, easily embedded, GC’d language (quickjs might be a better option than DukTape now but it wasn’t around back then).
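                                                                      For what it’s worth, the coroutine-per-service idea sketches easily in any language with coroutines; here is a rough analogue using Python’s asyncio (service names and paths are made up, and a real monitor would need throttling, logging, and dependency handling):

```python
import asyncio
from typing import Optional

async def monitor(name: str, argv: list, restarts: Optional[int] = None) -> int:
    """Supervise one service: restart it whenever it exits.

    `restarts` bounds the loop (handy for testing); None means run forever.
    """
    count = 0
    while restarts is None or count < restarts:
        proc = await asyncio.create_subprocess_exec(*argv)
        await proc.wait()       # suspend here; other service coroutines keep running
        count += 1
        await asyncio.sleep(0)  # yield to the loop; a real monitor would throttle

    return count

async def supervise_all() -> None:
    # each service is just another coroutine on one event loop
    await asyncio.gather(
        monitor("getty", ["/sbin/agetty", "tty1"]),
        monitor("sshd", ["/usr/sbin/sshd", "-D"]),
    )
```

                                                                      Each service becomes one coroutine; the event loop is the scheduler, which is exactly the abstraction a service monitor wants.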

                                                                      C was, and still is, ubiquitous just like he wanted systemd to be.

                                                                      Something tied aggressively to a single kernel and libc implementation (the maintainers won’t even accept patches for musl on Linux, let alone other operating systems) is a long way away from being ubiquitous.

                                                                    2. 4

                                                                      In what sense is PipeWire any kind of improvement on the situation? It’s gstreamer being rewritten by, checking notes, the same gstreamer developers, with the sole improvement over the previous design being the use of dma-buf as a primitive, and with the same problems we have with dma-buf being worse than (at least) its iOS and Android counterparts. Poettering’s employer is the same as Wim Taymans’. It is still vastly inferior to what DirectShow had with GraphEdit.

                                                                    3. 14

                                                                      I’ve been using Linux sound since the bad old days of selecting IRQs with dipswitches. Anyone who says things are worse under PulseAudio is hilariously wrong. Sound today is so much better on Linux. It was a bumpy transition, but that was more than a decade ago. Let it go.

                                                                      1. 6

                                                                        Sound today is so much better on Linux.

                                                                        Mostly because of improvements to ALSA despite pulseaudio, not because of it.

                                                                        1. 4

                                                                          Yep! Pulseaudio routinely forgot my sound card existed and made arbitrary un-requested changes to my volume. Uninstalling it was the single best choice I’ve made with the software on my laptop in the last half decade.

                                                                      2. -2

                                                                        It’s no accident that PulseAudio and SystemD have the same vector, Poettering.

                                                                        1. 16

                                                                          The word you’re looking for is “developer”, or “creator”. More friendlysock experiment, less name-calling, please :)

                                                                          1. 3

                                                                            Was Poettering not largely responsible for the virulent spread of those technologies? If so, I think he qualifies as a vector. I stand by my original wording.

                                                                            1. 6

                                                                              It’s definitely an interesting word choice. To quote Merriam-Webster: vector (noun), \ˈvek-tər\

                                                                              1. […]
                                                                                1. an organism (such as an insect) that transmits a pathogen from one organism or source to another
                                                                                2. […]
                                                                              2. an agent (such as a plasmid or virus) that contains or carries modified genetic material (such as recombinant DNA) and can be used to introduce exogenous genes into the genome of an organism

                                                                              To be frank, I mostly see RedHat’s power hunger at fault here. Mr. Poettering was merely an employee whose projects, which without doubt follow a certain ideology, fit into this monopolistic endeavour. No one is to blame for promoting their own projects, though, and many distributions quickly followed suit in adopting the RedHat technologies which we are now more or less stuck with.

                                                                              Maybe we can settle on RedHat being the vector for this, because without their publicity probably no one would’ve picked up any of Poettering’s projects on a large scale. To give just one argument for this, consider the fact that PulseAudio’s addition to Fedora (which is heavily funded by RedHat) at the end of 2007 coincides with Poettering’s latest-assumed start of employment at RedHat in 2008 (probably earlier), while PulseAudio wasn’t given much attention beforehand.

                                                                              Let’s not attack the person but discuss the idea though. We don’t need a strawman to deconstruct systemd/pulseaudio/avahi/etc., because they already offer way more than enough attack surface themselves. :)

                                                                              1. 5

                                                                                Let’s not attack the person but discuss the idea though. We don’t need a strawman to deconstruct systemd/pulseaudio/avahi/etc., because they already offer way more than enough attack surface themselves. :)

                                                                                This is why this topic shouldn’t be discussed on this site.

                                                                    1. 13

                                                                      I’m quite skeptical of the real world value of 24bit color in a terminal at all, but the biggest problem I have with most terminal colors is they don’t know what the background is. So they really must be fully user configurable - not just turn on/off, but also select your own foreground/background pairs - and this is easier to do with a more limited palette anyway.

                                                                      I kinda wish that instead of terminal emulators going down the 24-bit path, they had actually defined some kind of more generic yet standardized semantic palette entries for applications to use, which users could configure once and have applied across all applications.


                                                                      1. 4

                                                                        I’m quite skeptical of the real world value of 24bit color in a terminal at all

                                                                        I have similar misgivings, but I admit to liking the result of 24-bit colour. It’s useful! I just don’t like how it gets there.

                                                                        Something that is a never-ending source of problems with the addition of terminal colours in the output of utilities these days is that in almost every case they are optimized for dark mode. I don’t use, nor can I stand, dark mode. It is horrible to read. But as a result, the colour output from the tools is unreadable. bat is the most recent one I tried. I ran it on a small C file and I literally couldn’t read most of the output.

                                                                        Yes, you can configure them but when they are useless out-of-the-box, the incentive is very low to want to configure everything. And then, I could just… not configure them and use the standard ones that are still just fine.

                                                                        Terminal colours are really useful. I find 24-bit colour Emacs in a terminal pretty nice. It’s the exception. Most other modern terminal tools that produce colour output don’t work for me because they can’t take into account my current setup.

                                                                        Having standard colour palettes that the tools could access would be much better.

                                                                        1. 4

                                                                          I’ve started polling my small sample size of students and they almost unanimously prefer dark mode. I suspect this is most people’s preference, which is why it’s the default of most tools.

                                                                          Personally I prefer dark because I have a lot of floaters in my eyes that are distracting with light backgrounds. For many years I had to change the defaults to dark.

                                                                          That said, I like to be able to toggle back and forth between light and dark. When I’m outside in the sun, or using a projector, light mode is critical. This is made difficult by every tool using their own color palette rather than the terminal’s. Some tools can be configured to do so, and maybe that should be their default.

                                                                          1. 5

                                                                            I suspect this is most people’s preferred which is why it’s the default of most tools.

                                                                            Back when I was in undergrad (~25 years ago), light mode was what everyone used. Then again, it was always on a CRT monitor and was the default for xterms everywhere. If you got a dark theme happening, it attracted some attention because you knew what you were doing. People did it to show off a bit. (I did it too!)

                                                                            Then I got older and found dark backgrounds remarkably difficult to read from. I haven’t used them for well over 15 years. I simply cannot read comfortably on such colour schemes, which is why I have to use reader view or the zap colours bookmarklet all the time.

                                                                            I’m not saying dark mode is bad, but I am saying it’s probably trendy. I suspect things will swing in a different direction eventually, especially as the eyes of those who love it now get older. (They inevitably get worse! Be ready for it.) So the default will likely change. In which case, maybe we should really consider not hard-baking colour schemes into tools and move the colour schemes to somewhere else, as you mention. This is the better way to go. As I mention elsewhere in the thread, configuring bat, rg, exa, and all these modern tools individually is just obnoxious. Factor the colour schemes out of the tools somehow. It’s a better solution in the long run.

                                                                            1. 1

                                                                              I too find light displays easier to read.

                                                                              From memory, the first time I heard of TCO-approved screens was when Fujitsu(?) introduced a CRT screen with high resolution, a white screen, and crisp black text. This was considered more legible and more ergonomic.

                                                                              (TCO is Tjänstemännens Centralorganisation, the main coordinating body of Swedish white-collar unions. Ensuring a good working environment for their members is a core mission.)

                                                                              1. 2

                                                                                What I find helps the most is reducing the blue light levels - stuff like f.lux works well.

                                                                                I’m also looking into e-ink monitors, but damn, they’re pricey.

                                                                          2. 3

                                                                            Yeah, I’m a fan of light mode (specifically white backgrounds) on screen most of the time too, and I actually found colors so bad that’s a big reason why I wrote my own terminal emulator. Just changing the palette by itself wasn’t enough; I wanted it to adjust based on the dynamic background too. Say an application tries to print blue on black: my terminal will choose a different “blue” than if it were blue on white. Having the terminal emulator itself do this means it applies to all applications without reconfiguration, it applies if I do screen -d -r from a white screen to a black screen (since the emulator knows the background, unlike the applications!), and it applies even if the application specifically printed blue on black, since that just drives me nuts and I see no need to respect an application that doesn’t respect my eyes.
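                                                                            The background-aware substitution described above could be approximated like this (a hypothetical sketch using the WCAG contrast formula, not the actual terminal’s code):

```python
def luminance(rgb):
    # WCAG relative luminance from 8-bit sRGB components
    def chan(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (chan(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(fg, bg):
    # WCAG contrast ratio, always >= 1
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def pick(requested, variants, bg, minimum=4.5):
    # keep the requested colour if legible on this background,
    # otherwise substitute the variant with the best contrast
    if contrast(requested, bg) >= minimum:
        return requested
    return max(variants, key=lambda v: contrast(v, bg))

# hypothetical "blue" family: the stock colour, a lighter one for
# dark backgrounds, and a darker one for light backgrounds
BLUES = [(0, 0, 255), (92, 92, 255), (0, 0, 139)]
```

                                                                            Because the emulator knows the real background, the same substitution works for every application, with no per-tool configuration.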

                                                                            A little thing, but it has brought me great joy. Even stock ls would print a green i found so hard to read on white. And now my thing adjusts green and yellow on white too!

                                                                            Whenever I see someone else advertising their new terminal emulator, I don’t look for yet another GPU renderer. I look to see what they did with colors and scrollback controls.

                                                                            1. 2

                                                                              I got fed up with this and decided to do something about it, so after what felt like endless fiddling and colorspace conversions, I have a color scheme that pretty much succeeds at making everything legible, in both light and dark mode. It achieves this by

                                                                              • Deriving color values from the L*C*h* color space to maximize the human-perceived color difference.
                                                                              • Assigning tuned color values as a function of logical color (0-15), whether it’s used for foreground or background, and whether it’s for dark or light mode.
                                                                              • Assigning the default fg/bg colors explicitly as a 17th logical color, distinguished from the 16 colors assignable by escape sequences.

                                                                              As a result, I can even read black-on-black and white-on-white text with some difficulty.
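                                                                              The first bullet’s derivation might look something like this (a hypothetical helper, not the actual scheme’s code; the constants are the standard D65 Lab-to-sRGB conversion):

```python
import math

def lch_to_srgb(L, C, h_deg):
    """Derive an 8-bit sRGB triple from L*C*h(ab), so palette entries
    can be specified by perceptual lightness, chroma, and hue."""
    # LCh -> Lab
    a = C * math.cos(math.radians(h_deg))
    b = C * math.sin(math.radians(h_deg))
    # Lab -> XYZ (D65 white point)
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    def f_inv(t):
        return t ** 3 if t ** 3 > 0.008856 else (116 * t - 16) / 903.3
    X, Y, Z = 0.95047 * f_inv(fx), 1.0 * f_inv(fy), 1.08883 * f_inv(fz)
    # XYZ -> linear sRGB
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    bl = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    def gamma(c):
        # clamp out-of-gamut values, then apply the sRGB transfer curve
        c = min(max(c, 0.0), 1.0)
        c = 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
        return round(c * 255)
    return gamma(r), gamma(g), gamma(bl)
```

                                                                              With chroma held constant across hues, every colour in a row of the palette ends up equally vivid to the eye, which is the point of working in L*C*h* rather than raw RGB.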

                                                                              Here it is:

                                                                              1. 2

                                                                                I had the same problem with bat so I contributed 8-bit color schemes for it: ansi, base16, and base16-256. The ansi one is limited to the 8 basic ANSI colors (well really 6, since it uses the default foreground instead of black/white so that it works on dark and light terminals), while the base16 ones follow the base16 palette.

                                                                                Put export BAT_THEME=ansi in your .profile and bat should look okay in any terminal theme.

                                                                                1. 2

                                                                                  As I said, I could set the theme, but my point was that I don’t want to be setting themes for all these things. That’s maintenance work I don’t need.

                                                                                  1. 1

                                                                                    I definitely agree that defaulting to 24 bit colour is a terrible choice for command line tools, but when it’s a single environment variable to fix, I do think some (bat) are worth the minor, one-off inconvenience.

                                                                              2. 3

                                                                                I agree 100%. I think the closest thing we have to a standardized semantic palette is the base16 palette. It’s a bit confusing because it’s designed for GUI software too, not just terminals, so there are two levels of indirection, e.g. base16 0x8 = ANSI 1 = red-ish. It works great for the first eight ANSI colors:

                                                                                base16  ANSI  meaning
                                                                                ======  ====  ==========
                                                                                0x0     0     background
                                                                                0x8     1     red-ish/error
                                                                                0xb     2     green-ish/success
                                                                                0xa     3     yellow-ish
                                                                                0xd     4     blue-ish
                                                                                0xe     5     violet-ish
                                                                                0xc     6     cyan-ish
                                                                                0x5     7     foreground

                                                                                The other 8 colors are mostly monochrome shades. You need these for lighter text (e.g. comments), background highlights (e.g. selections), and other things. The regular base16 themes place these in ANSI slots 8-15, which are supposed to be the bright colors, which breaks programs that assume those slots have the bright colors.

                                                                                The base16-256 variants copy slots 1-6 into 9-14 (i.e. bright colors look the same as non-bright, which is at least readable), and then puts the other base16 colors into 16-21. It recommends doing this maneuver with base16-shell, which IMO defeats the purpose of base16. base16-shell is just a hack to get around the fact that most terminal emulators don’t let you configure all the palette slots directly; kitty does, so I use my own base16-kitty theme to do that, and use base16-256 for vim, bat, fish, etc. without base16-shell.
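                                                                                For terminals that do allow it, palette slots can also be set at runtime with the xterm OSC 4 escape sequence, so a theme switch applies to every running program at once (the colour value here is just a placeholder):

```python
def set_palette_slot(slot: int, hex_colour: str) -> str:
    # OSC 4 ; <slot> ; <colour spec> BEL -- the xterm dynamic-colour extension
    return f"\033]4;{slot};#{hex_colour}\a"

# example: retheme ANSI red everywhere without touching any tool's config
seq = set_palette_slot(1, "ab4642")
# print(seq, end="")  # emit to the terminal to apply it
```

                                                                                This is essentially what base16-shell does for the slots a terminal’s config file can’t reach.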

                                                                              1. 2

                                                                                I’ve done a single binary website a few times before too, heck my website is one main binary plus one html archive per subdomain, but I don’t often actually go all the way to one binary anymore because the deployment of small changes gets more complicated - it means actually rebuilding the server. (btw the article’s “compilation speed is fast (less than 10s most of the time)” makes me laugh, since I consider a 3s full rebuild on my computer (a budget machine i got in 2015) to be my upper limit of tolerance… a 10s build would be painfully slow to me)

                                                                                I don’t want to wait for the 2 1/2 second server rebuild and restart to see my change, so what I typically do is have the server binary load html templates from a runtime directory on demand. I actually have an open source example (though I haven’t pushed to it for a while, I so rarely git commit things lol, still good enough for this illustration):

                                                                                The server serves its dynamic functions as well as files out of the assets folder (which are just plain css/js/images) and the templates folder, which are HTML fragments the server pieces together. It opens the template requested and the skeleton.html file and merges them; the server does skeleton.querySelector("main").replaceWith(template.querySelector("main")) (basically). So then the shared stuff is shared and the rest just injected at runtime so I can edit those easily and hit refresh to instantly see the changes - no recompile lag (which is about 2.5s again but that’s 2.45s longer than I want to wait).
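                                                                                That merge step could be approximated like this (a regex-based sketch for illustration only; the real implementation uses a proper server-side DOM):

```python
import re

def merge(skeleton: str, template: str) -> str:
    """Replace the skeleton's <main> element with the template's."""
    main = re.search(r"<main>.*?</main>", template, re.S)
    if main is None:
        return skeleton
    # a callable replacement avoids backslash-escape processing of the HTML
    return re.sub(r"<main>.*?</main>", lambda _: main.group(0), skeleton, flags=re.S)
```

                                                                                Since the templates are read from disk on each request, editing one and hitting refresh shows the change with no rebuild at all.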

                                                                                I’m still not completely happy with it, but I do find it overall works pretty well. (and btw, yes, I wrote all the code myself, from the webserver code to the script interpreter to the server-side dom and template implementation, that code is all in here if you’re interested, cgi.d is the web server, dom is the dom (lol), webtemplate.d does the piecing together of the template directory, etc)

                                                                                edit: I should mention actually I prefer doing the traditional cgi instead of the web server for a lot of things too since then the server doesn’t even need to be restarted! but meh, that bingo site needs long-lived connections, so traditional cgi model didn’t fit that as well.

                                                                                1. 3

                                                                                  One very interesting part of them is that they are stored on the X server, meaning you could have applications from different client machines all match the theme on your display, wherever that display happened to be.

                                                                                  But yeah, like the author said, it just didn’t really work out in practice since they’re just not terribly easy to use…

                                                                                  1. 2

                                                                                    I think this was really what killed them as a concept. All of my documents are stored on the X client (or a file server accessed via the X client), my settings on the server. If I move between X servers, I can access all of my documents but not my settings. Places making good use of the remote X11 functionality would have a load of dumb X terminals, which ran X servers and nothing else, and a small number of beefy machines (connected to a file server if the small number was more than one) running applications. You’d go to a machine, type your password into XDM, and get an X session and run applications. These often didn’t have persistent storage, so if you changed X resource settings then they wouldn’t be preserved, and even if they were then they wouldn’t be propagated to the next terminal that you used. Well, they were, but only because xrdb synchronised them with a file in the user’s home directory.

                                                                                    Whether you’re using this kind of system or a more desktop-like model where the X clients and server and the filesystem all run on the same machine, your X applications all have access to a filesystem already. The filesystem is persistent and accessible to the application irrespective of the X server.

                                                                                    The only model in which storing settings on the X server made sense was one where you ran X11 applications from multiple computers that didn’t have access to your home directory. I never saw a deployment like that in the wild because a program that couldn’t access your home directory generally wasn’t very useful. The closest I came to this was running X11 apps on different servers that both had access to my home directory over NFS or forwarding a single app from another machine (which had access to a different one of my home directories).

                                                                                    X had a few attempts at trying to provide its own filesystem abstraction and none of them really made sense. I am sad that MAS never took off though. Remote X11 worked fine for the display, but X apps would generally just open /dev/dsp for audio and so remote audio didn’t work. 20 years later, PipeWire seems to be basically solving the same set of problems.

                                                                                    1. 1

                                                                                      To clarify something: X applications (programs) looked up X resources from the server, but the server was not generally where you permanently stored them. Instead, you stored X resources in a flat file (often ~/.Xresources) and then your session setup scripts ran a command (xrdb) to load your resources into the X server. If you changed or set resources only in the X server, they’d be lost when you logged out or otherwise restarted the X server.

                                                                                      Of course this two step approach had problems, because if you changed .Xresources you had to remember to reload it into the server before it had any chance of taking effect.

                                                                                      1. 1

                                                                                        Right, the problem is that going via the X server is useful only for applications that can connect to your X server, but can’t read ~/.Xresources. That’s a vanishingly small use case.

                                                                                        The OpenStep equivalent, NSUserDefaults, worked over the distributed objects mechanism and so could also be made network transparent if required and provided rich data types (basically the same set of things that you can store in JSON, though with proper integers) and added layering (system-wide defaults, overridden by user-wide settings, and then current-session ones) and a lightweight concurrency mechanism (you got a notification when someone else modified settings, though concurrent updates to the same key-value pair could be lost). That provided some real value on top of a local filesystem file (not least for things like the current locale, where every application receives a notification if the user changes it, even if they’ve changed it only for the current session and not persisted the change).

                                                                                  1. 25

                                                                                    I laughed at the headline since if you look at my open source libraries you’ll find a few files bigger than that: simpledisplay is 21.8k, nanovega.d is 15.1k, minigui.d is 14.4k, cgi is 11.1k…

                                                                                    I find larger files easier to work with than collections of smaller files, all other things equal, and I like having complete units of functionality.

But “all other things equal” does a lot of work there. The article’s description (“It looked like the entire file would execute through from top to bottom”) is what is really scary: not the size of the file, but that it appears to be one large single function. See, if I open simpledisplay.d, I’m not looking at the whole file. I’m just interested in SimpleWindow.close() or EventLoop.impl. The rest of the file isn’t terribly important; I open it, jump straight to the function I want to work on, and do what needs to be done. Then the individual function is pretty ordinary.

                                                                                    So I push back against file size by itself as mattering - a file is just a container. You actually reason about functions or classes or whatever so that’s what you want to keep easy to follow (and note that smaller is not necessarily easier to follow, I also prefer long, simple code to short, clever code, and I’d prefer direct local use to indirect things through multiple layers).

                                                                                    1. 9

                                                                                      I find larger files easier to work with than collections of smaller files, all other things equal

                                                                                      That’s interesting, I’m very much the opposite: I try to keep my source files under 500 lines, and each file has a specific purpose like one class, or a set of constants used in one module. Makes it a lot easier to jump to a specific part of the code, by just clicking on a filename or tab. And when I search I’m limited to the appropriate context, like the specific class I’m working on.

                                                                                      What is it you prefer about single big files?

                                                                                      1. 8

                                                                                        I’m also team big files. I hate how many subjective decisions you have to make when you split things across files “Hey we split up foo and bar, now where do we put this function that was used by both foo and bar? Into foo.js, bar.js, or helpers.js?” or “Hey do we group models together and controllers together, or do we group by feature?”.

Whatever organizational decisions you make, some of them will ultimately prove unsatisfying as your codebase evolves, and you’ll face the constant temptation to spend a bunch of energy reorganizing. But reorganizing your code across files doesn’t actually make it more modular or more adaptable, or improve it in any way other than maybe making it a little more navigable for somebody whose muscle memory for navigating codebases revolves around filenames.

I default to large files because that requires the least energy to maintain.

                                                                                        1. 7

                                                                                          I’ve had this idea for a while - why can’t a filesystem or text editor support “views,” or different groupings of the same underlying files. Example: view code by business function, or view code by technical concern e.g. “show me all web controllers.”

                                                                                          1. 2

                                                                                            It seems what you really want is to store all “source code” in a database instead.

When you think about it, how can it be possible that storing program code in a bunch of plain text files (ASCII, UTF-8) is in any way optimal for comprehension and modification? Text files are very much a least-common-denominator representation. We continue to use them because the ecosystem of operating systems, version control, text editors, and the like allows us to use and interchange this information. So there is a very good reason why they persist to this day.

                                                                                            But I can imagine some kind of wacky Matrix-y (in the William Gibson, Vernor Vinge sense) 3D representation of programs, which makes great use of colored arrows, shapes and more to represent program operations and flow.

                                                                                            Do I have the slightest idea of where to start making such a programming “language”, and what exactly it looks like? No, I do not. Until we have better 3D systems (something akin to a floating hologram in front of me), that allows me to easily grab and manipulate objects, I don’t think I’d want to use such a system anyway. But this is the direction I think things will go in… eventually. That will likely take a long time.

                                                                                            Also, do we want to design a programming system that is optimized for human comprehension? Or something that is optimized for AI to use?

                                                                                            1. 2

                                                                                              Also, do we want to design a programming system that is optimized for human comprehension? Or something that is optimized for AI to use?

Well, my vote is always for humans. I have no stock in AI-produced code ever being a good thing.

                                                                                              It seems what you really want is to store all “source code” in a database instead.

                                                                                              Actually now that I think about it, NDepend is pretty similar to this. Warning: that page autoplays a video with sound.

                                                                                              1. 2

                                                                                                It seems what you really want is to store all “source code” in a database instead.

                                                                                                So…Smalltalk? Lucid Common Lisp?

                                                                                                1. 1

                                                                                                  Those are heading in the right direction, but I’m thinking about something more comprehensive. The link from /u/amw-zero about NDepend is very interesting.

                                                                                            2. 3

                                                                                              I’m not sure all or even most of these decisions are subjective - I think that ideally one would want to reduce coupling throughout, and to limit the visibility of implementation details.

                                                                                              I tend to think in terms of build system DAGs - it profoundly annoys me when everything needs to get rebuilt for no actual good reason. Which is another reason to prefer smaller files, I think - less rebuilding of code.

                                                                                              1. 3

I agree. For one, I don’t think the decisions are subjective or objective in themselves. But the fact that I do have to spend time thinking about code before I start is a clear benefit to me, not a disadvantage. Yes, sometimes it’s annoying, but mostly it pays off in, as you say, simpler code, clearer boundaries, less stuff tied into one big implementation.

                                                                                                1. 2

                                                                                                  Splitting things into classes or modules can limit coupling/visibility. Some programming languages enforce one-module-per-file, but many (golang, ruby) don’t and in these languages there is no encapsulation benefit to putting things in different files.

                                                                                                  Optimizing the build is a good point, though. If that’s the criterion for file divisions, and not subjective developer feelings about what belongs where, that eliminates the problem for me…

                                                                                                2. 1

                                                                                                  At some point in my career, I realized I had crossed a threshold where the writing & testing of the code was no longer the hard part: the hard part is organizing the code so it “makes sense” for what it’s doing, and I’ll be able to figure it out in 5 years when I come back to it.

                                                                                                  My mental line is 500 lines, too. Once I hit that, and I don’t immediately know how to break it up, it’s usually a sign that I need to take a hike and think about the structure at a higher level. Most of the time, this mental refactoring unlocks a lot of future features, and the invested “thinking” time pays itself off multiple times over.

                                                                                                  (None of this is a new insight, btw. I think it’s been written about since before the transistor.)

                                                                                                3. 2

                                                                                                  What is it you prefer about single big files?

Basically everything. It is easier to find what I’m looking for, since I can just search inside the file rather than having to find the right file first. Perhaps if I used an IDE and project files I’d feel differently, but I don’t. And even if I did, sometimes I browse projects online, and there you often click links that lead to single files to view in the web browser. So it is easier to work with here and easier to browse online.

It is also easier for users (which includes me doing a quick ad-hoc build or test run): I can say “download cgi.d and list it on your build command” and it just works for them, with no complication in mirroring the directory and figuring out which list of files belongs in the build, etc.

D’s encapsulation is set at the file level too, so I can actually define smaller, better-defined API surfaces this way than with a variety of files, since the implementations are all wrapped up. I’m not tempted to expose package protection or similar to share bits that need to be shared across files (or, worse yet, to mark something public because I need it from a different file in my own project, instead of committing to supporting it long term for the users, which is what I think public SHOULD mean).

                                                                                                  When working with other people’s projects, I find the file organization almost never helps. I can never guess where something is actually found from the directory/file organization and just have to grep -R it.

                                                                                                  So I believe in the old gmail slogan: don’t organize, search! And then you start working on a more abstract level - classes, functions, etc. - instead of files anyway, so just take one more complication out of the way.

                                                                                                  1. 0

                                                                                                    Perhaps if I used an IDE and project files I’d feel differently, but I don’t.

                                                                                                    It’s not just in IDEs. Any good programming editor ought to offer a file hierarchy view and support multi-file search. I can’t even imagine doing nontrivial coding without that, just like I can’t imagine not having Undo.

                                                                                                    I don’t mean to sound condescending, but it really is worth it to look into more powerful tools.

                                                                                                    1. 1

                                                                                                      What possible benefit could I get from merging my file browser into my editor? I have a file browser, I have an editor, they know how to talk to each other.

                                                                                                      1. 0

                                                                                                        In the magic land where your file browser works as well with your editor as in an IDE, I’m sure you are correct. 🦄

                                                                                                  2. 2

I generally like the large file as well, if I don’t have really good navigation in my IDE, because it puts that much more within easy reach of my editor’s search function. And since I spend a lot more time looking for and reading code than jumping confidently to known places in a codebase, that biases me toward longer files.

                                                                                                    1. 4

                                                                                                      Does your IDE not have good ‘search this entire project’ support?

                                                                                                      1. 1

                                                                                                        Yes, thus my statement “when I have good IDE navigation support.” The logical conclusion of that is Smalltalk where there are no files. But if I am sitting in a terminal with cli tools and vi, large files are easier since I don’t have to keep Ctrl+z’ing to run grep and fg’ing again.

                                                                                                    2. 2

                                                                                                      There’s no right / wrong answer here, because obviously people’s individual brains are different. But lots of small files absolutely kills my productivity, because working on large pieces of functionality ends up requiring changing 5-10 files. There’s no good way to look at 10 files simultaneously, I don’t care how large your monitor is. So the benefit of larger files is that all of the code that I need to understand something is localized within something that’s on my screen right now, and it’s pretty easy to use something like the sidebar in Sublime or VSCode to scroll quickly to the part of the file that you need. Or text search for a specific function name to jump right to it. The benefit is not really needing to do anything to find the code that I need.


                                                                                                      Makes it a lot easier to jump to a specific part of the code, by just clicking on a filename or tab

                                                                                                      This isn’t unique to the small-file approach, if your IDE / editor has “jump to definition” support, it works just as well within a file. Like I said, while this can be subjective and you may prefer one way or the other, I find this often with people who prefer small files - there’s no actual tangible reason or benefit, it just feels more organized (to proponents) when code is factored into small pieces.

                                                                                                      It may be a limitation of our tools, but I find too many files to be a cognitive cost. After a while I have too many editor tabs open, I can’t get to the part of code that I wanted without backtracking, etc. And, what is the downside of larger files? “Large things are bad” is not an axiom. “Large files are bad because they are large” is circular logic.

                                                                                                      All that being said, I don’t care all that much. I can navigate around most codebases whatever the structure.

                                                                                                      1. 2

                                                                                                        There’s no good way to look at 10 files simultaneously

                                                                                                        There’s also no good way to look at 11k lines simultaneously :)

                                                                                                    3. 7

                                                                                                      The problem is not, in my experience, large files, but the lack of separation of concerns. Large file sizes can be a symptom of a lack of separation of concerns but they’re the symptom, not the problem. I started work on clang in 2008 because I was working on GNUstep and wanted to use the shiny new Objective-C features that Apple had shipped. Apple had their own fork of GCC and no one merged their changes into the main branch[1] and so Objective-C on non-Apple platforms had a NeXT-era feature set.

                                                                                                      I looked at GCC to see how much effort it would be to update it. All of the Objective-C code in GCC was contained in a single file, objc-act.c, which was around 10K lines. It didn’t have any clear separation between the compiler stages and it was littered with if (next_runtime) everywhere. Some of the new features needed a new runtime, so all of those would need auditing and extending to provide a different implementation and become exciting switch cases.

                                                                                                      At the time, clang had mostly working codegen for C (it miscompiled printf implementations, but a lot of C code worked). It also had parsing and semantic analysis support for Objective-C, but no code generation. I started by adding an abstraction layer separating the language-specific parts from the runtime-specific parts. That’s still there: there is an abstract CGObjCRuntime class with a bunch of different subclasses (Apple has two significantly different runtimes and a bunch of different variants of the newer one, so has made a lot of use of this abstraction). For a while, clang had better support for Objective-C on non-Apple platforms than on macOS.

                                                                                                      Clang now has a bunch of source files that are larger than objc-act.c, but they’re cleanly layered. Parsing, semantic analysis, and IR generation are all in separate files. Objective-C runtime-agnostic IR generation is mostly in one file, Apple runtimes in another, non-Apple runtimes in a third. If you want to navigate the codebase and modify the Objective-C support, it’s easy to find the right place.

                                                                                                      [1] The FSF used to point to Objective-C as a big win for the GPL. I consider it a great example of failure. NeXT was forced to open source their GCC changes but not their runtime, which made the changes useless in isolation. Worse, the NeXT code was truly awful. If NeXT had offered to contribute it to GCC, I strongly suspect that it would have been rejected, but because the FSF had made such a big deal about forcing NeXT to release it, it was merged.

                                                                                                      1. 4
                                                                                                        andy@ark ~/d/zig (master)> wc -l (find src/ -name '*.zig') | sort -nr
                                                                                                         186923 total
                                                                                                          23226 src/Sema.zig
                                                                                                          11146 src/AstGen.zig
                                                                                                           7805 src/codegen/llvm.zig
                                                                                                           6745 src/clang_options_data.zig
                                                                                                           6716 src/link/MachO.zig
                                                                                                           6586 src/translate_c.zig
                                                                                                           6359 src/arch/x86_64/CodeGen.zig
                                                                                                           6187 src/type.zig
                                                                                                           5554 src/Module.zig
                                                                                                           5247 src/value.zig
                                                                                                           5181 src/Compilation.zig
                                                                                                           5171 src/arch/arm/CodeGen.zig
                                                                                                           5134 src/main.zig


                                                                                                        1. 1

                                                                                                          Maybe some of those source files are too big and ought to be broken up into smaller subcomponents?

                                                                                                      1. 6

My favorite is “Something went wrong. Please try again later”, which is the standard quality of error message in web applications, even from software behemoths like MSFT. It’s like error handling is too expensive even for them.

                                                                                                        1. 6

                                                                                                          This is addressed in the article

                                                                                                          Before we start, let me clarify that this is about error messages created by library or framework code, for instance in form of an exception message, or in form of a message written to some log file. This means the consumers of these error messages will typically be either other software developers (encountering errors raised by 3rd party dependencies during application development), or ops folks (encountering errors while running an application).

                                                                                                          That’s in contrast to user-facing error messages, for which other guidance and rules (in particular in regards to security concerns) should be applied. For instance, you typically should not expose any implementation details in a user-facing message, whereas that’s not that much of a concern — or on the contrary, it can even be desirable — for the kind of error messages discussed here.

                                                                                                          1. 3

Even if you buy into the security concerns, you can make a message that isn’t completely useless. At least categorize it by whether or not the user did something wrong, so they have a clue as to whether they can do something about it, or when “later” might be.
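The categorization idea can be sketched as a small mapping from error classes to user-facing messages. The class names here are hypothetical, not from any real framework:

```python
# Sketch: bucket errors by who can act on them, instead of one generic
# "something went wrong" for everything. Class names are illustrative only.
class UserActionable(Exception):
    """The user did something fixable (expired session, invalid input)."""

class TransientFault(Exception):
    """Retrying later may genuinely help (upstream outage, timeout)."""

def user_message(exc: Exception) -> str:
    if isinstance(exc, UserActionable):
        # The user can fix this; tell them what and how.
        return f"{exc} Please try again."
    if isinstance(exc, TransientFault):
        # "Later" is honest here, so say so.
        return "We're having temporary trouble. Retrying in a few minutes may help."
    # Anything else is a bug in the app; don't pretend retrying will fix it.
    return "An internal error occurred. Retrying is unlikely to help; please report it."

print(user_message(UserActionable("Your search results expired.")))
```

Even this coarse split tells the user whether “try again” is advice or a brush-off.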

                                                                                                            1. 3

                                                                                                              The “something went wrong” class usually means an exception or similar, so a bug in the app. Later is after the devs get to that point in their exception tracker and fix it.

                                                                                                              1. 2

                                                                                                                so a bug in the app

Except in cases when it’s not a bug in the app.

                                                                                                                Later is after the devs get to that point in their exception tracker

Except when it means “never”, because the user is one of 100 people across the whole world triggering the bug, and it’s not economically viable to allocate the time to fix it (too few users affected).

                                                                                                                1. 5

                                                                                                                  In what case would it not be a bug? Showing such an error in a non-bug case would itself be a bug

                                                                                                                  1. 4
                                                                                                                    • Connection problems due to user’s ISP,
                                                                                                                    • User uses a non-supported browser,
                                                                                                                    • User uses an old version of the browser that doesn’t support some feature,
                                                                                                                    • User uses a browser plugin that introduces some conflict with the app,
                                                                                                                    • User uses an unsupported OS,
                                                                                                                    • Some hardware driver in user’s OS is buggy,
                                                                                                                    • I mean should I go on?
                                                                                                                    1. 7

                                                                                                                      If any of those result in a generic error message from your app, then your app has a bug

                                                                                                                      1. 3

                                                                                                                        OK, here’s a real world thing that just came up. I was on American Airlines’ website last night checking the status of a flight for someone I had to pick up at the airport. It said they were 30 mins ahead of schedule so I shut the laptop and drove up there early. Indeed, the airplane was touching down at about the time I arrived.

                                                                                                                        Today, just now, I opened that laptop back up and decided to refresh that flight status page, just curious to see what their official arrival time was. I was greeted with: “Flight Status: Something Went Wrong. Our system is having trouble. Please try again or come back later.”

                                                                                                                        OH GOD SOMETHING WENT WRONG WITH THE FLIGHT!!!!

                                                                                                                        nah just the stupid website threw up a generic error message because i had the gall to hit “refresh” instead of going back to the homepage and rerunning the search.

But think how ridiculous this error message is: what went wrong? (Apparently the search results expired, despite the flight number and date still being as unambiguous as ever. But meh, that’s the design they chose and it isn’t necessarily wrong; they just could have told me.)

                                                                                                                        Try what again? No matter how many times I hit “refresh” after it expires, it will give this same message. (Refreshing before it expires though will, in fact, update the information; that’s what I did last night.) So that doesn’t work.

Come back later? When? This particular flight is pretty rarely on time - half the time they’re early, half the time they’re late, sometimes very late or even cancelled. (My local airport is very small.) This information can be a bit time-sensitive, so it would be nice to know if this is anticipated to be temporary or longer term (if the website isn’t expected to come back, I might call somebody instead, or even just go to the airport early anyway just in case and check the monitors there).

                                                                                                                        There’s a condescending attitude among some programmers that users are too ignorant to understand an error message anyway, so no point even trying. OK, maybe not everyone can find value in a stack trace, but you should still tell them something. Maybe not everyone will understand “Your search results expired” but even that alone at least gives a clue as to what might help: if the search expired, trying a new search might come to at least some of the user’s minds.

                                                                                                                        Or better yet you can say “Your search results expired, please start over.” (Which will do if you are looking at prices!)

                                                                                                                        Or better yet you can just automatically refresh the search results and not bother the user.

Now, sure, you might say this message is a bug. Someone should open a jira ticket to enhance the user flow with expired search results. Whatever, that’s their manager’s problem. But I, as a user right now, would much rather see even “ERR_EXPIRED_SEARCH” than “Flight Status: Something went wrong.” And like I hinted at earlier, if you really do think users are unable to read error messages, it’s probably not a good idea to imply something went wrong with their sister’s flight when they’re nervously refreshing the page.

                                                                                                        1. 8

                                                                                                          Feels like the real takeaway here was that allowing pages to open new windows unprompted was a terrible mistake from the beginning.

                                                                                                          1. 4

                                                                                                            Does it need to be a new browser window? I thought it was done by painting a window using Javascript?

                                                                                                            1. 6

Which is right, but if browsers weren’t allowed to open a new window, this deception would seem alarming instead of natural behaviour.

                                                                                                              1. 3

This pretends to open a new window, but the other comment is still fair. Consider if websites could never open new windows - there would always be two zones that don’t overlap: the web content zone and the browser frame zone. Users could (in theory) be trained not to trust anything in the web content zone since it might be fake.

                                                                                                                But when an overlapped window pops up, that line gets blurred. Something might be surrounded by a browser frame, yet itself legitimately be another trusted browser frame (the overlapping popup window). So it erodes that strict “don’t trust things inside this box*” rule.

A while ago, there was a way to make a popup window with no extra browser frame - no url box, etc. That feature was removed for exactly this reason: without a browser frame, the separation of trusted browser vs untrusted content was impossible to determine. The OP’s demo shows it is still difficult to determine.

                                                                                                                • unless you put it there yourself, overlapping windows are still a nice feature but if you put it there yourself vs a popup from the browser you’re more likely to know what it is.

The good news is I’m pretty sure all popup windows still get a slot on the OS taskbar… but with recent Windows taskbars being transformed into useless application groupings instead of actual representations of open windows, that’s not much help to anyone except the eagle-eyed, check-and-double-check-everything user.

                                                                                                            1. 1

How about opening Terminal (on macOS) and typing “bc”? It’s like the google calculator but without internet. You probably want to specify “-l” and also bump “scale=25”.

                                                                                                              FYI: it’s the frontend for “dc” (desk calculator).

                                                                                                              1. 1

The nicest thing about the google calculator to me is that it has a bunch of constants and unit conversions built in - quite convenient.

                                                                                                                1. 2

That’s what the cli program “units” is for ;-) Granted, that one is usually not in the default install, although apparently it is on macOS?

                                                                                                              1. 8

                                                                                                                I take a bit of offense at the article contrasting wine with “native” applications. Let me ask you: what makes wine any less native than gtk or qt? Both are actually just one step above the low level foundations. You* don’t actually call XCreateWindow, you call QWindow or gtk_window or SDL_whatever or wine CreateWindow each of which call XCreateWindow.

                                                                                                                You often don’t even call socket() and read(). Nah, there’s QSocket and gtk_socket_new and SDL_net. So why is winsock any different?

I get that reading the .exe file format instead of the elf file format feels different. But… is that any different than a.out vs elf? It is still running the same machine code after doing the same kind of dynamic linking, etc. (yes, I know it isn’t identical dynamic linking, but is it any less native than the a.out-to-elf differences were?)

                                                                                                                I just tried something too:

import core.sys.windows.windows; // for MessageBoxA

string hello = "hello\n";
void main() {
        auto szptr = hello.ptr;
        // linux native syscall
        asm {
                mov EAX, 4; // write
                mov EBX, 2; // stderr
                mov ECX, szptr;
                mov EDX, 6; // length of "hello\n"
                int 0x80;
        }
        // Windows api call
        MessageBoxA(null, "omg", "i wrote to stderr", 0);
}

Compiled for the Windows target and ran in Wine on Linux. Guess what happened? It worked. Not really a surprise - the machine code is still running on Linux! The Windows API is just another toolkit library in a Linux application, so not really a surprise that you can, in fact, mix and match if you can rig up the build. Wine Is Not an Emulator. Wine is a native environment.

                                                                                                                (btw i used 32 bit just cuz i remember the 32 bit syscall asm off the top of my head, but im sure it works in 64 bit too. and if you can do this, you can do other pure linux things too)

                                                                                                                • As the author of an independent X toolkit, I do very much call XCreateWindow and it annoys me when people presume people like me don’t exist. But let’s be real, we’re very much the minority.
                                                                                                                1. 3

                                                                                                                  Until very recently the mismatch in locking primitives between Windows and Linux had very real compatibility and performance impacts for programs relying on wine that native programs weren’t subject to.

                                                                                                                  Native does have a specific meaning here that also applies. It is tied to the origin of a program, similar to the use for plant species. A linux-native program began life as a program intended to be run on linux. Your argument seems akin to saying an invasive species of plant cannot be considered non-native if it thrives in the new ecosystem.

                                                                                                                1. 8

                                                                                                                  This post would be much better if it didn’t so explicitly and emphatically call out “exactly” when it isn’t remotely close to exact. The real thing still looks much better in many obvious ways.

                                                                                                                  But I do like the win95 look, my own widget toolkit defaults to something very similar to it and i find the windows 95 gtk themes to be among the least bad of the options there. That said, I actually like a lot about its successors’ looks too. Gradients <3

                                                                                                                  1. 10

                                                                                                                    For server-to-client comms, SSE really can’t be beaten. I’ve used them in past projects and really appreciated their simple and straightforward nature.

                                                                                                                    1. 1

Did you take any measures not to get bitten by the 6-SSE-connections-per-browser-per-domain limit? Is there a simple trick that would let me not worry about this at all? I don’t like the thought of not being able to properly support more than 6 tabs pointing at my application.

                                                                                                                      1. 7

                                                                                                                        Looking into it, that seems to be an HTTP/1 limit. Apparently on HTTP/2, it’s 100 by default.

                                                                                                                        1. 1

                                                                                                                          Nice! thanks for mentioning this, I hadn’t noticed it. Then I see no down sides to using SSE for the typical live update needs of a web application.

                                                                                                                        2. 3

                                                                                                                          Easiest thing is probably to just distribute the connections across random subdomains if you’re worried about it.
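A minimal sketch of that idea, assuming a wildcard DNS setup - the subdomain naming scheme, shard count, and function name here are all made up for illustration:

```javascript
// Hypothetical: spread EventSource connections across N subdomains
// (sse0.example.com ... sse7.example.com) so the browser's per-host
// HTTP/1.1 connection cap applies per shard instead of to one host.
function sseUrl(baseDomain, shards) {
  const shard = Math.floor(Math.random() * shards);
  return `https://sse${shard}.${baseDomain}/events`;
}

// in the browser: new EventSource(sseUrl("example.com", 8))
```

All the shards need to resolve to the same backend (a wildcard DNS record and a matching certificate), and the stream endpoint has to allow the main origin via CORS.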

                                                                                                                          1. 1

                                                                                                                            Afraid not — we were not running in a browser context. Sorry that I can’t help you out🙁

                                                                                                                        1. 3

                                                                                                                          I’ve been a fan of SSE since I first saw them several years ago. They’re simple to use, integrate well with websites, and degrade pretty gracefully. The main worry is the connection limit, but you can work around that with subdomains (or even emulation in websockets) if it actually hits you - which there’s a very good chance it won’t.

I just recently made a site using SSE to support a multiplayer game. It is an online bingo thing - so not the same kind of real-time input, but I still wanted it to update quickly. The players’ clicks send an ajax POST, which saves to a database and sends an SSE message to the others so they see it. Since the state lives in the server database, refreshing also gets you the new data - which means if a connection is lost, the client can simply refresh and everything works again. I used this a lot to simplify the code: some things are just traditional html forms that send the event, and everyone simply refreshes! It works even better than I expected it would. Both client and server code are so simple.
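A lot of that simplicity comes from the SSE wire format itself: each message on the long-lived `text/event-stream` response is just an optional `event:` line, one or more `data:` lines, and a terminating blank line. A sketch of the framing (the function name is mine):

```javascript
// Frame one server-sent event for a text/event-stream response.
// Multi-line payloads become multiple "data:" lines; the trailing
// blank line is what terminates the event on the wire.
function sseEvent(data, eventName) {
  const lines = [];
  if (eventName) lines.push(`event: ${eventName}`);
  for (const chunk of String(data).split("\n")) {
    lines.push(`data: ${chunk}`);
  }
  return lines.join("\n") + "\n\n";
}
```

On the client, `new EventSource(url)` plus an `addEventListener` for each named event type is all it takes to consume these.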