Threads for mattgreenrocks

  1. 25

    TLDR: Bytes are bytes, but the linked post is actually about latency, not data integrity.

    I’m disappointed by the arrogance of users on this forum. I’ve never had great luck being 100% confident about the behavior of code without running it, but in this case, reading the actual code posted will show you that a bad memcpy could potentially have caused a problem with audio quality. The user ran the code on their system and noticed a difference in audio quality. Can you be confident that this code, written by someone who may not be well-versed in C++ and the Windows APIs, wouldn’t produce different results based on the memcpy implementation used in a specific line?

    The code:

    Let’s look at a cleaned-up section of the code where the memcpy takes place:

    // Read audio data in wav format into memory and store it at `sound_buffer`
    BYTE *sound_buffer = new BYTE [sizeof(BYTE) * nBytesInFile];
    
    hr = mmioRead(hFile, (HPSTR)sound_buffer, nBytesInFile);
    mmioClose(hFile, 0);
    
    // Send audio data to sound card/kernel
    do {
        WaitForSingleObject(hNeedDataEvent, INFINITE);
        pAudioRenderClient->ReleaseBuffer(nFramesInBuffer, 0);
        pAudioRenderClient->GetBuffer(nFramesInBuffer, &pData);
        A_memcpy (pData, sound_buffer + nBytesToSkip, nBytesThisPass);
        nBytesToSkip += nBytesThisPass;
    } while (--nBuffersPlayed);
    

    Lower down in the code we also find the following lines:

    WaitForSingleObject(hNeedDataEvent, INFINITE);
    
    hr = pAudioRenderClient->ReleaseBuffer(nFramesInBuffer, 0);
    
    WaitForSingleObject(hNeedDataEvent, INFINITE);
    
    hr = pAudioRenderClient->GetBuffer(nFramesInBuffer, &pData);
    nFramesThisPass = nFramesInFile - nBuffersinFile*nFramesInBuffer;
    nBytesThisPass = nFramesThisPass * pWfx->nBlockAlign;
    
    A_memcpy (pData, sound_buffer + nBytesToSkip, nBytesThisPass);
    
    if (nFramesThisPass < nFramesInBuffer) {
      UINT32 nBytesToZero = (nFramesInBuffer * pWfx->nBlockAlign) - nBytesThisPass;
      ZeroMemory(pData + nBytesThisPass, nBytesToZero);
    }
    

    As far as I can tell, these lines are just extraneous fluff.

    Sound card buffers and the need for low latency

    Now let’s take a moment to understand what is happening here. The programmer is using the Windows IAudioRenderClient interface, which lets the user send little snippets of sound to the sound card by copying them into a buffer provided by the Windows kernel. The sound card in question (we don’t know the specifications of his system) probably has two buffers:

    1. The source buffer: which it reads in from a serial connection.

    2. The sink: a loopbuffer which it reads continuously into a digital to analog conversion mechanism. This loop buffer is fed by firmware from the source buffer.

    If new messages to the source buffer don’t come in fast enough, the loop buffer will restart its loop. You’ll hear a clicking noise every time the loop restarts before a new source buffer comes in. You have almost certainly heard this clicking at some point; it is a rather hard problem to solve while still allowing low-latency sound for video games or musical instruments. https://www.youtube.com/watch?v=5m8GoJeqras
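    To make the restart concrete, here is a toy model of that loop buffer (entirely hypothetical; real sound card firmware is more complicated than this):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy model of the sound card's loop buffer. The DAC pulls one sample
// per tick; if no fresh data has arrived in time, it restarts the loop
// and replays stale samples -- the audible click described above.
struct LoopBuffer {
    std::vector<int> samples;
    std::size_t readPos = 0;

    // Firmware copies a fresh chunk in from the source buffer.
    void refill(const std::vector<int>& chunk) {
        samples = chunk;
        readPos = 0;
    }

    // DAC read. Sets *underrun when the loop restarts on stale data.
    int read(bool* underrun) {
        *underrun = (readPos >= samples.size());
        if (*underrun) readPos = 0;  // loop restart: the glitch
        return samples[readPos++];
    }
};
```

    With a two-sample chunk, the third read wraps around and replays the first sample again; that wrap is the click you hear on an underrun.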

    Going back to the code:

    So the loop looks like this:

    1. Wait for hNeedDataEvent, which is triggered only after the Windows kernel audio buffer is exhausted.
    2. Release the previously filled buffer, which hands it to the Windows kernel audio engine and frees the memory from the old message https://learn.microsoft.com/en-us/windows/win32/api/audioclient/nf-audioclient-iaudiorenderclient-releasebuffer
    3. Request a new section of the Windows kernel audio buffer for writing https://learn.microsoft.com/en-us/windows/win32/api/audioclient/nf-audioclient-iaudiorenderclient-getbuffer
    4. Write to it using memcpy.

    Now, looking at the Windows docs, the programmer is doing this wrong:

    “The client is responsible for writing a sufficient amount of data to the buffer to prevent glitches from occurring in the audio stream. For more information about buffering requirements, see IAudioClient::Initialize.

    After obtaining a data packet by calling GetBuffer, the client fills the packet with rendering data and issues the packet to the audio engine by calling the IAudioRenderClient::ReleaseBuffer method.”

    This is not entirely their fault; the docs for ReleaseBuffer are also misleading:

    The ReleaseBuffer method releases the buffer space acquired in the previous call to the IAudioRenderClient::GetBuffer method.
    

    https://learn.microsoft.com/en-us/windows/win32/api/audioclient/nf-audioclient-iaudiorenderclient-releasebuffer

    Only later in the docs do we see:

    “ Clients should avoid excessive delays between the GetBuffer call that acquires a buffer and the ReleaseBuffer call that releases the buffer. The implementation of the audio engine assumes that the GetBuffer call and the corresponding ReleaseBuffer call occur within the same buffer-processing period. Clients that delay releasing a buffer for more than one period risk losing sample data. “

    I think the code should look like this:

    do {
        pAudioRenderClient->GetBuffer(nFramesInBuffer, &pData);
        A_memcpy (pData, sound_buffer + nBytesToSkip, nBytesThisPass);
        pAudioRenderClient->ReleaseBuffer(nFramesInBuffer, 0);
        nBytesToSkip += nBytesThisPass;
    } while (--nBuffersPlayed);
    

    I’m about 30% sure that the call to WaitForSingleObject(hNeedDataEvent, INFINITE); is totally extraneous.

    Now there is also another problem with the Windows API: it does not match buffer sizes between the kernel buffer and the sound card buffer. This misalignment could mean that the buffer on the sound card is larger than nBytesThisPass, so sending one source buffer to the sound card requires going through the loop multiple times. All of a sudden we’re not doing just one memcpy but several, and since we’ve already exhausted the audio data sent to the card by waiting for hNeedDataEvent, this has to happen extremely fast to avoid glitches. Audio is being sampled at 48 kHz, or once every ~0.02 ms. If these memcopies and memory allocations and everything else take longer than about 0.01 ms (since we have to account for data transfer to the sound card as well), we will get glitching. It seems quite likely that even a tiny amount of extra time in the memcpy call could make the glitching worse.
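    To put numbers on that deadline, here is a back-of-envelope sketch (the 48 kHz figure is from the thread; the 480-frame buffer period is just a common WASAPI default that I am assuming for illustration):

```cpp
#include <cassert>

// Refill deadline: `frames` frames at `sampleRate` Hz must be delivered
// within frames / sampleRate seconds or the stream underruns.
// Returns the budget in microseconds.
constexpr double budget_us(double sampleRate, double frames) {
    return frames / sampleRate * 1e6;
}
```

    One sample at 48 kHz is about 20.8 µs (the “0.02 ms” above); a 480-frame period leaves roughly a 10 ms refill deadline, which only gets tight because the loop burns most of it waiting on hNeedDataEvent before copying anything.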

    Edit: I was wondering myself how two memcpy implementations could have a significant speed difference, but then I noticed his further comments:

    “Here are the optimisation settings for x64 Intel processor, /MT, /fp:fast, and the /O2 /Ob2 /Oi /Ot /Oy switches made the difference

    c/c++ section

    /Zi /nologo /W3 /WX- /O2 /Ob2 /Oi /Ot /Oy /GL /D "WIN32" /D "NDEBUG" /D "_CONSOLE" /Gm- /EHsc /MT /GS /fp:fast /Zc:wchar_t /Zc:forScope /Fp"x64\Release\playerextreme.pch" /Fa"x64\Release" /Fo"x64\Release" /Fd"x64\Release\vc100.pdb" /Gd /errorReport:queue

    -DUNICODE -D_UNICODE /Og /favor:INTEL64

    linker section

    /OUT:"C:\vs2010\playerextreme - mem - one file\source\x64\Release\playerextreme.exe" /INCREMENTAL /NOLOGO "libad64.lib" "libacof64.lib" "libacof64o.lib" "kernel32.lib" "user32.lib" "gdi32.lib" "winspool.lib" "comdlg32.lib" "advapi32.lib" "shell32.lib" "ole32.lib" "oleaut32.lib" "uuid.lib" "odbc32.lib" "odbccp32.lib" /MANIFEST /ManifestFile:"x64\Release\playerextreme.exe.intermediate.manifest" /ALLOWISOLATION /MANIFESTUAC:"level='asInvoker' uiAccess='false'" /DEBUG /PDB:"C:\vs2010\playerextreme - mem - one file\source\x64\Release\playerextreme.pdb" /SUBSYSTEM:CONSOLE /OPT:REF /OPT:ICF /PGD:"C:\vs2010\playerextreme - mem - one file\source\x64\Release\playerextreme.pgd" /LTCG /TLBID:1 /DYNAMICBASE /NXCOMPAT /MACHINE:X64 /ERRORREPORT:QUEUE

    I also made it a single thread which again made a small improvement, also fixed the sizes of buffers etc so that numbers were used in the code. This means that different code would need to be used for different sampling rates. Also worked out a way to play gapless (next release) where the wav data is just appended to the buffer. Means that it won’t be able to play different sample rates gapless, but it suits my purposes.”

    Basically, I think he was moving from a Win32 memcpy (copying 32 bits per loop iteration) to a 64-bit C++ memcpy copying 128-bit (?) vectors per iteration, if I recall correctly. That should end up being something like a 5x speedup.
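    For intuition on copy width, here is a sketch of a byte-at-a-time copy versus an 8-bytes-at-a-time copy (nothing to do with the actual libraries he used; real memcpy implementations are far more sophisticated, with SSE/AVX and alignment tricks):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>

// Byte-at-a-time copy: one byte moved per loop iteration.
void copy_narrow(std::uint8_t* dst, const std::uint8_t* src, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) dst[i] = src[i];
}

// Word-at-a-time copy: eight bytes per iteration, then the tail.
// Wider moves (a 128-bit SSE copy moves 16 at once) are where memcpy
// speedups come from; the result is byte-for-byte identical.
void copy_wide(std::uint8_t* dst, const std::uint8_t* src, std::size_t n) {
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        std::uint64_t w;
        std::memcpy(&w, src + i, 8);   // unaligned-safe load
        std::memcpy(dst + i, &w, 8);   // unaligned-safe store
    }
    for (; i < n; ++i) dst[i] = src[i];
}
```

    Both produce the same bytes; only the number of loop iterations (and hence the time spent) differs, which is the whole ballgame when the deadline is measured in microseconds.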

    1. 9

      I think the most important takeaway is actually that the Windows docs on ReleaseBuffer are unclear. While at the very bottom of the ReleaseBuffer docs we find:

      Clients should avoid excessive delays between the GetBuffer call that acquires a buffer and the ReleaseBuffer call that releases the buffer. The implementation of the audio engine assumes that the GetBuffer call and the corresponding ReleaseBuffer call occur within the same buffer-processing period. Clients that delay releasing a buffer for more than one period risk losing sample data.

      the top says “The ReleaseBuffer method releases the buffer space acquired in the previous call to the IAudioRenderClient::GetBuffer method”, which makes it sound like it’s just a memory free.

      1. 9

        Thank you for writing all this up. I’ve been bugged by the feeling of people punching down in this thread, along with the fact that the poster is actually talking about implementing an audio playback engine, in which case these small differences can manifest as semi-perceptible aural differences due to latency, jitter, and scheduling.

        1. 7

          I’m disappointed by the arrogance of users on this forum.

          This is a programmer community, what’d you expect? Arrogance almost comes with the territory.

          Things like rowhammer/ECCploit, van Eck phreaking and other such side channel attacks show that yes, we live in a physical world and this actually has an impact on the pristine and pure thought-stuff that we manipulate, and still people here refuse to believe that a different algorithm with different CPU branching patterns and RAM access patterns might have observable effects on audio output. I suppose we’re so used to abstracting all that stuff away that we get offended by the very idea that differences in how software utilizes hardware components inside the computer can actually have an impact on “unrelated” hardware.

          1. 4

            I don’t understand. A sound card has a fixed sample rate, as long as the engine is filling the buffer fast enough*, it should be equivalent. If it isn’t, there will be obvious pops and clicks when the buffer underruns. If it is, the output is equivalent. I don’t see how some subtle sound quality difference could arise.

            A fancy sound card has a sample rate of 48 kHz; at 3 bytes per sample, that’s copying around 144 kbytes per second. On a memory bus. Even an 8-bit naive implementation on a Gateway 2000 could do that.
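            Spelling out that arithmetic (mono assumed, as in the parent; stereo just doubles it):

```cpp
#include <cassert>

// PCM throughput: bytes per second = sample rate * bytes per sample * channels.
constexpr long pcm_bytes_per_sec(long rate, int bytesPerSample, int channels) {
    return rate * bytesPerSample * channels;
}
```

            48,000 samples/s at 3 bytes per sample is 144,000 bytes/s mono, 288,000 stereo: a rounding error for any memory bus made this century.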

            1. 1

              If you read the code, the author is literally waiting for the buffer to become exhausted before sending the data, of course that could lead to subtle and not so subtle sound quality differences.

            2. 4

              The things you describe result in defects noticeable by anyone though. That’s different from the allegedly minor differences you’d get due to using ‘gold plated cables’, which is obviously the kind of thing being referenced here. So although you very nicely explain how a memcpy implementation could influence sound, I don’t think it excuses the original inspiration for the OP.

              1. 2

                But then there’s this line in the first post linked to:

                also most players use malloc to get memory while new is the c++ method and sounds better.

                Um … doesn’t new() basically call malloc() under the hood? How can using new() “sound better” than using malloc()?

                1. 1

                  That is a mystery to me. I really don’t know what is going on there, but given the extremely low quality of the code, it appears we shouldn’t really trust the author to tell us which exact changes made a difference. My point was that in this specific broken bit of code, the speed of malloc could potentially be important, and thus the people making fun of the concept were being presumptuous.

                  1. 6

                    I think if the author of the code had demonstrated that they can tell the difference in a true double-blind test, or had reported their testing methodology for determining that a difference existed, people would take it more seriously.

                    As is, it’s just that the author claims it sounds different and we’re expected to believe it. IMO, more likely, the author is hearing a difference because they expect to, is making a fool of themselves in public by posting about it and blaming libc, and now all the townspeople are pointing and laughing.

              1. 1

                I hope that it is not a symptom of a larger trend where programming becomes bureaucratic.

                The trend of tools over skills/experience certainly speaks to that. I think we’re at the peak of the “tools will save the industry!” belief. They help quite a bit but they can’t make a junior dev into a senior one.

                Will be interesting to see if the “tools > everything else” belief holds up once AI-written code has proliferated. Will we grant AI slack because it’s not human? Or require AIs to only write code that conforms to all our tools? Who will fix the AI generated code so it passes all the tooling checks?

                1. 5

                  Yes, you could (maybe) write a proof of concept in a weekend. Working that up into a nice smooth UI with a good onboarding process and documentation, getting decent hosting with good SLAs and speed across multiple regions, building it out into a company that can market the product to end users so they will actually find you, running a helpdesk and legal team to handle the inevitable disgruntled user, and making it all profitable is nontrivial. It requires skills beyond being a good coder, and months, nay, years of work.

                  1. 2

                    building it out into a company that can market the product to end users so they will actually find you…and make it run in a way that’s profitable

                    IME: the engineering is relatively straightforward by comparison to this.

                    1. 1

                      The execution is where the value to society is realized.

                    1. 10

                      Would it kill Dan to put a date on his blog posts?

                      1. 3

                        Based on previous submissions and this confusing page, I’d guess it’s from Oct 2016.

                        1. 1

                          What does a date add here?

                          Most of what Dan writes seems to be pretty insightful observations that are essentially timeless.

                          1. 3

                            It adds that I read this when it came out but I can’t tell until I reread half of it to see if he’s revisiting a topic or it’s a repost.

                        1. 1

                          Excited to see what first-class continuations can do for the effect library world.

                          1. 4

                            On continuations:

                            Say you’re in the kitchen in front of the refrigerator, thinking about a sandwich. You take a continuation right there and stick it in your pocket. Then you get some turkey and bread out of the refrigerator and make yourself a sandwich, which is now sitting on the counter. You invoke the continuation in your pocket, and you find yourself standing in front of the refrigerator again, thinking about a sandwich. But fortunately, there’s a sandwich on the counter, and all the materials used to make it are gone. So you eat it. :-)

                          1. 4

                            I believe shell can be so much better. I’m not convinced that needing arrays is a sign that shell doesn’t fit your problem anyway. Shell is about interacting with the operating system and gluing together processes with ease. I’m hopeful that www.oilshell.org will succeed and raise our expectations for shell programming.

                            1. 2

                              Arrays are nice for passing flag parameters.

                              1. 2

                                The argument list is an array in POSIX sh.

                                1. 1

                                  Sorta? A lot of programs will interpret the value in --foo 1 2 3 as three values separated by spaces, but other programs interpret it as one value which contains spaces. There’s no easy way to indicate a string must be interpreted as an array, which also means things like no multidimensional arrays.

                                  1. 4

                                    There is one and only one array in POSIX sh: $@, which is accessed by $*, $@, or $1, $2, …, and modified by shift and set.

                                2. 1

                                  Also handy for reading a whole bunch of parameters out of something like sqlite in one go

                                  IFS=$'\t' read -r -a arr -d '' < <(sqlite3 -batch -tabs -newline "" data.db "query")
                                  
                                3. 1

                                  Yeah. My experience with shell programming mostly falls in the, “only if nothing else is available,” mental category. It is tedious doing shell-like things in Python but much more readable.

                                1. 43

                                  I still like Zulip after about 5 years of use, e.g. see https://oilshell.zulipchat.com . They added public streams last year, so you don’t have to log in to see everything. (Most of our streams pre-date that and require login)

                                  It’s also open source, though we’re using the hosted version: https://github.com/zulip

                                  Zulip seems to be A LOT lower latency than other solutions.

                                  When I use Slack or Discord, my keyboard feels mushy. My 3 GHz CPU is struggling to render even a single character in the browser. [1]

                                  Aside from speed, the big difference between Zulip and the others is that conversations have titles. Messages are grouped by topic.

                                  The history and titles are extremely useful for avoiding “groundhog day” conversations – I often link back to years old threads and am myself informed by them!

                                  (Although maybe this practice can make people “shy” about bringing up things, which isn’t the message I’d like to send. The search is pretty good though.)

                                  When I use Slack, it seems like a perpetually messy and forgetful present.

                                  I linked to a comic by Julia Evans here, which illustrates that feature a bit: https://www.oilshell.org/blog/2018/04/26.html

                                  [1] Incidentally, same with VSCode / VSCodium? I just tried writing a few blog posts with it, because of its Markdown preview plugin, and it’s ridiculously laggy? I can’t believe it has more than 50% market share. Memories are short. It also has the same issue of being controlled by Microsoft with non-optional telemetry.

                                  1. 9

                                    +1 on zulip.

                                    category theory https://categorytheory.zulipchat.com/
                                    rust-lang https://rust-lang.zulipchat.com/

                                    These are examples of communities that moved there and are way easier to follow than discord or slack.

                                    1. 9

                                      Zulip is light years ahead of everything else in async org-wide communication. The way messages are organized makes it an extremely powerful tool for distributed teams and cross-team collaboration.

                                      The problems:

                                      • Clients are slow when you have 30k+ unread messages.
                                      • It’s not easy (possible?) to follow just a single topic within a stream.
                                      • It’s not federated.
                                      1. 12

                                        We used IRC and nobody except IT folks used it. We switched to XMPP and some of the devs used it as well. We switched to Zulip and everyone in the company uses it.

                                        We self-host. We take a snapshot every few hours and send it to the backup site, just in case. If Zulip were properly federate-able, we could just have two live servers all the time. That would be great.

                                        1. 6

                                          It’s not federated.

                                          Is this actually a problem? I don’t think most people want federation, but easier SSO and single client for multiple servers gets you most of what people want without the significant burdens of federation (scaling, policy, etc.).

                                          1. 1

                                            Sorry for a late reply.

                                            It is definitely a problem. It makes it hard for two organizations to create shared streams. This comes up e.g. when an organization with Zulip for internal communications wants to contract another company for e.g. software development and wants them to integrate into their communications. The contractor needs accounts at the client’s company. Moreover, if multiple clients do this, the people working at the contracted company now have multiple scattered accounts at clients’ instances.

                                            Creating a stream shared and replicated across the relevant instances would be way easier, probably more secure, and definitely more scalable than adding a WAYF step to the relevant SSOs. The development effort that would have to go into making the web client connect to multiple instances would probably also be rather high, and it could not be done incrementally, unlike shared streams, which might simply have some features disabled (e.g. custom emojis) until a way forward is found for them.

                                            But I am not well versed in the Zulip internals, so take this with a couple grains of salt.

                                            EDIT: I figure you might be thinking of e.g. open source projects each using their own Zulip. That sucks, and it would be nice to have an SSO service for all of them, or even have them somehow bound together in some hypothetical multi-server client. I would love that as well, but I am worried that it just wouldn’t scale (performance-wise) without some serious thought about the overall architecture. Unless you are thinking about the Pidgin-style multi-client approach solely at the client level.

                                        2. 7

                                          This is a little off topic, but Sublime Text is a vastly more performant alternative to VSCode.

                                          1. -4

                                            Also off-topic: performant isn’t a word.

                                          2. 3

                                              I feel like topic-first organization of chats, which Zulip does, is the way to go.

                                              1. 16

                                                It still sends some telemetry even if you do all that

                                                https://github.com/VSCodium/vscodium/blob/master/DOCS.md#disable-telemetry

                                                That page is a “dark pattern” to make you think you can turn it off, when you can’t.


                                                In addition, extensions also have their own telemetry, not covered by those settings. From the page you linked:

                                                These extensions may be collecting their own usage data and are not controlled by the telemetry.telemetryLevel setting. Consult the specific extension’s documentation to learn about its telemetry reporting and whether it can be disabled.

                                                1. 4

                                                  It still sends some telemetry even if you do all that

                                                  I’ve spent several minutes researching that, and, in the absence of clear evidence that telemetry is still sent when disabled (evidence which should be easy to collect for an open codebase), I conclude that this is a misleading statement.

                                                  The way I understand it, VS Code is a “modern app”, which uses a boatload of online services. It makes network calls to update itself, update extensions, search the settings, and otherwise provide functionality to the user. Separately, it collects gobs of data with no purpose other than data collection.

                                                  Telemetry disables the second thing, but not the first thing. But the first thing is not telemetry!

                                                  • Does it make network calls? Yes.
                                                  • Can arbitrary network calls be used for tracking? Absolutely, but hopefully the amount of legal tracking allowable is reduced by GDPR.
                                                  • Should VS Code have a global “use online services” setting, or, better yet, a way to turn off node’s networking API altogether? Yes.
                                                  • Is any usage of Berkeley socket API called “telemetry”? No.
                                                  1. 3

                                                    It took me a while, but the source of my claim is VSCodium itself, and this blog post:

                                                    https://www.roboleary.net/tools/2022/04/20/vscode-telemetry.html

                                                    https://github.com/VSCodium/vscodium/blob/master/DOCS.md#disable-telemetry

                                                    Even though we do not pass the telemetry build flags (and go out of our way to cripple the baked-in telemetry), Microsoft will still track usage by default.

                                                    Also, in 2021, they apparently tried to deprecate the old setting and introduce a new one:

                                                    https://news.ycombinator.com/item?id=28812486

                                                    https://imgur.com/a/nxvH8cW

                                                    So basically it seems like it was the old trick of resetting the setting on updates, which was very common in the Winamp, Flash, and JVM days: dark patterns.

                                                    However it looks like some people from within the VSCode team pushed back on this.

                                                    Having worked in big tech, this is very believable – there are definitely a lot of well intentioned people there, but they are fighting the forces of product management …


                                                    I skimmed the blog post and it seems ridiculously complicated, when it just doesn’t have to be.

                                                    So I guess I would say it’s POSSIBLE that they actually do respect the setting in ALL cases, but I personally doubt it.

                                                    I mean it wouldn’t even be a dealbreaker for me if I got a fast and friendly markdown editing experience! But it was very laggy (with VSCodium on Ubuntu.)

                                                    1. 2

                                                      Yeah, “It still sends some telemetry even if you do all that” is exactly what VS Codium claim. My current belief is that’s false. Rather, it does other network requests, unrelated to telemetry.

                                                  2. 2

                                                    These extensions may be collecting their own usage data and are not controlled by the telemetry.telemetryLevel setting.

                                                    That is an … interesting … design choice.

                                                    1. 7

                                                      At the risk of belaboring the point, it’s a dark pattern.

                                                      This was all extremely common in the Winamp, Flash, and JVM days.

                                                      The thing that’s sad is that EVERYTHING is dark patterns now, so this isn’t recognized as one. People will actually point to the page and think Microsoft is being helpful. They probably don’t even know what the term “dark pattern” means.

                                                      If it were not a dark pattern, then the page would be one sentence, telling you where the checkbox is.

                                                      1. 6

                                                        They probably don’t even know what the term “dark pattern” means.

                                                        I’d say that most people haven’t been exposed to genuinely user-centric experiences in most areas of tech. In fact, I’d go so far as to say that most tech stacks in use today are actually designed to prevent the development of same.

                                                        1. 2

                                                          The thing that feels new is how non-user-centric development tools are nowadays. And the possibility of that altering the baseline perception of what user-centric tech looks like.

                                                          Note: feels; it’s probably not been overly-user-centric in the past, but they were a bit of a haven compared to other areas of tech that have overt contempt for users (social media, mobile games, etc).

                                                      2. 4

                                                        That is an … interesting … design choice.

                                                        How would you do this differently? The same is true about any system with plugins, including, eg, Emacs and Vim: nothing prevents a plug-in from calling home, except for the goodwill of the author.

                                                        1. 3

                                                          Kinda proves the point, tbh. To prevent a plugin from calling home, you have to actually try to design the plugin API to prevent it.

                                                          1. 4

                                                            I think the question stands: how would you do it differently? What API would allow plugins to run arbitrary code—often (validly) including making network requests to arbitrary servers—but prevent them from phoning home?

                                                            1. 6

                                                              Good question! First option is to not let them make arbitrary network requests, or require the user to whitelist them. How often does your editor plugin really need to make network requests? The editor can check for updates and download data files on install for you. Whitelisting Github Copilot or whatever doesn’t feel like too much of an imposition.

                                                              1. 4

                                                                Capability security is a general approach. In particular, https://github.com/endojs/endo

                                                                For more… https://github.com/dckc/awesome-ocap

                                                              2. 3

                                                                More fun: you have to design a plugin API that doesn’t allow phoning home but does allow using network services. This is basically impossible. You can, though, define a plugin mechanism with fine-grained permissions and a UI that shows big red warnings when things want network access, and enforce store policies requiring plugins to report all the tracking they do.

                                                              3. 1

                                                                nothing prevents a plug-in from calling home, except for the goodwill of the author.

                                                                Traditionally, this is prevented by repos and maintainers who patch the package if it’s found to be calling home without permission. And since the authors know this, they largely don’t add such functionality in the first place. Basically, this article: http://kmkeen.com/maintainers-matter/ (http only, not https).

                                                                1. 1

                                                                  We don’t necessarily need mandatory technical enforcement for this, it’s more about culture and expectations.

                                                                  I think that’s the state of the art in many ecosystems, for better or worse. I’d say:

                                                                  • The plugin interface should expose the settings object, so the plugin can respect it voluntarily. (Does it currently do that?)
                                                                  • The IDE vendor sets the expectation that plugins respect the setting
                                                                  • A plugin that doesn’t respect it can be dealt with the same way that, say, malware is dealt with.

                                                                  I don’t know anything about the VSCode ecosystem, but I imagine that there’s a way to deal with say plugins that start scraping everyone’s credit card numbers out of their e-mail accounts.

                                                                  Every ecosystem / app store- type thing has to deal with that. My understanding is that for iOS and Android app stores, the process is pretty manual. It’s a mix of technical enforcement, manual review, and documented culture/expectations.


                                                                  I’d also not rule out a strict sandbox that can’t make network requests. I haven’t written these types of plugins, but as others pointed out, I don’t really see why they would need to access the network. They could be passed the info they need, capability style, rather than searching for it all over your computer and network!

                                                                  1. 1
                                                                  2. 1

                                                                    Sure, but they don’t offer a “disable telemetry” setting.

                                                                    What I’d do, would be to sandbox plugins so they can’t do any network I/O, then have a permissions system.

                                                                    You’d still rely on an honour system to an extent; because plugin authors could disguise the purpose of their network operations. But you could at least still have a single configuration point that nominally controlled telemetry, and bad actors would be much easier to spot.

                                                                    1. 1

                                                                      There is a single configuration point which nominally controls the telemetry, and extensions should respect it. This is clearly documented for extension authors here: https://code.visualstudio.com/api/extension-guides/telemetry#custom-telemetry-setting.

                                                          1. 1

                                                            Nice write-up. I’m confused why Infallible necessitates special syntax (!) though?

                                                            1. 3

                                                              The special syntax, !, is experimental and gated behind a feature flag in nightly. I believe the reasoning is mostly around allowing a syntactic special case. Today, Infallible is available in stable and has identical type semantics.

                                                              Infallible is useful in order to, for instance, declare that a Result will never error by giving it a type like Result<T, Infallible>. This also gives reason to its name.

                                                              Unfortunately, its name is somewhat more specific than its semantics, as the following type-checks but reads weirdly.

                                                              use std::convert::Infallible;

                                                              fn never_returns() -> Infallible {
                                                                  loop { }
                                                              }
                                                              
                                                              1. 2

                                                                Ah! Makes sense, also explains why Swift’s Never type pops up when using Combine, which is their reactive stream library.

                                                                1. 2

                                                                  for exactly that reason, I think the name Infallible would have been better for the whole thing – like:

                                                                  use std::str::FromStr;

                                                                  type Infallible<T> = Result<T, Void>;

                                                                  // Values of type Void cannot be constructed
                                                                  enum Void {}

                                                                  fn never_returns() -> Void {
                                                                      loop {}
                                                                  }

                                                                  // using Infallible to implement some error-aware trait
                                                                  impl FromStr for () {
                                                                      type Err = Void;
                                                                      // from_str() is Infallible. Infallibly, this function returns a `()`
                                                                      fn from_str(s: &str) -> Infallible<()> {
                                                                          Ok(())
                                                                      }
                                                                  }
                                                                  
                                                                  1. 1

                                                                    I definitely see what you’re saying. I think Infallible is a decent historical compromise, though I also wish they just called it Void or Nothing.

                                                              1. 2

                                                                There’s so much work that needs to be done well, which means there’s plenty of work for you to do well. Focus more on that than what the comment section thinks, because they aren’t the ones actually arbitrating your success, they’re mostly just making noise.

                                                                Also: kill your tech idols. They’re just people who have worked hard, been lucky to be in the right place at the right time, and had the privilege to hold onto a wave for long enough to become known.

                                                                1. 5

                                                                  More people should try Deno. It would wipe out at least two thirds of the frustrations on this list.

                                                                  start a project
                                                                  start another project

                                                                  “Starting a project” in Deno is “create a file ending with .ts”. It’s basically free.

                                                                  realize shared code between both projects, create another project
                                                                  symlink to the shared code
                                                                  typescript compiler rejects symlink code, “out of rootdir”
                                                                  add rootdirs

                                                                  Deno uses ESM for module resolution, so importing code from another project is just import * as other from "../other_module/mod.ts";

                                                                  tools like ts-node don’t support --traceResolution to debug things
                                                                  use tsc directly

                                                                  Deno just runs .ts files, has debugger support, and stack traces pointing to your .ts source.

                                                                  try workspaces, mull over the countless custom solutions (lerna, nx, …)
                                                                  obviously nothing works, they all rely on symlinks

                                                                  You don’t need workspaces if you don’t have a dependency chain of build steps or per-project node_modules.

                                                                  And if you’re building web projects, you can use light tools like esbuild to create a bundle which runs in the browser. Never looking back!

                                                                  1. 2

                                                                    This comment definitely piqued my interest in using Deno as a toolchain for things, be they web or not.

                                                                    I’m also starting to see too-complex build processes as mostly self-inflicted wounds. The pain stops when you decide to make it stop and insist on tools that limit that complexity.

                                                                  1. 7

                                                                    I think this post is worse for the reference to 10,000 hours (instead of 15 years) in the title and body, and for including the Malcolm Gladwell quote. 10,000 hours of doing something is not the same as 10,000 hours of deliberate practice, which the Outliers quote is about (my emphasis):

                                                                    The key to achieving world-class expertise in any skill, is to a large extent, a matter of practicing the correct way, for a total of around 10,000 hours

                                                                    1. 2

                                                                      I don’t understand what you mean, isn’t it possible to achieve this with 10,000 hours of disciplined practice? In a programming setting this would be writing the very best code you can as much as possible.

                                                                      1. 10

                                                                        The original 10,000 hours claim is a misinterpretation of the researcher Malcolm Gladwell is citing. The researcher, Anders Ericsson, said that what matters is the amount of time spent in “Deliberate Practice”, which is practice where

                                                                        1. You are pushed out of your comfort zone,
                                                                        2. On specific skills,
                                                                        3. Using a “broadly accepted” training regimen,
                                                                        4. With immediate feedback from a trainer.

                                                                        Ericsson proposed that only deliberate practice mattered in determining your skill level. This is being challenged in modern research, which argues that genetics, talent, and inborn motivation also play significant roles. As far as I know, though, just doing the best you can in day-to-day life isn’t a major factor: time you spend practicing has to be for practicing.

                                                                        1. 5

                                                                          Ten minutes of drumming to the metronome is way more effective than one hour jamming

                                                                          1. 3

                                                                            “Perfect practice makes perfect.” My high school orchestra teacher drilled this into our heads. He would stop the entire orchestra to nitpick one student’s bow grip, or another’s tuning, but it was always worthwhile.

                                                                          2. 4

                                                                            Writing the very best code you can may not force you to practice the breadth & depth of skills required to develop expertise in an area.

                                                                            You could spend a day learning recursion and doing some exercises then spend a year converting a large code base to use recursion everywhere and remove all plain loops from it.

                                                                            You might encounter some challenging cases doing that, and those few cases will help you develop better skills but in all likelihood you’ll be spending most of this time mechanically applying a skill you already acquired, probably on that first day, over and over again. Those hours don’t count.

                                                                            I would refine Ericsson’s model to say that it’s less about wall-clock time and more about the cycles of “do, receive feedback, improve” you go through. The “tick” of the practice clock happens in the brain of the learner and can be perceived as a mini revelation or a sudden small improvement in performing a skill.

                                                                            The purpose of a good teacher or educational program is to improve the efficiency of learning by maximizing the number of those “ticks” within the limited resources available (e.g., time, cost, etc). The problem is those “ticks” require mental effort, so highly efficient programs are often perceived as “over complicated”. This is because they’re intentionally designed to deepen your skill or level of understanding in a stepwise way in which each step challenges your current level.

                                                                            People naturally tend to prefer learning at a lower pace, or spending a lot of time doing things that don’t require as much mental effort but also don’t result in much knowledge or skill acquisition. That doesn’t mean those things are less valuable, but it’s good to distinguish between education and entertainment.

                                                                        1. 5

                                                                          I’m really desperate for a tool to preserve these websites in an “open web” way. Of course, archive.org exists, but if (when?) their datacenter catches fire or similar, everything on it might be lost as well. I think solutions like archivebox handle the archiving part well, but there’s no clear story on how to easily host archived sites and make them discoverable.

                                                                          1. 5

                                                                            Of course, archive.org exists, but if (when?) their datacenter catches fire or similar, everything on it might be lost as well.

                                                                            Maybe it’s a good idea to donate, then, so that doesn’t happen.

                                                                            However, I agree that decentralizing these things is a good idea. I know archive.org had a browser extension or something at some point to help with indexing things that crawlers have a hard time reaching. Maybe it would be worthwhile to build off of that so both benefit?

                                                                            1. 4

                                                                              I want to move to a world where an entire web site, as of a particular moment in time, exists as a snapshot in a distributed content-addressed storage system and your browser can be readily directed to download the entire thing

                                                                              this would of course necessarily entail having fewer features that depend on server interaction, but I think uh… most sites should not be apps, heh

                                                                              I’m aware that this is sort-of throwing a technical solution at a social problem, but I think in this case the technology could dovetail well with a cultural change where site owners want to do something about preservation - it would give an easy, immediately actionable thing that people who care can do that makes a real difference

                                                                              1. 3

                                                                                Have you looked into IPFS?

                                                                                https://ipfs.tech/

                                                                                1. 2

                                                                                  I have, yes! I think IPFS is a very solid architecture, should definitely be the basis of anything like this, and probably solves about 90% of the problem. Of the part that’s left, most of it is documentation that explains what people might want to do, why, and how, and the smallest part is any small glue code that’s needed to make that easy.

                                                                                  1. 3

                                                                                    One idea I had was for an appliance thing that could bring static IPFS blog / site publishing to the masses. Something like:

                                                                                    1. A SBC (RPi, Rock64, whatevs) running a Free OS.
                                                                                    2. Some sort of file share on it that was mDNS discoverable.
                                                                                    3. Each of these appliances has a (changeable, but default) unique IPNS identifier, with a QR code sticker on it that you can scan and share however you want (social media, IRL, as text, as an image, etc.)

                                                                                    Then you just write your content, copy it to the box (Samba? SFTP? …?), it generates a static site, you eyeball it, then hit ‘publish’ when you’re happy.

                                                                                    Aim would be for it to be simple enough for non-techies to use. There is a lot of devil in that detail, though. Some things I was spiking:

                                                                                    1. How to trigger the static generation? Samba is very bad at knowing when a file operation is “done”.
                                                                                    2. How to keep the thing updated and secure? I looked into Ubuntu’s IoT infra but there’s an entire herd of yaks to shave there.
                                                                                    3. How to support Windows? which still doesn’t do mDNS well, last I looked.

                                                                                    Etc.

                                                                                    1. 1

                                                                                      I’m all for this. This is very similar to what I’ve been thinking about. I would personally choose sftp over Samba because managing ssh credentials is a skill that I think is very empowering and worth teaching, and because I never like tying my future to the whims of a megacorp, but that does incur an additional burden for documentation, since most people won’t know how to use it.

                                                                                      Your point 1 brings up another possibility though, which is using git-over-ssh. Then the generation can be kicked off by an on-push trigger in git.

                                                                                      With regard to your point 2 I personally lean very heavily towards NixOS as it’s good at this sort of thing, but teaching people how to manage appliances like this is a big writing task. I’m not a technical writer, and I’m not really the right person to take that on, although I’m always happy to chat with anyone who does.

                                                                                      Windows support does seem quite challenging, I don’t have good answers there.

                                                                                      1. 1

                                                                                        which is using git-over-ssh

                                                                                        Yeaaaahhhhhhhhhhh … I’m kinda reluctant to have to expose non-techies to Git. It’d be perfect for a coding-savvy market though.

                                                                                        but teaching people how to manage appliances like this is a big writing task.

                                                                                        I was thinking of something that wouldn’t have to be managed … updates would “just happen”. That turns out to be surprisingly difficult (c.f. herd of yaks).

                                                                                        It’s surprising to me that releasing an open-source appliance like this would still be a lot of work, but honestly, it really does seem like it would.

                                                                                  2. 1

                                                                                    I’ve even started to build an archival app on top of it, but there are many thorny problems. How do you ensure the authenticity of archives published by other people? Where and how do you index archived content across the network? How do you get other people to re-host already archived content? How do you even get enough people interested in this to make it useful at all?

                                                                                2. 4

                                                                                  I’m definitely interested in this as well; I’ve started to believe that personal archives of sites/articles are the most resilient way to preserve information.

                                                                                1. 21

                                                                                  My favorite: write a version number out when serializing data. Best case, you don’t need it and you wasted a few bytes. You don’t want to be in the position of needing to know whether you’re looking at old or new data, and inventing weird rules to version it accordingly.

                                                                                  Not that I had to deal with that recently. Nope, not at all. :)
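A minimal sketch of the habit in Python, using a hypothetical little binary format (the names and layout are invented for illustration):

```python
import struct

FORMAT_VERSION = 1  # one byte "wasted" today, a lifeline tomorrow

def serialize(values):
    # Header: version byte plus a count, then a payload of i32s.
    header = struct.pack("<BH", FORMAT_VERSION, len(values))
    payload = struct.pack(f"<{len(values)}i", *values)
    return header + payload

def deserialize(data):
    version, count = struct.unpack_from("<BH", data)
    if version != FORMAT_VERSION:
        # A future reader can branch on old layouts right here,
        # instead of guessing from the bytes themselves.
        raise ValueError(f"unsupported format version {version}")
    return list(struct.unpack_from(f"<{count}i", data, offset=3))
```

Round-tripping works today, and a hypothetical version-2 reader can tell version-1 data apart at a glance instead of inventing weird rules after the fact.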

                                                                                  1. 5

                                                                                    A generalization of this that I always socialize at new jobs: save one wish so you can wish for more wishes. A version field, a flags field, an extensions count that’s always zero… anything is fine!

                                                                                    1. 3

                                                                                      Yes. Also, if you’re writing your own packed binary format, put a magic number at the beginning. It is best to break tools which don’t recognize your format, and to break them before they can possibly make any decisions or take any actions. (Also, don’t write your own packed binary format, etc.)
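A sketch of the same idea in Python (the magic bytes and version value here are made up for illustration):

```python
import struct

MAGIC = b"MYFM"  # hypothetical four-byte magic for this format
VERSION = 1

def write_file(payload: bytes) -> bytes:
    # Magic goes first: tools that don't recognize the format fail
    # immediately, before they can misread anything that follows.
    return MAGIC + struct.pack("<B", VERSION) + payload

def read_file(data: bytes) -> bytes:
    if data[:4] != MAGIC:
        raise ValueError("not a MYFM file")
    if data[4] != VERSION:
        raise ValueError(f"unsupported version {data[4]}")
    return data[5:]
```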

                                                                                    1. 4

                                                                                      Good take.

                                                                                      I hope that there can be a bit more balance to this discussion; discourse tends to skew negative because it attracts people who want to vent and/or justify their own disengagement. That ends up framing the characteristics of quality/reliability as too expensive or quixotic. The result is a mind-virus that people share amongst themselves: “unit testing is too hard,” “quality is for FAANG-level engineers,” “we need to use as many third-party deps as we can because library authors are super geniuses and we aren’t.”

                                                                                      It’s all so self-defeating and sad.

                                                                                      1. 5

                                                                                        The idea of shipping a massive system to users without strict types seems insane to me. How do you know your tests cover every viable path through your code? How on earth do you maintain a legacy codebase with 0 types and 0 documentation? How do you ensure any code path is valid if values can be any type at any time? I wish these were strawmen, but I keep running into abandoned (and fresh) Python apps that have these problems.

                                                                                        With typed languages, I used to think: ugh, I’d have to define types everywhere, make structs (or classes) in advance, and sometimes make wild guesses because those things were unknown or not finalised. All of these are valid complaints. If I am prototyping in Rust, I will take a lot of time. With Python, I can ship fast. But only once.

                                                                                        I hate how discourse for static vs dynamic typing pits the two extremes against each other. TypeScript and other gradual type systems are a great middle ground: gradually add types where your team sees fit. You could completely ignore types while developing and make adding type signatures a blocker for merging.
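Python’s optional annotations illustrate the same gradual approach; a toy sketch (function names invented):

```python
from typing import Optional

# Annotated and unannotated code coexist: a checker such as mypy
# verifies the typed parts and leaves the rest alone, so a team can
# add signatures module by module -- or gate merges on them.

def parse_port(raw):  # still untyped: fine while prototyping
    return int(raw)

def connect(host: str, port: Optional[int] = None) -> str:
    # Annotated later, once the interface settled.
    if port is None:
        port = parse_port("8080")
    return f"{host}:{port}"
```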

                                                                                        1. 4

                                                                                          The idea of shipping a massive system to users without strict types seems insane to me. How do you know your tests cover every viable path through your code?

                                                                                          This is one reason I ran away screaming from webdev: everyone wanted to do exactly this, complained that test times were insane (because they didn’t want to take the time to design for testability), and complained that upgrading their framework was painful (because they coupled it directly to the framework). Doing otherwise was taboo, something something not delivering business value.

                                                                                          They were so intent on “disruption” they didn’t have time to stop and think about what would enable them to do that as effectively as possible.

                                                                                          1. 2

                                                                                            because they didn’t want to take the time to design for testability

                                                                                            Keep in mind that this means very different things in dynamically-typed languages than it does in statically-typed languages. Statically-typed languages basically force you into Enterprise Code™ patterns because of the inflexibility of the language – you have to do dependency injection and inversion of control and five dozen “decoupled” layers of abstraction between you and the world, because the language itself is incapable of letting you do anything else.

                                                                                            But in dynamically-typed languages, many of those patterns stop being relevant because the language-level constraints that inspired the patterns are not present. Consider, for example, the unittest.mock module in the Python standard library: a gigantic swathe of “testability” patterns from statically-typed languages go out the window because of this one tool, which in turn can only exist because of high runtime dynamism in the language itself.
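For instance, a sketch of the kind of test this enables (function and file name are invented):

```python
import json
from unittest import mock

def load_config(path):
    # Plain code, written with no injected abstractions in sight.
    with open(path) as f:
        return json.load(f)

def test_load_config():
    # mock.patch swaps out the real builtin `open` at runtime, so
    # load_config needed no dependency-injection scaffolding at all.
    fake = mock.mock_open(read_data='{"debug": true}')
    with mock.patch("builtins.open", fake):
        assert load_config("settings.json") == {"debug": True}
```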

                                                                                            Meanwhile, I’m not sure why “test times were insane” is a counterargument to dynamic typing, considering how many times I hear that people are desperate for compiler speedups because of the multi-hour build times for their supposedly well-engineered software.

                                                                                          2. 2

                                                                                            Another middle ground is Crystal which I’m using more and more recently where I’d use Python otherwise. There’s enough type inference that I add a type signature maybe once every 50+ lines. Meanwhile all the rest of the code is automagically typed and the compiler will tell me where I made mistakes.

                                                                                            (Most of the typing necessary is for collections, where… yeah, annotations come in extremely useful and stop accidents)

                                                                                            1. 1

                                                                                              How do you know your tests cover every viable path through your code?

                                                                                              By writing unit tests, the same way you’d do it in a statically-typed language. I’ve never seen a statically-typed language that automatically derives prose documentation and 100% path coverage from the type declarations, so I’m not sure why static typing is supposed to affect this.

                                                                                              How on earth do you maintain a legacy codebase with 0 types and 0 documentation?

                                                                                              Well, I hate to be the bearer of bad news, but legacy projects in any language tend not to come with useful documentation. And given how easy it is to do things like accept a HashMap of “arguments” in Java and Java-like languages, or to just declare everything as interface in Go, I don’t really see how the type signatures are expected to be a panacea. And if you think there aren’t significant real-world codebases doing those things, you’ve got another think coming – static types don’t and didn’t do anything to prevent that.

                                                                                              How do you ensure any code path is valid if values can be any type at any time?

                                                                                              You test the paths you support. If a client of your code does something wrong, their tests will break. Not because you wrote tests to exhaustively type-check your arguments, but because passing the wrong type of argument tends to throw a runtime – and unit testing is runtime! – error anyway.

                                                                                              I keep running into abandoned (and fresh) Python apps that have these problems.

                                                                                              It’s possible to write terrible undocumented abandonware in any language.

                                                                                            1. 1

                                                                                              I have a coworker who keeps using not and and operators and such in his code. Threw me for a hell of a loop when I first saw it. I have no idea how to make him stop, since he actually is far, far better at C++ than I am. XD

                                                                                              Also, didn’t C officially deprecate the digraphs and such at some point recently?

                                                                                              1. 4

                                                                                                Trigraphs got removed … I didn’t know about the digraphs until now! They seem redundant with the trigraphs, though they’re less ugly and more mnemonic; maybe that was the point?

                                                                                                I did know about “and” and “or”, and kind of like the way they look, but I’ve never used them for fear of sowing confusion.

                                                                                                1. 3

                                                                                                  Digraphs are not redundant – they were introduced specifically so that trigraphs could be removed. Trigraphs were universally despised because, unlike digraphs, they were translated wherever they appeared in the source code, including text within strings and comments. (This was necessary because the backslash was one of the characters for which trigraphs offered an alternate encoding.) Thus e.g. a string containing the text "Huh?!?" would pass unmolested, but the string "Huh??!" would silently be transformed into "Huh|" – sometimes to hilarious effect.

                                                                                                2. 4

                                                                                                  What exactly is the problem?

                                                                                                  1. 2

                                                                                                    Ask them why they are using unidiomatic operators that force all readers to do a double take?

                                                                                                    1. 1

                                                                                                      Ask the standard why it includes unidiomatic operators instead of blaming your colleague for using the standard?

                                                                                                    2. 1

                                                                                                      I have no idea how to make him stop

                                                                                                      Have you considered violence? What about extreme violence?

                                                                                                    1. 57

                                                                                                      The way this PR was written made it almost seem like a joke

                                                                                                      Nobody really likes C++ or CMake, and there’s no clear path for getting off old toolchains. Every year the pain will get worse.

                                                                                                      and

                                                                                                      Being written in Rust will help fish continue to be perceived as modern and relevant.

                                                                                                      To me this read a lot like satire poking fun at the Rust community. Took me some digging to realize this was actually serious! I personally don’t care what language fish happens to be written in. As a happy user of fish I just really hope this doesn’t disrupt the project too much. Rewrites are hard!

                                                                                                      1. 52

                                                                                                        This is what it looks like when someone is self-aware :-)

                                                                                                        They looked at the tradeoffs, made a technical decision, and then didn’t take themselves too seriously.

                                                                                                        1. 14

                                                                                                          Poe’s Law is strong with this one. Not knowing the author of Fish, I genuinely can’t tell whether the commentary is 100% in earnest, or an absolutely brilliant satire.

                                                                                                          1. 30

                                                                                                            Given the almost 6,000 lines of seemingly high quality Rust code, I’m going to say it’s not a joke.

                                                                                                            1. 27

                                                                                                              Gotta commit to the bit.

                                                                                                              1. 3

                                                                                                                Oh, sure! I meant the explanation in the PR, not the code itself.

                                                                                                              2. 3

                                                                                                                Same. After doing some research into the PR though, I’m pretty sure it’s in earnest. XD

                                                                                                              3. 1

                                                                                                                For sure! After I looked deeper and found that this person is a main contributor to fish things made more sense. I totally respect their position and hope things go well. I just thought the way it was phrased made it hard to take seriously at first!

                                                                                                              4. 28

                                                                                                                The author understands some important but often underappreciated details. Since they aren’t paying anyone to work on the project, it has to be pleasant and attractive for new contributors to want to join in.

                                                                                                                1. 3

                                                                                                                  It only “has to be” if the project wants to continue development at an undiminished pace. For something like a shell that seems like a problematic mindset, albeit an extremely common one.

                                                                                                                  1. 13

                                                                                                                    For something like a shell that seems like a problematic mindset

                                                                                                                    Must it?

                                                                                                                    Fish seldom plays the role of “foundational scripting language”. More often it’s the interactive frontend to the rest of your system. This port enables further pursuit of UX and will allow for features I’ve been waiting for for ages.

                                                                                                                    1. 2

                                                                                                                      For something like an interactive shell, I generally feel that consistency beats innovation when it comes to real usability. But if there are features that still need to be developed to satisfy the fish user base, I suppose more development is needed. What features have you been waiting for?

                                                                                                                      1. 11

                                                                                                                        https://github.com/fish-shell/fish-shell/pull/9512#issuecomment-1410820102

                                                                                                                        One large project has been to run multiple fish builtins and functions “at the same time”, to enable things like backgrounding functions (ideally without using “subshells” because those are an annoying environment boundary that shows up in surprising places in other shells), and to simply be able to pipe two builtins into each other and have them actually process both ends of the pipe “simultaneously”.

                                                                                                                        There have been multiple maintainer comments over the years in various issues alluding to the difficulty of adding concurrency features to the codebase. e.g. https://github.com/fish-shell/fish-shell/issues/238#issuecomment-150705108

                                                                                                                2. 24

                                                                                                                  Nobody really likes C++ or CMake, and there’s no clear path for getting off old toolchains. Every year the pain will get worse.

                                                                                                                  I think that the “Nobody” and “pain” there may have been referring to the dev team, not so much everyone in the world. In that context it’s a little less outlandish a statement.

                                                                                                                  1. 28

                                                                                                                    It’s also not really outlandish in general. Nobody likes CMake. How terrible CMake is, is a common topic of conversation in the C++ world, and C++ itself doesn’t exactly have a reputation for being the language everyone loves to use.

                                                                                                                    I say as someone who does a whole lot of C++ development and would pick it above Rust for certain projects.

                                                                                                                    1. 13

                                                                                                                      Recent observation from Walter Bright on how C++ is perceived:

                                                                                                                      He then said that he had noticed in discussions on HN and elsewhere a tectonic shift appears to be going on: C++ appears to be sinking. There seems to be a lot more negativity out there about it these days. He doesn’t know how big this is, but it seems to be a major shift. People are realizing that there are intractable problems with C++, it’s getting too complicated, they don’t like the way code looks when writing C++, memory safety has come to the fore and C++ doesn’t deal with it effectively, etc.

                                                                                                                      From https://forum.dlang.org/post/uhcopuxrlabibmgrbqpe@forum.dlang.org

                                                                                                                      1. 9

                                                                                                                        That’s totally fine with me.

                                                                                                                        My retirement gig: maintaining and rescuing old C++ codebases that most devs are too scared/above working on. I expect it to be gross, highly profitable, and not require a ton of time.

                                                                                                                        1. 7

                                                                                                                          C programmers gonna have their COBOL programmer in 1999 moment by the time 2037 rolls around.

                                                                                                                        2. 4

                                                                                                                          C++ appears to be sinking

                                                                                                                          And yet, it was the ‘language of the year’ from TIOBE’s end-of-year roundup for 2022, because it showed the largest growth of all of the languages in their list, sitting comfortably at position 3 below Python and C. D shows up down at number 46, so might be subject to some wishful-thinking echo-chamber effects. Rust was in the top 20 again, after slipping a bit.

                                                                                                                          TIOBE’s rankings need to be taken with a bit of a grain of salt, because they’re tracking a lot of secondary factors. OpenHub tracks more objective things, and it’s also showing a steady increase in the number of lines of C++ code changed each month over the last few years.

                                                                                                                          1. 40

                                                                                                                            TIOBE has +/- 50% error margin and even if the data wasn’t unusable, it’s misrepresented (measuring mentions picked by search engine algorithms over a historical corpus, not just current year, not actual usage). It’s so bad that I think it’s wrong to even mention it with “a grain of salt”. It’s a developer’s horoscope.

                                                                                                                            TIOBE thinks C popularity has halved one year and tripled next year. It thinks a niche db query language from a commercial product discontinued in 2007 is more popular in 2023 than TypeScript. I can’t emphasize enough how garbage this data is, even the top 10. It requires overlooking so many grave errors that it exists only to reinforce preexisting beliefs.


                                                                                                                            Out of all flawed methods, I think RedMonk is the least flawed one: https://redmonk.com/rstephens/2022/10/20/top20-jun2022/ although both RedMonk and OpenHub are biased towards open-source, so e.g. we may never learn how much Ada DoD actually uses.

                                                                                                                            1. 10

                                                                                                                              My favourite part about the RedMonk chart is that it shows Haskell going out through the bottom of the chart, and Rust emerging shortly afterwards, but in a slightly darker shade of red which, erm, explains a lot of things.

                                                                                                                    2. 17

                                                                                                                      The rationale provided tracks for me as someone who is about to replace an unpopular C++ project at work with Rust. Picking up maintenance of someone else’s C++ project who is no longer at the company vs. picking up someone else’s Rust project have looked very different in terms of expected pain / risk IME.

                                                                                                                      “Getting better at C++” isn’t on my team’s dance card but “getting better at Rust” is which helps here. Few working programmers know anything about or understand native build tooling these days. I’m the resident expert because I know basics like why you provide a path argument to cmake. I’m not actually an expert but compared to most others in my engineering-heavy department I’m as good as it gets. Folks who do a lot of C++ at work or at home might not know how uncommon any thoroughgoing familiarity with C and C++ is getting these days. You might get someone who took one semester of C to say “yeah I know C!” but if you use C or C++ in anger you know how far that doesn’t go.

                                                                                                                      I’m 34 years old and got my start compiling C packages for Slackware and the like. I don’t know anyone under 30 that’s had much if any exposure unless they chose to work in embedded software. I barely know what I’m doing with C/C++ despite drips and drabs over the years. I know enough to resolve issues with native libraries, FFI, dylibs, etc. That’s about it beyond modest modifications though.

                                                                                                                      tl;dr it’s difficult getting paid employees to work on a C++ project. I can’t imagine what it’s like getting unpaid volunteers to do so.

                                                                                                                      1. 13

                                                                                                                        It does seem weird. We find it easier to hire C programmers than Rust programmers and easier to hire C++ programmers than either. On the other hand, there do seem to be a lot of people that want a project to hack on to help them learn Rust, which might be a good opportunity for an open source project (assuming that you are happy with the code quality of learning-project Rust contributions).

                                                                                                                        1. 27

                                                                                                                          The difficulty is that you need to hire good C++ programmers. Every time some vulnerability or footgun in C++ is discussed, people say it’s not C++’s fault, it’s just a crappy programmer.

                                                                                                                          OTOH my experience from hiring at Cloudflare is that it’s surprisingly easy to onboard new Rust programmers and have them productively contribute to complex projects. You tell them not to use unsafe, and they literally won’t be able to cause UB in the codebase.

                                                                                                                        2. 4

                                                                                                                          I personally don’t care what language fish happens to be written in

                                                                                                                          You might not, but a lot of people do.

                                                                                                                          I wrote a tool for myself on my own time that I used often at work. Folks really liked what it could do, there’s not a tool like it, and it handled “real” workloads being thrown at it. But not a single person wanted anything to do with it, since it was written in an esoteric language. I’m rewriting it in a “friendlier” language.

                                                                                                                          It seems like the Fish team thought it through, weighed risks and benefits, have a plan, and have made good progress, so I wish them the best.

                                                                                                                          1. 4

                                                                                                                            Not a single person wanted anything to do with it, since it was written in an esoteric language.

                                                                                                                            Oo which language?

                                                                                                                            1. 1

                                                                                                                              I’d rather not say, I don’t want anyone to feel bad. It’s sufficient to say, “As of today, not in the TIOBE Index top 20.”

                                                                                                                              The bigger point is that it was a tool I had been using for over a year, which significantly improved my efficiency and quality of life, and it got rejected for being an esoteric tech, even though I provided executable binaries.

                                                                                                                              1. 1

                                                                                                                                That sucks. Yeah, I don’t mean to ask to hurt anyone’s feelings, I’m just always curious to know what people think are “esoteric”, cuz esoteric on lobste.rs (Factor, J, one of the advent of code langs) is going to be very different than esoteric at my job (haskell, rust).

                                                                                                                          2. 4

                                                                                                                            As a happy user of fish I just really hope this doesn’t disrupt the project too much. Rewrites are hard!

                                                                                                                            Same here. As a user, it doesn’t bother me which language it is written in. They should absolutely pick the language that allows them to be more productive and deliver more. I have been a happy fish user for 13 years; it is software that proved useful from day one. And every release there are clear, important improvements, oftentimes new UX additions. I wish them a smooth migration.

                                                                                                                            1. 4

                                                                                                                              If you’re curious about the size of the rewriting project: I ran tokei on the repo and it counted 49k lines of C++, 8k lines of headers, and 1k lines of CMake (and 57k lines of Fish, so there’s also a lot that won’t need to be rewritten)

                                                                                                                              1. 3

                                                                                                                                They posted this little bit later:

                                                                                                                                Since this PR clearly escaped our little bubble, I feel like we should add some context, because I don’t think everyone caught on to the joking tone of the opening message (check https://fishshell.com/ for similar writing - we are the shell for the 90s, after all), and really got what the idea here is.

                                                                                                                                1. 3

                                                                                                                                  The follow up contains:

                                                                                                                                  Fish is a fairly old codebase. It was started in 2005

                                                                                                                                  Which means I still can’t tell the degree to which he’s joking. The idea that a codebase from 2005 is old is mind boggling to me. It’s not even 20 years old. I’ve worked on a lot of projects with code more than twice that age.

                                                                                                                                  1. 1

                                                                                                                                    To put things into perspective, 2005 to 2023 is 18 years — that is the entire lifespan of the classic MacOS.

                                                                                                                                    Or, to put things into perspective, the Mac has switched processor architectures twice since the Fish project was started.

                                                                                                                                    Most software projects just rot away in 18 years because needs or the surrounding ecosystems change.

                                                                                                                                    1. 2

                                                                                                                                      To put things into perspective, 2005 to 2023 is 18 years — that is the entire lifespan of the classic MacOS.

                                                                                                                                      Modern macOS is a direct descendant of NeXTSTEP though, which originally shipped in 1989 and was, itself, descended from 4BSD and CMU Mach, which are older. Most of the GNU tools are a similar age. Bash dates back to 1989.

                                                                                                                                      Most software projects just rot away in 18 years because needs or the surrounding ecosystems change.

                                                                                                                                      That’s probably true, but it’s a pretty depressing reflection on the state of the industry. There are a lot of counter examples and a lot of widely deployed software is significantly older. For example, all of the following have been in development for longer than fish:

                                                                                                                                      • The Linux kernel (1991)
                                                                                                                                      • *BSD (1991ish, depending on when you count, pre-x86 BSD is older)
                                                                                                                                      • Most of the GNU tools (1980s)
                                                                                                                                      • zsh (1990)
                                                                                                                                      • NeXTSTEP / OPENSTEP / macOS (1989)
                                                                                                                                      • Windows NT (1993)
                                                                                                                                      • MS Office (1990)
                                                                                                                                      • SQL Server (1989)
                                                                                                                                      • PostgreSQL (1996)
                                                                                                                                      • Apache (1995)
                                                                                                                                      • StarOffice / OpenOffice / LibreOffice (original release was 1985!)
                                                                                                                                      • MySQL (1995)
                                                                                                                                      • Netscape Navigator / Mozilla / Firefox (1994)
                                                                                                                                      • KHTML / WebKit / Blink (1998)
                                                                                                                                2. 2

                                                                                                                                  This is the actual world we live in. This is what people really think.

                                                                                                                                  1. 1

                                                                                                                                    Why does everyone hate CMake so much?

                                                                                                                                    I find it far easier to understand than Makefiles and automake.

                                                                                                                                    Plus it runs on ancient versions of Windows (like XP) and Linux, which is not something most build systems support. And it mostly “just works” with whatever compiler you have on your system.

                                                                                                                                    1. 20

                                                                                                                                      Makefiles and automake are a very low bar.

                                                                                                                                      Cargo can’t do 90% of the things that CMake can, but it’s so loved because most projects don’t need to write any build script at all. You put your files in src/ and they build, on every Rust-supported platform. You put #[test] on unit tests, and cargo test runs them, in parallel. You can’t write your own doxygen workflow, but cargo doc gives you generated reference docs out of the box for every project. The biggest criticism Cargo gets about dependency management is that it’s too easy to use dependencies.

                                                                                                                                      This convention-over-configuration makes any approach requiring maintaining a DIY snowflake build script a chore. It feels archaic like writing header files by hand.
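The zero-configuration workflow described above can be sketched with a single hypothetical source file (illustrative code, not from the thread): drop it in src/lib.rs and `cargo test` discovers and runs the `#[test]` below with no build script at all.

```rust
// src/lib.rs — in a Cargo project, the directory layout *is* the build
// configuration. This helper and its test are hypothetical examples.

/// Parse the major component of a "major.minor" version string.
pub fn parse_major(v: &str) -> Option<u32> {
    v.split('.').next()?.parse().ok()
}

#[test]
fn parses_major_component() {
    assert_eq!(parse_major("1.74"), Some(1));
    assert_eq!(parse_major("abc"), None);
}
```

Running `cargo test` compiles the crate and executes the test harness in parallel; no Makefile, CMakeLists, or test-runner registration is involved.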

                                                                                                                                      1. 15

                                                                                                                                        I find it far easier to understand than Makefiles and automake.

                                                                                                                                        Why does everyone hate being punched in the face? I find it far more pleasant than being ritually disemboweled.

                                                                                                                                        And it mostly “just works” with whatever compiler you have on your system.

                                                                                                                                        CMake is three things:

                                                                                                                                        • A set of core functionality for running some build tasks.
                                                                                                                                        • A truly awful macro language that’s been extended to be a merely quite bad configuration language.
                                                                                                                                        • A set of packages built on the macro language.

                                                                                                                                        If the things that you want to do are well supported by the core functionality then CMake is fairly nice. If it’s supported by existing packages, then it’s fine. If it isn’t, then extending it is horrible. For example, when using clang-cl, I was bitten by the fact that there’s hard-coded logic in CMake that adds the /TC or /TP flags to override the language detection based on the filename and tell it to use C or C++. This made it impossible to compile Objective-C. A few releases later, CMake got support for Objective-C, but I can’t use that support to build the Objective-C runtime because it has logic in the core packages that checks that it can compile and link an Objective-C program, and it can’t do that without the runtime already existing.

                                                                                                                                        I’ve tried to use CMake for our RTOS project, but adding a new kind of target is incredibly hard because CMake’s language is really just a macro language and so you can’t add a new kind of object with properties of it, you are just using a macro language to set strings in a global namespace.

                                                                                                                                        I’ve been using xmake recently and, while there’s a lot I’ve struggled with, at least targets are objects and you can set and get custom properties on them trivially.

                                                                                                                                        1. 3

                                                                                                                                          Plus it runs on ancient versions of Windows (like XP)

                                                                                                                                          Only versions no one wants to run anymore (i.e. 3.5 and older).

                                                                                                                                          1. 3

                                                                                                                                            It’s an entire set of new things to learn, and it generates a makefile, so I worry that I’ll still have to deal with the problems of makefiles on top of the new problems CMake brings.

                                                                                                                                        1. 3

                                                                                                                                          An industry luminary even gave a presentation at BlackHat saying that my claimed performance (2-million packets-per-second) was impossible, because everyone knew that computers couldn’t handle traffic that fast. I couldn’t combat that, even by explaining with very small words “but we disable interrupts”.

                                                                                                                                          Groups are/were never wrong, even when they change to the opposite opinion. A recent incarnation of this phenomenon occurred (at the macro level) with the widespread adoption of TypeScript. With it, types were retconned from “useless ceremony” that “slows me down” into a “useful tool for rapid development.”

                                                                                                                                            1. 10

                                                                                                                                              Been in industry almost 20 years. The solo tech stack diagram terrifies me. I don’t see how that is productive at all for side projects.

                                                                                                                                              1. 7

                                                                                                                                                It’s not a side project, it’s for his business.

                                                                                                                                                1. 2

                                                                                                                                                  I was wondering why someone would build something so terrifying for new programmers, and why it was so widely shared. Now it all makes sense!

                                                                                                                                              2. 4

                                                                                                                                                I’ve always liked Ansible for small-to-medium sized projects; it seems to strike a nice balance between simplicity and power. It doesn’t solve the provisioning problem, but IME most shops very rarely need to provision new hardware or services, so doing it manually through AWS, DO or whatever usually works well enough. Also, since it’s basically just an SSH for-loop on steroids it’s pretty easy to debug—just repeat the command manually!

                                                                                                                                                Also, since it doesn’t pretend to be declarative you can easily do things like run a database migration, something that the Cloudformation / K8s world seems to treat as weird and esoteric, requiring a bunch of hacks just to get working.

                                                                                                                                                Am I the only one? I only ever hear about Ansible once in a blue moon, but I don’t know of any alternatives that work better for my use case. Or are there better alternatives I don’t know about?

                                                                                                                                                1. 4

                                                                                                                                                  I’ve tried Ansible only a little bit, but my experience with it is definitely not positive. You say it doesn’t pretend to be declarative, but it kinda does, and you have to work around the bits where the declarativeness breaks down. For example, if you add a cron entry or a user or something like that in your script (or maybe you’re looping over a list and creating each item in it), removing an entry from the script doesn’t delete it from the host. So you have to adjust your script to delete it conditionally (because any given host may or may not have executed the previous version of your Ansible recipe). You also have to take care that your recipe is idempotent, which is not always obvious.
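The create-but-never-delete gap described above can be sketched with a toy model (hypothetical code, not Ansible internals): if a host's cron entries are a set, naively applying a desired list only ever adds, and removal requires diffing against the *previous* version of the recipe.

```rust
use std::collections::HashSet;

// Toy model of "apply a recipe": create-if-missing, never delete.
fn apply_naive(host: &mut HashSet<String>, desired: &[&str]) {
    for e in desired {
        host.insert(e.to_string());
    }
}

// Removal only happens if the previous recipe is diffed against the
// new one — state the naive model doesn't keep.
fn apply_with_diff(host: &mut HashSet<String>, prev: &[&str], desired: &[&str]) {
    let want: HashSet<String> = desired.iter().map(|s| s.to_string()).collect();
    for e in prev {
        if !want.contains(*e) {
            host.remove(*e);
        }
    }
    apply_naive(host, desired);
}

fn main() {
    let mut host = HashSet::new();
    apply_naive(&mut host, &["backup", "rotate-logs"]);
    // Drop "rotate-logs" from the recipe and re-apply:
    apply_naive(&mut host, &["backup"]);
    assert!(host.contains("rotate-logs")); // stale entry survives
    apply_with_diff(&mut host, &["backup", "rotate-logs"], &["backup"]);
    assert!(!host.contains("rotate-logs")); // diffing removes it
}
```

In real Ansible the conditional delete is the explicit `state: absent` task the comment alludes to; the point of the sketch is only that the tool can't infer it for you.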

                                                                                                                                                  And of course you may end up with different systems which have slightly different state. For example, Ansible supports multiple operating systems, but the subtle differences will still show up when you’re writing recipes. And what if users have manually made some changes on a subset of the hosts that your Ansible recipe will run on?

                                                                                                                                                  At work, a colleague wrote some Ansible tasks which were basically unmaintainable, and we had to scrap the entire thing and write a different solution from scratch - but that might just have been the rest of the team’s unfamiliarity with the tool at the time.

                                                                                                                                                2. 3

                                                                                                                                                  Those are mostly fine. That’s just a normal three tier app, plus they outsourced authentication to a service (Cognito) and they outsourced syslog to a service (CloudWatch).

                                                                                                                                                  One thing that does suck is the usability of API Gateway (this is the thing for invoking lambdas from HTTP requests) but eh oh well.

                                                                                                                                                  (Edit: deleted an incorrect paragraph.)

                                                                                                                                                1. 2

                                                                                                                                                  Professionalization, for lack of a better word: tech workers became a flatter workforce rather than a couple thousand creative weirdos.

                                                                                                                                                  a larger percentage of us…just don’t care to think about computers much.

                                                                                                                                                  Glad the author was able to put into words what I couldn’t. Tech sometimes feels like it tries to be about everything except actually programming. You’re weird if you write things without 1000 dependencies, and there’s an army of people eager to bro-splain that you just invented your own framework when you wrote hello world. Rust is really cool now, because we all decided it was finally cool. (We’ve always thought Rust was cool, even when we were ignoring it.)

                                                                                                                                                  It’s all somewhat alienating to me if you can’t tell. :) I mostly don’t pay much attention these days.

                                                                                                                                                  1. 4

                                                                                                                                                    In my experience if this is your life it’s time to get out of California Start-Up Country and work for a company that does something real and concrete for a change, instead of a company that builds frameworks for building frameworks for building frameworks. It’s not perfect, but it’s a start.

                                                                                                                                                    1. 3

                                                                                                                                                      Oh I have! More commenting on the online discourse being pretty shallow overall.

                                                                                                                                                    2. 2

                                                                                                                                                      This article was strangely comforting. I feel like I run into a lot of people with strong opinions, loosely held, and it’s just exhausting. Even if you “win” an argument, sometimes you end up hearing your own argument parroted back with higher intensity than you originally argued with. The even tone in this article is a welcome breath of fresh air.

                                                                                                                                                      1. 2

                                                                                                                                                        In my experience, this pushback mostly comes from insecure engineering managers who got sidetracked from programming into management mostly because they weren’t very good at programming to begin with.

                                                                                                                                                        Good managers are able to tell that an organization owning the code it operates is usually a net asset, and can simplify things a lot by only having the features that are actually needed and integrating straightforwardly with the already existing ecosystem.