Threads for timvisee

      1. 5

        A neat easter egg on this site: scroll down and click on the crowbar at the bottom :) Then click on the zombie and headcrab.

    1. 4

      Firefox has this amazing HTTPS-only mode.

      It blocks HTTP traffic and shows an easy prompt/button to bypass it once if desired.

      1. 2

        Like many others, I used to use the HTTPS Everywhere extension, which was deprecated in 2021 because of the wide availability of these HTTPS-only modes in Firefox and Chrome, so now I use the built-in mode instead.

    2. 6

      It removes so many pains for me.

      Everything just works as expected.

      Or just accept WSL2

      1. 14

        Or just accept WSL2

        A lot of advice assumes WSL2 is better than WSL, but it’s really not that clear cut. WSL is similar to the FreeBSD Linux compat layer: Linux processes under WSL are processes managed by the NT kernel with a different system call layer that uses NT kernel functionality to implement POSIX behaviour. WSL2 is a Linux VM with a load of extra config to make it integrate with Windows.

        WSL isn’t a real Linux kernel and so you may experience some mismatches or missing functionality. WSL2 VMs always boot with a MS-provided kernel and so you may find that it’s missing some features that things targeting your distro of choice expect.

        Both have similar functionality but they’re implemented very differently:

        If you run a Windows .exe from WSL, it just passes it to the kernel’s CreateProcessEx machinery. If you run one from WSL2, it will pass the path via a command channel to something on the host system that will try to run it.

        If you’re accessing the Linux filesystem in WSL, you’re actually accessing an NTFS directory in a special namespace that implements POSIX semantics via a filter driver. The filter drivers make this quite slow from WSL but quite fast from outside. If you’re accessing the WSL filesystem from Windows, it’s just another NTFS directory. If you’re accessing a WSL2 filesystem, it’s a 9p-over-VMBUS share from the guest VM, which is slow.

        If you’re accessing the Windows filesystem in WSL, it’s just another NTFS directory and it’s about as fast as accessing the ‘Linux’ filesystem. If you’re accessing it in WSL2, it’s a remote filesystem shared via 9p-over-VMBUS. If you mmap a file with MAP_SHARED in WSL and MapViewOfFile from Windows, it should just work because they are both using the same buffer cache. If you do the same from WSL2, I don’t believe it will work.
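        The small-file cost difference is easy to measure yourself. A minimal timing sketch (Python; the directories to compare are assumptions, adjust for your own mount points):

```python
import os
import time

def time_small_file_ops(directory, count=200):
    """Create, write, and delete many small files; return elapsed seconds.
    Metadata-heavy workloads like this are where WSL's filter drivers and
    WSL2's 9p-over-VMBUS share differ most."""
    start = time.perf_counter()
    for i in range(count):
        path = os.path.join(directory, f"tmp_{i}.txt")
        with open(path, "w") as f:
            f.write("x" * 64)
        os.remove(path)
    return time.perf_counter() - start

# Compare, e.g., the Linux home directory with the Windows mount
# (the /mnt/c path is an assumed default):
# print(time_small_file_ops(os.path.expanduser("~")))
# print(time_small_file_ops("/mnt/c/Temp"))
```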

        If you open a Windows named pipe or UNIX domain socket from WSL, you’re using the same Windows kernel functionality and so communication between a WSL process and a Windows process will be as fast as between two Win32 processes or between two WSL processes. If you do the same from WSL2, the Windows pipe will be passed to a Hyper-V socket, which will then be connected to a pipe in Linux, which adds two copies and a bunch of latency.

        If you allocate memory rapidly in WSL, the NT kernel will give you memory until the commit charge is exhausted and then abruptly fail. If you allocate memory rapidly in WSL2, the Linux kernel will hit low memory conditions and poke the Hyper-V balloon driver. It will then start killing processes. Eventually, the balloon driver will return more memory. If you’re wondering why clang crashes randomly in WSL2 but then works fine if you try it a second time, this is why.

        I mostly use WSL to run bash, vim, and ssh (which, mostly, connects to a FreeBSD Hyper-V VM). WSL has far less overhead than WSL2 for this use. I also use Docker Desktop, which uses WSL2, and so I can connect from a lightweight WSL instance into a container backed by a full Linux kernel when I need to.

        1. 5

          I thought WSL2 was a deflating sort of sad move. Microsoft is full of smart folks and the WSL design was very clever.

          1. 8

            I’ve actually read some of the WSL code because I wanted to see how easy it would be to add a WSF[reeBSD]. Aside from a commenting style that I’ve never seen anywhere else (functions first have the function definition line, then the doc comment, then the open brace) it’s very clean code. Far more readable than the real Linux kernel or the FreeBSD kernel. I kind-of want to see what an NT kernel with WSL and without the UWP / Win32 layers would look like (mind you, the NT kernel’s memory management layer gives me nightmares and is fully understood, as far as I know, by one person. Its only saving grace is that he is one of the smartest and kindest people that I’ve had the good fortune to work with, if only fairly briefly).

            1. 2

              Aside from a commenting style that I’ve never seen anywhere else (functions first have the function definition line, then the doc comment, then the open brace)

              It’s Python style :)

            2. 1

              I’ve actually read some of the WSL code because I wanted to see how easy it would be to add a WSF[reeBSD]. Aside from a commenting style that I’ve never seen anywhere else (functions first have the function definition line, then the doc comment, then the open brace) it’s very clean code. Far more readable than the real Linux kernel or the FreeBSD kernel. I kind-of want to see what an NT kernel with WSL and without the UWP / Win32 layers would look like (mind you, the NT kernel’s memory management layer gives me nightmares and is fully understood, as far as I know, by one person. Its only saving grace is that he is one of the smartest and kindest people that I’ve had the good fortune to work with, if only fairly briefly).

              The closest I’ve seen is probably Minoca, which looks very much like NT with a POSIX-first personality. You get little touches like its equivalent to the Object Manager being represented as the real FS root, and chroots into a mount to preserve Unix normalcy.

              More amusingly, years ago, I remember an April Fools’ joke about GeNToo, a Gentoo distribution that used a pared-down Windows install running Interix/SFU with a Gentoo prefix installed. I don’t think it was real, sadly.

          2. 3

            I dunno I like WSL2 and the perf for me is way better than WSL1.

            Also, I get graphics and audio compat via Wayland which wasn’t in WSL1.

            To each their own I guess!

            1. 3

              Oh sure, if you want to use software to do useful things, fine!

        2. 2

          Have you considered writing your workflow / environment choices up in a blog post? Might be useful to others.

          1. 2

            Not really. I just replaced my aging MacBook Pro and I very much hope to stop using Windows soon. A lot of the choices that I make are probably not very interesting to other people because I work on:

            • FreeBSD (and Linux if I have to).
            • Experimental hardware.
            • LLVM / clang

            My choices are quite different from those of someone working in a different space. They’re also shaped a lot by habit. There are some things I like about vim, but mostly I like the fact that I’ve written hundreds of thousand lines of code and thousands of pages of prose in it, so I use it without thinking. Without that, I’d probably use something different.

            Happy to answer questions but, in general, replicating my setup would be a bad idea for most people.

            1. 1

              I specifically meant the distinctions between WSL and WSL2 and how you use both to best effect.

              1. 1

                I mostly use Hyper-V to run a FreeBSD VM, and do real work in there. I use WSL to run bash and ssh, to connect to my FreeBSD VMs, so I’m probably not very representative here. If I were, the folks who shipped WSL2 would have had an incentive to ship a FreeBSD version.

      2. 3

        Releasing packages to it is also a breeze. A lot better than to Chocolatey or WinGet.

        1. 1

          Huh? Releasing to winget is just a PR on GitHub.

      3. 2

        Thanks for reminding me! I forgot the package management section :)

        I use winget because it’s MSFT’s tool but Scoop and Chocolatey are amazing.

        1. 3

          Personally, I prefer Scoop because it does not litter the OS with stuff scattered everywhere. It keeps everything nice and tidy in just one folder.

          1. 2

            Updated with Scoop and Chocolatey mention. Thanks again!

    3. 2

      Probably not what you’re looking for, but prs.

      It’s a CLI password store that uses GPG. You can open and edit files from the store. Plaintext is opened from /dev/shm to prevent it touching the disk. You can set EDITOR to your favorite editor globally.

      export EDITOR=gedit    # pick your favorite editor
      prs edit myfile        # decrypt, edit in $EDITOR, re-encrypt
      prs show myfile        # decrypt and print

      It’s a password store, but it can function as a notepad library.

    4. 42

      I do not understand why “Don’t spy on people without their consent” is such a hard thing for programmers to accept.

      1. 21

        On the other hand, I don’t understand how collecting anonymous usage data that is trivial to opt out of is at all equivalent to spying, or how it is harmful to anyone. I was hopeful when reading the original post that having an example of a well designed anonymous telemetry system would encourage other people to adopt that approach, but given it wasn’t treated any differently than non-anonymous telemetry by the community, I don’t know why anyone would go through the effort.

        1. 23

          There is no such thing as “anonymous data” when it’s paired with an IP address.

          Even when it’s trivial to opt out, it’s usually extremely difficult to never use the software in a context where you haven’t set the opt-out flag or whatever. Opting out for one operation might be trivial, remaining opted out continuously across decades without messing up once is non-trivial.

          Just. Don’t. Spy. On. People. Without. Consent.

          1. 7

            I agree an IP address is not anonymous, which is why this system doesn’t collect it. Most privacy laws also draw the line for requiring consent at collecting PII, and I think that’s a reasonable place to draw it.

            Most software and websites I use have far more invasive telemetry than this proposal, and I think my net privacy would be higher taking an approach like Go proposed rather than the status quo, which is why I was excited about it being a positive example of responsible telemetry. Good for you if you can go decades without encountering any of the existing telemetry that’s out there.

            1. 12

              How does the telemetry get sent to Google’s servers in a way which doesn’t involve giving Google the IP address?

              I agree that website telemetry is also an issue. But this discussion is about Go. There is no good example of responsibly spying on users without their consent.

              1. 11

                You do have to trust that Google won’t retain the IP addresses, but the Go module cache also involves exposing IP addresses to Google. I think “on by default, but turn it off if you don’t trust Google” is reasonable. I also trust that the pre-built binaries don’t contain backdoors or other bad code, but if you don’t want to trust that, you can always compile the binaries from source.

                Anyways, I’m not trying to change your mind, just trying to explain why some people don’t consider anonymous telemetry that’s opt-out to be non-consensual spying.

          2. 3

            guidance of both GDPR and CCPA is that an IP address is not considered PII until it is actively correlated / connected to an individual.

            None of the counters that are proposed to be collected contain your name, email, phone number or anything else that could personally identify you.

            1. 3

              IANAL, but collecting data associated with an IP address (or some other unique identifier) definitely requires consent under the GDPR.

              An IP address or UUID is considered pseudonymous data:

              ‘pseudonymisation’ means the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person;


              Pseudonymous data is subject to the GDPR:

              What differs pseudonymisation from anonymisation is that the latter consists of removing personal identifiers, aggregating data, or processing this data in a way that it can no longer be related to an identified or identifiable individual. Unlike anonymised data, pseudonymised data qualifies as personal data under the General Data Protection Regulation (GDPR). Therefore, the distinction between these two concepts should be preserved.


              1. 1

                That is some really creative copy pasting you did there. I am also not a lawyer but I don’t think it is super relevant for this proposal since they follow the first principle of data collection: “do not collect personal data”.

                Imagine the discussion goes like this:

                You: “Hello Google, I am a Go user and according to the GDPR I would like you to send me a dump of my personal data that was sent via the Go tooling telemetry. To which I OPTED-IN when it was released.”

                Google: “That data is anonymized. It is not connected to any personal data. We have the data you submitted but we cannot connect it to individuals.”

                You: “Here is my IP address, will that help?”

                Google: “No, we do not process or store the IP address for this data. (But thank you! now we know your IP! Just kidding!)”

                You: “Here is the UUID that was generated for my data, will that help?”

                Google: “Unfortunately we cannot verify that that is actually your UUID for this telemetry, and thus we don’t know whether you are requesting data for yourself.”


                1. 1

                  That is some really creative copy pasting you did there.

                  You can find all this in the GDPR. At any rate, I wasn’t criticizing The Go proposal, only the statement:

                  guidance of both GDPR and CCPA is that an IP address is not considered PII until it is actively correlated / connected to an individual.

                  But I see now that this is a bit ambiguous. I read it as saying that analytics data associated with IP addresses is not PII, which is not really relevant, since that is pseudonymisation according to the GDPR, and pseudonymous data is subject to the GDPR. But I think what you meant (which becomes clear from your example) was that in this case there is no issue: even though Google may temporarily have your IP address (they have to if you contact their servers), they are not storing it with the analytics. I completely agree that the analytics data is then not subject to the GDPR. (Still IANAL.)

      2. 6

        For programmers, or the rest of the business?

    5. 3

      Yes!! Great article.

      People don’t expect better

      I really am pulling my hair out because of this. People just accept everything these days, software being slow and buggy as hell. It is horrendous.

    6. 1

      Small release but great nonetheless.

      I love seeing internal improvements such as with std::sync::mpsc.

      1. 2

        I haven’t used the mpsc module (multi-producer single-consumer) myself, but an update like this seems like a good sign.

        To lay out my reasoning: If the task remains the same, the implementation changes, and the API can remain the same — then the API is probably at the task level, and doesn’t make you think about the incidentals of its implementation. Which I have always appreciated, whenever I’ve coded using such modules.
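        Rust aside, the same task-level shape shows up in other standard libraries; a quick Python sketch with queue.Queue (an analogue, not mpsc itself) shows how callers only ever touch send/receive-style calls, so the buffer strategy underneath is free to change:

```python
import queue
import threading

# Two producers, one consumer: the API surface is just put() and get();
# nothing here depends on how Queue buffers messages internally.
q = queue.Queue()

def producer(name, n):
    for i in range(n):
        q.put((name, i))

threads = [threading.Thread(target=producer, args=(f"p{t}", 3)) for t in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

received = [q.get() for _ in range(6)]
print(len(received))  # all 6 messages arrive, in some interleaved order
```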

        1. 4

          mpsc has an interesting history: as far as I know, this is the third major implementation change for it. IIRC the first one was a ring buffer, the second was a linked list, and the latest seems to be… different things depending on context? It’s a pretty cool example of crossbeam acting as a testbed for the std lib, though.

    7. 7

      Fantastic overview and comparison of methods to tweak compile time. I hoped to see a magical solution as well.

      I must say that Rust’s safety makes up for it, for me.

    8. 4

      Awesome, thanks for sharing! age is super useful for this due to its simplicity.

      I’m therefore also planning to add age support in prs soon!

    9. 2

      I’m not seeing this, at all, with a similar setup.

      I wonder if it isn’t just the Microsoft URL security/malware scanner visiting the URLs, rather than Bing indexing them and exposing them to random visitors.

      When using magic links like this, always use a POST, or require it to be opened in the same client (uniquely identified by a cookie).
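      The “same client” variant can be sketched without any web framework: when the link is requested, set a random cookie and bind the emailed token to it, so a scanner that merely fetches the URL fails verification. All names here are hypothetical, a sketch of the idea rather than a drop-in implementation:

```python
import hashlib
import hmac
import secrets

def issue_magic_link(base_url):
    """Return (link, cookie_secret, stored_mac). Set cookie_secret as a cookie
    in the requesting browser and keep stored_mac server-side."""
    token = secrets.token_urlsafe(16)
    cookie_secret = secrets.token_urlsafe(16)
    mac = hmac.new(cookie_secret.encode(), token.encode(), hashlib.sha256).hexdigest()
    return f"{base_url}?token={token}", cookie_secret, mac

def verify_magic_link(token, cookie_secret, stored_mac):
    """Only succeeds when the token arrives from the client holding the cookie."""
    mac = hmac.new(cookie_secret.encode(), token.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, stored_mac)
```

      A mail scanner following the link presents the token with no cookie (or a different client’s cookie), so verification fails; the browser that requested the link holds both halves.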

      1. 6

        Fun story:

        Our company has put in place security training, which includes sending fake phishing emails that we are supposed to report without clicking on the “free iPad” link. For the first few months, everybody had an abysmal score of 100% of malware links clicked.


        Office 365’s security scanner systematically followed the link to inspect it for malware. So the poorly designed training software assumed that we had clicked it. We complained and the training provider fixed it. I will note that this was not a free account but a paid, big-business email service. They are certainly not the only security service to do this.

        So yes: “links in emails are opened by humans” is another one of those “things programmers believe” that are completely wrong. You should avoid doing anything sensitive with them.

        1. 3

          Sending fake phishing emails to your own employees is a good sign that your CTO has no real work to do and should be fired. It’s just obviously a waste of time. Just add the external link banner to the emails and be done with it.

        2. 2

          This is really interesting to me. I’m in the middle of implementing something like this and in my research keep finding stories like yours.

          I’m thinking of emailing a one-time code to the user instead of sending them a link now.

    10. 20

      Honestly, I don’t really have many problems with GitHub. It works decently, and if it goes to hell, I can just push somewhere else and deal with the fallout later. Actually finding projects/code is useful with code search (ignoring ML sludge), and I really don’t see how people can get addicted to the whole stars thing. Besides, if it’s public, something like Copilot will snarf it anyways.

      1. 23

        I was a long-time holdout from GitHub. I pushed every project I was contributing to and every company that I worked for to avoid it because I don’t like centralised systems that put control in a single location. I eventually gave up for two reasons:

        1. It’s fairly easy to migrate from GitHub if you ever actually want to. Git is intrinsically decentralised. GitHub Pages and even GitHub wikis are stored in git and so can just be cloned and taken elsewhere (if you’re sensible, you’ll have a cron job to do this to another machine for contingency planning). Even GitHub Issues are exposed via an API in machine-readable format, so you can take all of this away as well. I’d love to see folks that are concerned about GitHub provide tooling that lets me keep a backup of everything associated with GitHub in a format that’s easy to import into other systems. A lot of my concerns about GitHub are hypothetical: in general, centralised power structures and systems with strong network effects end up being abused. Making it easy to move mitigates a lot of this, without requiring you to actually move.

        2. The projects I put on GitHub got a lot more contributions than the ones hosted elsewhere. These ranged from useless bug reports, through engaged bug reports with useful test cases, up to folks actively contributing significant new features. I think the Free Software movement often shoots itself in the foot by refusing to compromise. If your goal is to increase the amount of Free Software in the world, then the highest impact way of doing that is to make it easy for anyone to contribute to Free Software. In the short term, that may mean meeting them where they are, on proprietary operating systems or other platforms. The FSF used to understand this: the entire GNU project began providing a userland that ran on proprietary kernels and gradually replaced everything. No one wants to throw everything away and move to an unfinished Free Software platform, but if you can gradually increase the proportion of Free Software that they use then there becomes a point where it’s easy for them to discard the last few proprietary bits. If you insist on ideological purity then they just give up and stay in a mostly or fully proprietary ecosystem.
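        That Issues export really is just a paginated REST endpoint; a hedged sketch of the backup loop (the HTTP fetcher is injected here so the pagination logic stands alone; bring your own client and auth token):

```python
import json

def fetch_all_issues(fetch_page):
    """Walk numbered pages until an empty one; return every issue dict.
    fetch_page(n) should GET something like
    https://api.github.com/repos/OWNER/REPO/issues?state=all&page=n
    and return the decoded JSON list (HTTP wrapper left to the reader)."""
    issues, page = [], 1
    while True:
        batch = fetch_page(page)
        if not batch:
            return issues
        issues.extend(batch)
        page += 1

def dump_backup(issues, path):
    # A machine-readable snapshot that other systems can import.
    with open(path, "w") as f:
        json.dump(issues, f, indent=2)
```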

        1. 2

          Even if it’s possible, even easy, to copy your content from Github when they cross some threshold you’re no longer ok with, there will be very little to copy to unless we somehow sustain development of alternatives during the time it takes to reach that threshold.

          IMHO it would be better if the default was at least “one of the three most popular” rather than “GitHub, because that’s what everyone uses”.

      2. 7

        If you use their issue tracker, pull requests and so on, those will be lost too. They aren’t easily pushable to another git host. Such things can tell a lot about a project and the process of it getting there, so it would be sad if that was lost.

    11. 2

      Can I just say thank you so much for the detailed README! Even big professional projects are sometimes difficult to get started with and understand from just the README. I was already familiar with FF Send, but I feel like even if I wasn’t, I would understand it deeply and know how to use it before I even cloned the repo.

      And also the tool is awesome! I was definitely sad when Mozilla shut down their public one.

      1. 2

        Thanks a lot for the wholesome comment! :)

    12. 5

      Dev here! Happy to answer any questions.

    13. 10

      This is one of the reasons I started using DuckDuckGo. It doesn’t have these garbage widgets that suddenly pop up 2 seconds after the page is ‘loaded’, making everything jump around and causing misclicks.

      1. 5

        Funny you should say that, because I have had that exact problem with DDG because of their “instant answers” or whatever they call it that pop in at the top of the results.

        1. 1

          DDG is similar in my opinion.

          At least they have a decluttered version, DDG Lite, which I switched to because I’m so fed up with the lack of results (only 10 after the initial search) plus other features I don’t like: the “more results” button, and embedded image or video results above the actual results (there are already tabs for images and videos).

          I set up 2 keyword searches (in firefox) - one for lite and one for regular search pages.

          The good thing about keyword searches is that you can take full advantage of their URL parameter support to control the look, feel and functionality (including turning instant answers off). Some of those options may no longer work, though most of them do.

    14. 1

      I’m having some serious annoyances with their window management (related to alt+tab, full-screen, windows vs apps) too. I don’t think they are bugs, just the way it’s implemented. I should make a list some time.

      1. 2

        When I use a mac, I have to install a program that changes alt+tab to be more like Windows/Linux, I think it’s actually called “Alt-Tab”.

        1. 1

          My favourite feature of that program is that it can set the timeout of the popup window to 0. That delay (which has unfortunately been copied by KDE at some point, too) is the most annoying anti-feature of them all and so far the only thing I really had to work around on macOS because it was driving me nuts.

          According to Internet wisdom (no idea if that’s the actual motivation), the idea is that if you’re hitting Alt-Tab just once, you’re likely doing it in order to switch to the most recent window because you’re alt-tabbing back and forth between two apps. So in order to minimise the amount of visual noise, the icon list window is not shown immediately, but popped up after a certain delay.

          That only really works if you have no more than two applications open in the first place, though, or if you alt-tab between two of your open applications every thirty seconds or so, and nothing else. If you do it less frequently (write code in a window, compile in a terminal window, watch some output in another one maybe etc.), by the time you alt-tab again, you’ve certainly forgotten what the next window in the stack is. So in practice, almost all the time, I find myself either pressing alt-tab for too little time and switching to the wrong app (because I’ve alt-tabbed to, say, the music player, but I’ve forgotten that I did, so alt-tabbing takes me to the music player again instead of the terminal). Or pressing it longer than I need to and tabbing way past the window I meant to switch to, because it was very close to the top of the stack, and now I have to alt-tab my way through the whole bloody list again.

          inb4 “but virtual workspaces”: even with animations disabled in Accessibility options, the transitions are really slow (with animations on it’s unbearable, if I move back and forth a couple of times I get dizzy). I swear to God it’s like everyone in Cupertino has PTSD from Mac OS 9’s multitasking and doesn’t run more than two apps at a time because who knows what might happen.

          1. 1

                I might be misunderstanding, but I usually hit “option+tab”, and then release tab, but keep option down. This keeps the most recently accessed window selected, but shows the UI with all the windows. Then while still holding “option”, I either release it and switch directly, or keep hitting tab to get the window I want. Alternatively, I then also start holding shift down and hit tab to go backwards. At this point it’s just muscle memory - I don’t really think about it.

                The model of switching between applications instead of windows still annoys me though. I’ve switched between Windows, Linux, and Mac enough that regardless of the platform I’m on, I forget and accidentally start using the wrong shortcut to switch (on Windows accidentally trying “alt+`”, and on Mac forgetting that I need to use “option+`”, and trying to use “option+tab” to switch browser windows).

            My general philosophy is that I don’t think any model is correct, they’re all just arbitrary designs. So I do my best to learn the platform shortcuts, and if something still annoys me enough I will try and find a hack to change it.

            1. 1

              Nah, you got that right 100%, I just never managed to get myself to do what you’re doing. Having used systems with practically zero latency when switching windows since like forever, when the damn thing doesn’t show up immediately, I’m forever tempted to think it didn’t work, like, maybe I missed the Tab key, pressed it right on the edge or it didn’t go all the way through or whatever, especially since the rest of the interface is generally pretty snappy.

                  I’m not a big fan of the app/window split either but I could probably get used to it. The timeout, on the other hand, feels really wrong to me. I use Electron applications that take less time to start up than it takes to pop up a window list; my brain is just unable to cope. Maybe I’ve got some weird and super-specific form of OCD, hell knows :-).

    15. 17

      I’d also love to see an entry for Firefox with uBlock Origin. I have a lot of reasons not to use Brave, while Firefox seems to do quite bad with these synthetic tests. I’m sure that with such plugin, it would do much better.

      1. 10

        Another interesting comparison would be with FF with multi account containers and third party cookies disabled. It’s one of those “removing a bug class” ideas. What use is tracking for Facebook if they’ll only see their own pages in their own container.

        It makes a few of the entries in the table irrelevant. Ok, you get some cookies or signatures. They’re not shared between pages, so they’re not going to cause privacy issues.

        1. 2

          I believe the first-party isolation setting in Firefox makes the security part of multi-account containers obsolete (though they’re still useful for multiple accounts). CanvasBlocker may be a more valuable second pick.

          1. 3

            True, but first party iso has some downsides. For example it breaks some cases of SSO.

            1. 3

              ISTM that any sort of privacy or harm mitigation on the web cuts across how it fundamentally works, and as such will always cause breakage. This seems to put anyone trying to make things better in the privacy direction in an impossible position.

            2. 1

              This is true, but it’s often safer to just have a separate password and 2FA.

              1. 4

                For companies with many employees, SSO allows better security through things like easier offboarding, enforcing 2fa policies, forced credentials rotation on compromise, access auditing, etc. For a single person, sure, use a separate account rather than FB login. But for corps you want the opposite (still not FB though :-) )

    16. 8

      You can do quite a few smart things with native bookmark keywords, as they also support search queries:
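      For instance, a %s placeholder in a bookmark’s URL is substituted with whatever you type after the keyword in the address bar (the keyword and target here are just an illustration):

```
Location: https://en.wikipedia.org/wiki/Special:Search?search=%s
Keyword:  w
```

      Typing “w rust” then searches Wikipedia for “rust”.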

    17. 6

      I’m sorry, but this is stupid. The reasoning you give doesn’t make going without TLS better. Serving LE TLS from a shared box is still much better than not using TLS at all, for a bunch of reasons.

      E.g., now connections to you are open to attack from every point they pass through, rather than just a single (?) point. You can’t assume ISPs will just take care of link security.

    18. 2

      Awesome! Great results. Thanks for linking my 2020 post as well.

      This year I’m down to 50ms, though I still have to finish the last two days. Sadly I’m busy.

      Funnily enough I did get better results with Dijkstra for day 15. I wonder which is better.

      1. 3

        Whoa, 50ms is very impressive. Well done.

    19. 13

      Wow, what a masterpiece of an article with numerous great visualizations! I learned a lot!

      1. 2

        Be sure to check out his archives. He has a lot of articles with amazing interactive visualizations.

      2. 2

        I’m glad I read your reply, I was planning on skipping the article because I assumed I basically knew how GPS worked. Not only is it beautifully presented, it builds up each step very carefully and pointed me at quite a few bits of the problem that I’d skipped in my mental model (why the orbits were chosen and how the time synchronisation works, for example). Even the bits that I did know well, I thoroughly enjoyed reading the description and playing with the animations.

        I’m going to keep this as my gold standard reference for how to do scientific communication.

        I still find it amazing that this was launched at a time when there was still sufficient uncertainty about relativity that they built the system to operate in Einstein-was-right and Einstein-was-wrong modes, just in case they accidentally disproved his theories (as I recall, the first GPS satellites were the first clocks put into orbit that were sufficiently sensitive to measure relativistic effects). The fact that I can now buy a cheap consumer device that can receive signals from four such systems and tell me my precise location anywhere in the world is a phenomenal achievement. The fact that four such systems need to exist because four large political entities don’t trust the other three is much less of an achievement for the species.

        1. 1

          GPS is just so well-designed, you put it well.