Threads for xfbs

    1. 1

      I considered doing something similar, but using Qdrant and vector search. That could also be used to show similar stories and/or suggest tags for new stories.

    2. 4

      In order to minimize the impact of logging on the main application – and therefore on the host – it’s important to avoid large batches of work. They may clog the PCI bus at the wrong moment.

      I don’t know what kind of programming this guy does, but I feel like we are not on the same level — I mostly write async backends where the cost of logging is very low compared to network latency and such. I don’t even think about clogging the PCI bus — I don’t think I am anywhere close to the amount of throughput there, considering what the PCIe transfer rate is.

Awesome article, but it would be good to give some examples of where these techniques are needed and why; otherwise it feels over-engineered (and I am sure the author had good reason to use these techniques)!

      1. 2

        High frequency trading, probably. So not actually useful, but the ideas and ways of thinking may be translatable to other domains, even if not the exact specific techniques.

        1. 1

          I would not say that it is not useful — some of the ideas could be applicable to other domains. High-traffic deployments can benefit from cheap logging, and so can embedded systems. But it might be useful to add a disclaimer to say that this is not something we normies need (not yet anyways, but you can be sure I’ll cook up some project idea to have an excuse to play with it at some point, hehe).

    3. 2

TL;DR: Similar to “What If OpenDocument Used SQLite?”.

Instead of building an XML file with assets (optionally inside a ZIP file), just use an embedded database. Updates are a lot cheaper. Interoperability is a lot easier. You get atomicity “for free”.
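To make the “cheap updates, free atomicity” point concrete, here is a minimal sketch in C against the real SQLite API (the `report.odx` filename and the `paragraphs` table are made up for illustration): editing one paragraph touches only the affected rows, inside a transaction that either fully commits or fully rolls back.

```c
/* Sketch: a "document" as a SQLite database. Editing one paragraph
 * is a single-row UPDATE inside an atomic transaction, instead of
 * rewriting a whole ZIP archive. Build with: cc doc.c -lsqlite3 */
#include <sqlite3.h>
#include <stdio.h>

int main(void) {
    sqlite3 *doc;
    if (sqlite3_open("report.odx", &doc) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(doc));
        return 1;
    }
    char *err = NULL;
    int rc = sqlite3_exec(doc,
        "BEGIN;"
        "UPDATE paragraphs SET text = 'revised text' WHERE id = 42;"
        "COMMIT;",                 /* atomic: all-or-nothing on disk */
        NULL, NULL, &err);
    if (rc != SQLITE_OK) {
        fprintf(stderr, "update failed: %s\n", err);
        sqlite3_free(err);
    }
    sqlite3_close(doc);
    return rc == SQLITE_OK ? 0 : 1;
}
```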

      1. 2

        Yeah I saw that being reposted and remembered having read this, so I posted it as a reply.

    4. 7

      Are there multiple interoperable implementations of the sqlite file format? Is the format specified somewhere? Does the format remain backwards compatible indefinitely?

      I don’t know the answers, but it feels like these are more important questions when considering a document format.

      1. 6

        I think your latter two questions are addressed right on the SQLite home page:

        The SQLite file format is stable, cross-platform, and backwards compatible and the developers pledge to keep it that way through the year 2050.

        1. 2

          I’m probably just anxious after the sqlite2 -> sqlite3 breakage, though maybe that taught them the value of keeping things stable.

          1. 4

Would you care to elaborate? Docs suggest that sqlite3 was released on 2004-08-09, and I have not read anything about instability or migration issues.

          2. 4

            That’s a long time to be anxious for, it may be time to let that go ;-)

      2. 2

        D. Richard Hipp addressed that in a comment on Hacker News: https://news.ycombinator.com/item?id=37558809

        SQLite file format spec: https://www.sqlite.org/fileformat2.html

        Complete version history: https://sqlite.org/docsrc/finfo/pages/fileformat2.in

Note that there have been no breaking changes since the file format was designed in 2004. The changes shown in the version history above have all been one of (1) typo fixes, (2) clarifications, or (3) filling in the “reserved for future extensions” bits with descriptions of those extensions as they occurred.

    5. 13

It is both scary and funny that the biggest commercial operating system requires ugly hacks (such as code injection, or writing temporary JavaScript scripts) just to let you delete a file (which happens to be currently executing).

      In Linux and macOS, you are free to delete files. Even if they are open or being executed. It’s as simple as that. No hacks required!

But an even bigger issue is that something such as an uninstaller even exists. The fact that, for every piece of software you release, you need to write not only the software itself but also a separate program to install and uninstall it is crazy! Even though they are not perfect, Linux’s package managers are amazing at solving that problem. MacOS is arguably even easier, you literally just copy a .app file into Applications and it’s there, and you delete it and it’s gone! ✨Magic✨

      </rant>

      1. 8

        MacOS is arguably even easier, you literally just copy a .app file into Applications and it’s there, and you delete it and it’s gone! ✨Magic✨

I’ve never been convinced this really works right when the app still leaves behind things like launchd plists that it automatically created…

        1. 3

          True, I have experienced that as well. It is not very common thankfully.

          Also, some applications do require installers even on macOS. An example (shame on you!) is Microsoft Office for Mac. At least those are standardized, but it is annoying. I will not install software that requires an installer on any of my systems.

      2. 4

Windows has the technology: it’s called “Windows Installer” and it’s built into the OS. However, it requires using an MSI file, which people don’t like because of the complex tooling.

More recently there is MSIX, which simplifies things greatly while having more features, but people don’t like it because it requires signing.

        1. 6

          Kind of. The root problem here is that you cannot, with the Windows filesystem abstractions, remove an open file. With UNIX semantics, a file is deleted on disk after the link count drops to zero and the number of open file descriptors to it drops to zero.
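Those UNIX semantics are easy to demonstrate with a small sketch in C (nothing hypothetical here, this is plain POSIX):

```c
/* unlink() removes the name immediately, but the data survives
 * until the last open file descriptor is closed. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("scratch.txt", O_CREAT | O_RDWR | O_TRUNC, 0600);
    if (fd < 0) { perror("open"); return 1; }
    write(fd, "hello\n", 6);

    unlink("scratch.txt");            /* link count -> 0: name is gone */

    char buf[6];
    lseek(fd, 0, SEEK_SET);
    ssize_t n = read(fd, buf, sizeof buf); /* still works via the fd */
    printf("read %zd bytes after unlink\n", n);

    close(fd);  /* open count -> 0: now the blocks are actually freed */
    return 0;
}
```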

This is mildly annoying for uninstallation because an uninstaller can’t uninstall itself. The traditional hack for this was to use a script interpreter (cmd.exe was fine) that read the script and then executed it. This sidesteps the problem by running the uninstaller in a process that is not part of the thing being uninstalled. MSIs formalise this hack by providing the uninstall process as a thing that consumes a declarative description.

It’s far more problematic for updates. On *NIX, if you want to replace a system library (e.g. libc.so), you install the new one alongside the old, then rename it over the top. The rename is atomic (if power goes out, either the new version will be on disk or the old one), and any running processes keep executing the old one while new processes load the new one. You probably want to reboot at this point to ensure that everything (from init on down) is using the new version, but if you don’t, the old file remains on disk until the open count drops to zero. You can update an application while it’s running, then restart it and get the new version.
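A sketch of that update pattern in C, with made-up file names and minimal error handling:

```c
/* Write the new version under a temporary name, flush it, then
 * atomically rename() it over the old one. A crash leaves either
 * the old file or the new one on disk, never a half-written mix. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("libfoo.so.new", O_CREAT | O_WRONLY | O_TRUNC, 0755);
    if (fd < 0) { perror("open"); return 1; }
    /* ... write the new library's contents to fd here ... */
    fsync(fd);                    /* ensure the bytes are on disk first */
    close(fd);
    if (rename("libfoo.so.new", "libfoo.so") != 0) {
        perror("rename");
        return 1;
    }
    /* Running processes keep the old inode alive; new ones see this file. */
    return 0;
}
```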

          On Windows, this is not possible. You have to drop to a mode where nothing is using the library, then do the update (ideally with the same kind of atomic rename). This is why most Windows updates require at least one reboot: they drop to something equivalent to single user mode on *NIX, replace the system files, then continue the boot (or reboot). Sometimes the updates require multiple reboots because part of the process depends on being able to run old or new versions. This is a big part of the reason that I wasted hours using Windows over the last few years, arriving at work and discovering that I needed to reboot and wait 20 minutes for updates to install (my work machine was only a 10-core Xeon with an NVMe disk, so underpowered for Windows Update), whereas other systems can do most of the update in the background.

          1. 3

This is mildly annoying for uninstallation because an uninstaller can’t uninstall itself

I think this is only half-true, because the Win32 API gives you “delay removal until next reboot” (MOVEFILE_DELAY_UNTIL_REBOOT), so it should be possible for the uninstaller to uninstall the application, and then register itself, along with its directory, for removal at the next reboot. Then Windows itself will remove the uninstaller on the next reboot.
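For reference, the call looks roughly like this; a minimal sketch with a hypothetical path (MoveFileExA with a NULL destination and MOVEFILE_DELAY_UNTIL_REBOOT is the real Win32 idiom for this):

```c
/* Passing NULL as the destination with MOVEFILE_DELAY_UNTIL_REBOOT
 * schedules the file for deletion at the next boot (needs admin;
 * recorded in the PendingFileRenameOperations registry value). */
#include <windows.h>
#include <stdio.h>

int main(void) {
    if (!MoveFileExA("C:\\Program Files\\MyApp\\uninstall.exe", /* hypothetical path */
                     NULL, MOVEFILE_DELAY_UNTIL_REBOOT)) {
        fprintf(stderr, "MoveFileEx failed: %lu\n", GetLastError());
        return 1;
    }
    return 0;
}
```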

On servers this could mean that it will be removed next month, but that is a theoretical problem, not a real one.

            1. 1

              Windows servers list “application maintenance” as a reason for a reboot, so it’s not culturally weird to reboot after an application update.

          2. 2

            MSIs formalise this hack by providing the uninstall process as a thing that consumes a declarative description.

            Yep, that was my point. Or to put it another way, Windows can handle the management of a package so you don’t have to. Which was the complaint in the OP.

But on your point, it is totally possible to do in-place updates to user software. On modern Windows most files can be deleted even without waiting for all handles to close, and any executables you can’t immediately delete (because they are running) can be moved. The problem is software that holds file access locks. Unfortunately, standard libraries are especially guilty of doing this by default, even newer ones like Go’s, for some inexplicable reason.

        2. 2

True, arguably Windows also has an app store nowadays, plus NuGet and WinGet. I did not know about MSIX! Maybe a bit of an XKCD 927 situation there.

          1. 4

            Windows also has:

            So the existence of installers/uninstallers is a “cultural” thing, not a technical necessity.

            1. 1

“If you want to use our product, install [my chosen package manager]” is pretty non-viable. I write the installer for a game; none of that would be an option.

              1. 3

                Sure you do. You just call it “Steam” instead.

          2. 2

WinGet simply downloads installer programs and runs them. This is visible in its package declarations.

NuGet is a .NET platform development package manager, right? Like Maven for the JVM, it is not intended to distribute finished programs but libraries that can be used to build a program. But perhaps it can be used to distribute full programs, just like pip, npm, et al.

            1. 2

              In theory, NuGet is not specific to .NET. You can build NuGet packages from native code. Unfortunately, it doesn’t have good platform or architecture abstractions and so it’s not very useful on non-Windows platforms for anything other than pure .NET code.

    6. 3

      Have you looked at Cosmo? I’ve been wanting to try it, but the non-ANSI keyboard put me off. They seem to be real (there’s a healthy supply on eBay) and sell devices with tons of layouts.

      https://store.planetcom.co.uk/products/cosmo-communicator

      1. 3

        Neat! I have a kind of weird obsession with little devices that have keyboards. But at the same time — are these actually usable or do you end up swearing because typing is so hard and then it lands in the “fuck it bucket” that is the ultimate fate of most gadgets we seem to purchase?

        1. 2

I used a Cosmo until I did too many unfortunate things to it (of the physical-abuse type), and I now use a PinePhone with the keyboard case. Both devices are… not very pocket-friendly, but a small belt case works fine. I would rate both keyboards as «need some learning»; speed feels OK: objectively much faster, for me, than a touchscreen keyboard, but of course nontrivially slower than a full-size keyboard. Maybe 30/60/90.

PinePhone keyboard cases seem to die more easily than the Cosmo’s, but on the bright side they can be replaced separately. I hope to get around to learning to repair the issue, but I haven’t yet…

With an SSH server on all sides (handheld, laptop, cheap VPS), setting up VCS synchronisation with a bit of scripting is quite straightforward.

It does help with some stuff, but I don’t have a workflow that covers everything I probably should take notes on. On the other hand, the issue doesn’t seem to be limited by technology…

          1. 1

            I’ve had both the Cosmo and its predecessor the Gemini and can confirm the above assessment with regards to pocket-friendliness and the keyboard learning curve. As for build quality: The keyboard is quite sturdy but the same cannot be said for the rest of the device unfortunately. With both models, the hinge cover became very loose after about a year and I had to tape it. Also, the Cosmo’s external mini-screen is a bit of a joke and using it as a plain phone is pretty much impossible without a headset.

        2. 1

          I got a GPD P2 Max and run linux on it; photo here. The keyboard is fantastic for its size.

    7. 60

      all of my knowledge is encoded in my open browser tabs

      1. 24

        Me too. One of them contains a strategy for making my way through them, but I can’t seem to find it.

      2. 4

        I have a weekly, recurring todo for “read all open browser tabs and save to pocket if needed”. Works, somewhat!

        1. 1

          I quite like Firefox’s Tab Stash extension for that, where I periodically put all my deduped open tabs into an unnamed collection showing the date of creation.

      3. 4

        I zap all my open tabs 2-3x a week. I can think of maybe one or two things I haven’t been able to find again over the past ~15 years. I’ve come to terms with lossy information retention.

        1. 1

          I try to do it once a day. I’ve come to realise that if it’s important and worth caring about (it almost always isn’t), I’ll either remember it or be able to find it in my browser history.

      4. 2

        How to destroy all your tabs in 3 steps:

        1. Open your main browser
        2. Open a private window from it
        3. Close your main browser.

        That’s it. All my tabs, gone, and I never knew how to recover them. I use Firefox on Windows, and it happens to me about once or twice a year.

Edit: sometimes I get a warning about closing a window with multiple tabs open. I believe the reason for this warning is that there is no “undo” here: if I go through with it, the data is gone for good.

        1. 7

          In Firefox, the History menu should list recently closed tabs and windows. For a whole window, Ctrl/Cmd-Shift-N should revive it.

          If that fails, there should be several backups of your recent sessions in your Firefox profile directory. Check the sessionstore.js file and other files in the sessionstore-backups directory.

          Double-check that you have session restore enabled in Firefox settings (“Open previous windows and tabs”).

        2. 4

Maybe Ctrl-Shift-N? I have used this to recover an accidentally closed Firefox window.

        3. 3

          Ctrl+Shift+T (reopen closed tab) should also reopen the closed window!

          1. 2

I believe that kind of shortcut only works when my browser still has a window open, i.e. when I realise my mistake right away. But I don’t:

4. Close the private window.
5. Restart the main browser.
6. Watch in horror as all tabs are gone and Ctrl+Shift+T no longer works.

            And even then I’m not entirely sure step (4) is actually required.

            1. 2

              This window should be in history > recently closed windows, ready to be resurrected.

            2. 2

              If closing the last window closes the entire application, it sounds like you need to switch to MacOS!

              1. 4

                I’ll never support Apple. They popularised the idea that it’s okay not to own the computer you bought, and for this they will get my undying hatred.

          1. 4

            I have. It shows the history, most notably the most recently opened pages. I haven’t found anything relating to tabs there, especially what I wanted: recently closed tabs.

            1. 3

Look again, then, because there is a menu item called exactly “Recently closed tabs” in the history menu.

      5. 1

        When I get too many, I click the OneTab button to save and close them all. Then (much) later triage at my leisure.

    8. 10

      Note that this ‘simple’ use of Make includes several GNU extensions that are either not present in bmake / Solaris Make or are available with different syntax.

      1. 10

Does anyone use bmake or Solaris Make? I know that those exist (also nmake for Windows), but I have always considered them to be largely unused remnants of the past. Please correct me if I am wrong.

        1. 8

          Does anyone use bmake or Solaris Make?

bmake is the default make on NetBSD and FreeBSD; I’m not sure what OpenBSD uses. It’s used to build the entire FreeBSD base system, orchestrate builds from the ports tree, and for a bunch of other things. It has a number of features not present in GNU make, including meta mode, which monitors all of the files opened by child processes and updates dependency rules to include them.

          Solaris Make is the default make on Solaris, but I don’t know what uses it. Last I checked, it supported very little other than the core POSIX make behaviour.

          Neither is common on GNU/Linux systems which, unsurprisingly, tend to default to GNU Make. On non-GNU platforms, GNU make is often installed as gmake. It looks for files called GNUmakefile as well as Makefile, so if you want to be kind to people on non-GNU systems and use GNU extensions, you can use this as the name for your make files and other Make implementations won’t get confused trying to execute things that they don’t understand.

          macOS ships with the last GPLv2 version of GNU Make (3.81, from 2006) and so anything using newer GNU features won’t work with the default toolchains available there either.

But I have always considered them to be largely unused remnants of the past

          bmake, at least, is actively maintained and some big companies build some large pieces of critical infrastructure using it.

          I would consider all make variants to be remnants of the past. They are painful to use in comparison to pretty much any modern alternative. bmake is no more so than gmake though.

          1. 3

            meta mode which monitors all of the files opened by child processes and updates dependency rules to include them

That’s fucking rad. I’ve used -MD before, but meta mode sounds way more comprehensive. I don’t use any kind of make regardless; I’ve just generated ninja files ever since reading Julia Evans’ blog about ninja.

            1. 2

The fun thing with meta mode is that it automatically does things like recompile if your compiler has changed, or if one of the shared libraries linked by a tool that you’re using has changed. This is really great for reproducible builds, where some change in the compiler version may change an optimisation and change the output; if you don’t do a clean build after updating your compiler, you accidentally mix .o files from both versions and end up with something no one else can reproduce.

    9. 1

      Trying to convince some friends to help me build builds.rs, a binary build service for Rust crates (DM me if you are interested in participating!). Releasing and working on cindy. Talking to some companies to try to get hired. Procrastinating on doing my taxes. The usual!

    10. 1

      As always, I have a lot more ideas for this weekend than I have actual time. But here’s the todo list:

      • Polish up the restless crate I recently created which lets you define HTTP requests in terms of trait implementations and you get type-safe clients for free
      • Polish up the wasm-cache crate I recently created which implements an in-memory request cache for Rust WASM frontend apps,
      • Build some prototypes for some apps that are on my wish list,
      • Figure out how to easily compile stuff for Windows in GitLab CI so I can release binaries for it,
      • Collect some more ideas for blog posts and get back into the habit of writing,
      • Handle some tax-related bureaucracy.

      Happy weekend everyone!

    11. 1

      Private LLM. Enjoying the hype, but want to build my own mini expert to bounce ideas off in highly technical programming / complex areas of engineering.

      1. 5

        I submitted khoj as a story here a couple of days ago. It lets you run a local LLM that can ingest your own documents (PDF, Markdown, and a few other formats), with local indexing and chat. It also has some stuff that lets you run it on your own machine somewhere but query it from mobile devices.

        It looked interesting and it’s the first local AI assistant thing that I’ve seen. pushcx deleted the story because ‘Personal productivity is off-topic.’

        There’s an open issue on the llama.cpp repo for building a local copilot-like assistant too, which looks like it has a few people working on it. I’m really looking forward to what it produces.

        1. 1

Checking this out now - I went down the route of llama2 (i.e. starting from scratch) - much to learn, and your project is basically exactly what I want! Gonna have a play and then take next steps (if any). Thanks for putting it together!

          1. 1

            your project is basically exactly what I want

            It’s not my project, I just saw it and thought it looked cool. I was very sad to see that it’s off-topic for lobste.rs.

            1. 1

Yeah, not sure how it is off topic. Had a good play - I need to make some changes (no GPU usage, being able to specify a model to use) - but it’s very good. After I fed it my data it suddenly became very knowledgeable, which is what I was after!

        2. 1

          Khoj is a standout, and it’s why I’ve been nudging my FileBot users (just a few folks that I personally know) towards it. It’s eerily similar to what I imagined for FileBot – wild, right? Your mention here was the first time I heard of Khoj.

          FileBot does hold its ground, especially in producing detailed answers across multiple files and being transparent about sources. In rarer cases, I find FileBot more accurate, but it’s about neck-and-neck. But these days, I’m a Khoj-FileBot pendulum. Khoj, with its dedicated team, gets my vote for a primary tool.

          Stumbled upon Arx recently – a nifty tool for file anonymization & de-anonymization. But it’s oddly obscure; haven’t bumped into anyone using it. If it’s as slick as it says, why isn’t it part of Khoj or other easy-to-use projects?

Speaking of which, I’m cooking up an auto-anonymization and de-anonymization layer, targeting 99.99% precision in scrubbing and restoring docs. Think: set-and-forget, integrated quietly with tools like Khoj or FileBot, using a mix of CRTD techniques and pattern recognition. I’m getting good results, and it can handle more abstract privacy concerns. I still have to combine and fine-tune my scripts into a single, easy-to-use little Python library.

          Here’s the kicker: It’s a privacy shield for folks using API endpoint LLMs like OpenAI’s, keeping data from straying into the wild. It’s poised to be the unseen guard in an age when big players might offer superior, cost-effective LLMs compared to small open-source options.

          Encountered Arx or a superior, easy-to-integrate alternative? Eager to swap notes.

          1. 1

My former team at MSR (now Azure Research) is doing some work on privacy-preserving machine learning, but it turns out to be an incredibly hard problem. Adding fairly small amounts of differential privacy has quite a large impact on utility. I suspect that this is an intrinsic property: in theory, differential privacy removes things from your sample set that are not shared across the population, but if you actually knew which things were shared then you would not need ML; you could build a rule-based system for a tiny fraction of the compute cost.

A lot of their focus is running these models in TEEs. We recently had a paper about some work we did with GraphCore on adding TEE functionality to their IPUs. The most recent NVIDIA chips have something based on this work. This should let a cloud provider build a model with a large training set and then let you fine-tune it in a TEE with fine-grained egress policies, so that you can guarantee that none of your personal data ever leaves the device except to your endpoint. Deploying this kind of thing at scale is still a little way out though.

            1. 1

              I see. The degradation from training on privatized data makes a lot of sense. The fine-tuning in the trusted executions environments sounds promising though. Let’s see how this experiment goes!

        3. 1

          Saved that link, looks interesting. Thanks for sharing!

    12. 6

      nothing, I promise. No side projects.

      1. 2

Fitting to your username :) Recharging is important!

    13. 7

      I read this at first as Snorting Lines in Emacs.

      1. 6

        M-x snort-lines

        1. 6

          M-x snort-lines is really just an unnecessary specialization of M-x insufflate.

    14. 3

      Vim has been my code editor of choice ever since I was introduced to it by my mentor in 2011. Although I have explored other editors, it is still my daily driver and probably will be for a while. Thank you Bram, you will be remembered! My condolences to his family.

    15. 4

      This is amazing! I have always been in love with well-made cheat sheets, and with beautiful diagrams. This one ticks both boxes.

    16. 1

      It is appalling to me that there are still systems out there relying on security by obscurity.

    17. 3

      Happy birthday Lobsters!

      Thank you @pushcx and all the wonderful crustaceans floating around for running this site that keeps me fed with interesting articles and news every morning.

      I credit part of my curiosity for computer science to slashdot and lobsters.

    18. 2

I hope that the controversies around the Rust language leadership subside. The trademark policy was not very well received; I especially did not like its politicized nature. In an ideal world, the Rust project will get its act together and CrabLang will be forgotten. Despite the recent fallout, I regard everyone working on it highly and am hopeful that things will improve.

      1. 1

Yes, but the trademark has nothing to do with the project. In contrast, the original speaker who resigned was very fond of the Rust Foundation (which handles the trademark), as opposed to the project, which manages the conference.

        Hopefully the effective power vacuum since the original mod team resigned will be resolved soon.

    19. 3

One question: they keep referring to the “Intel® 64” architecture. I’ve only ever seen this called the “AMD64” architecture (with the tag amd64); I believe that is because AMD is the one that came up with it (while Intel was pursuing Itanium, which failed). Am I missing something here? Are they referring to the same thing?

      1. 5

        Intel 64 is what Intel eventually named their version of AMD’s “AMD64” architecture (since 2006, ~7 years after AMD published AMD64). Intel 64 and AMD64 are nearly the same thing, but there are various small differences plus vendor specific extensions.

    20. 1

      The --utf8 command-line option omits all translation to or from MBCS on the Windows console for interactive sessions, and sets the console code page for UTF-8 I/O during such sessions. The --utf8 option is a no-op on all other platforms.

      SQLite CLI can now do UTF-8 on Windows? I’m surprised it didn’t do that before!

Anyways, very happy to see improvements in SQLite; it is such an awesome piece of software, and incredibly widely used. While we’re talking about databases, I must also mention libsql, which is bringing an open collaboration model to SQLite (which is famously closed to outside contributions).