1. 2

    This may have some impact on Windows (because they have core architectural mistakes that make processes take up to 100 milliseconds to spin up)

    Do you happen to have a link with more details on this? I’ve heard that Windows is slow for processes/IO several times and I’d be curious to know why (and why they can’t fix it in a backwards-compatible way).

    1. 4

      There are a number of highly upvoted answers here but it’s hard for me to distill anything. It may be that these aren’t good answers.

      https://stackoverflow.com/questions/47845/why-is-creating-a-new-process-more-expensive-on-windows-than-linux

      1. 5

        I think these are good answers, but I’ve had a lot of exposure to Windows internals.

        What they’re saying is that the barebones NT architecture isn’t slow to create processes. But the vast majority of people are running Win32+NT programs, and Win32 behavior brings in a fair amount of overhead. Win32 has a lot of expected “startup” behavior which Unix is not obliged to do. In practice this isn’t a big deal because a native Win32 app will use multithreading instead of multiprocessing.

        1. 2

          I don’t think that is strictly correct. WSLv1 processes are NT processes without Win32 but still spawn relatively slowly.

          1. 2

            Hm, I remember WSLv1 performance issues mostly being tied to the filesystem implementation. This thread says WSLv1 process start times were fast, but they probably mean relative to Win32.

            I suspect an optimized pico process microbenchmark would perform competitively, but I’m just speculating. The vast majority of Win32 process-start slowdown comes from reloading all those DLLs, reading the registry, etc. Those are the “core architectural mistakes” I believe the OP is talking about.
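
            If you want numbers, this is easy to microbenchmark yourself. A minimal sketch: it times spawning a trivial child process in a loop. The absolute figures include Python interpreter startup, so treat them as relative between OSes rather than as raw process-creation cost.

              import subprocess
              import sys
              import time

              N = 100
              start = time.perf_counter()
              for _ in range(N):
                  # A near-empty child; on Win32 each spawn still pays for the
                  # DLL loading, registry reads, etc. described above.
                  subprocess.run([sys.executable, "-c", "pass"], check=True)
              elapsed = time.perf_counter() - start
              print(f"average spawn time: {elapsed / N * 1000:.1f} ms")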

      2. 3

        I don’t remember for sure where I saw this, but it may have been in the WSL1 devlogs? Either way I may have been wrong about the order of magnitude but I remember that Windows takes surprisingly long to spin up new processes compared to Linux.

      1. 7

        Nix is great for tracking dependencies but you really need to go all-in on making it your platform. To use a blood type analogy, Nix is a universal recipient of software; the polar opposite of a universal donor. Software like this helps, but it’s only useful for distributing programs with fully Nix’d dependencies.

        I’m working on a project right now which involves a handful of Haskell executables and a Cythonized Python wheel. Nix bundle can easily package the Haskell programs since they don’t link against anything external; their only public interface is the command-line.

        Building and distributing a Cython wheel which links against arbitrary other Python libraries is a separate challenge altogether. NixOS/Nixpkgs attacks this problem as a competing distribution of packages, when I really need something one level more meta.

        Poking around Nixpkgs, I did find various libraries for building RPM packages, managing VMs, and building containers.

        1. 2

          Nix is great for tracking dependencies but you really need to go all-in on making it your platform. To use a blood type analogy, Nix is a universal recipient of software; the polar opposite of a universal donor. Software like this helps, but it’s only useful for distributing programs with fully Nix’d dependencies.

          What would it mean for a platform manager to be a ‘donor?’ Are there examples of that?

          It’s true that ‘Nix’-ifying a program sometimes requires modifying software. However, this modification - removing hardcoded dependencies - typically bears no long-term maintenance cost. For this reason, maintainers tend to be happy to do it. That’s been my experience, at least.

          1. 1

            I guess the “software donor” I’m thinking of would be better described as a build system. Imagine a tool which provides similar determinism/reproducibility as Nix, but helps you build “native” deb/rpm packages for target distros.

            BuildStream is the closest fit I know of.

        1. 18

          I love reading these Nix success stories, and then last night I found it literally impossible to simply install GRUB on a system with a ZFS filesystem: my NixOS system kept failing to derive something for no discernible reason, with no documentation anywhere and any reports of that issue completely ignored :D

          1. 7

            I should just ruin it all and tell my secret. I don’t run NixOS. Plain nixpkgs on an LTS Ubuntu (boomer edition!) all the way.

            I use Nix as a development helper, and, on rare occasions, deploy a service by sticking it in a systemd unit file and nix-copy-closure my way to success. Of course, that’s just for my gear. At $DAYJOB it’s the usual k8s circus.

            1. 6

              Setting up Linux to boot from ZFS root is tricky even under the best of circumstances. What was the reported issue?

              I’m a huge fan of both NixOS and ZFS, but in the future I might aim for a tmpfs root and use ZFS only for persistent data.

              1. 5

                Nix is one of those technologies I think is amazing, but at the same time is practically unusable because of hard semantics and an inscrutable command-line interface. It’s kind of like Rust: dipping your toes in and getting the tutorial working is easy enough, but the first time you are confronted with a real problem, finding the solution requires so much ancillary knowledge of the arcane minutiae of how the system works that it becomes a waste of time, and solving it ‘the way I know’ is easier.

                1. 13

                  I’ve also been totally consumed by the same obsession. Lots of money, too: I want to be able to distribute VQGAN models across multiple GPUs, which is far beyond my minimal PyTorch know-how. So I’m now looking to pay a contractor.

                  I have this dream of making 2000x2000 pixel murals and printing them to canvas. AWS has EC2 configs with 96 gigs of GPU memory. I can’t stop thinking about this, and it’s disrupting my life.

                  But it’s also exhilarating. I know it’s “just” an ai generator, but I’m still proud of the stuff I “make”. Here are some of my favorites:

                  My daughter wants to be an artist. What should I tell her? Will this be the last generation of stylists, and we’ll just memorize the names of every great 20th century artist to produce things we like, forever?

                  I worry about this too, but also am excited to see what artists do when they have these tools. And I think it’ll make artists turn more to things like sculpture and ceramics and other forms of art that are still out of the machine’s reach.

                  EDIT: also, a friend and I have been making games based off this. “Guess the city from this art” or “guess the media franchise”. It does really funny stuff to distinct media styles, like if you put in “homestar runner”.

                  1. 5

                    Just my random observation but “your” pieces and the post’s all give the vague appearance of something running through endless loops of simulacra. Said another way, they all share similar brush strokes.

                    I think we’re headed into the (while looking at a Pollock) — “humph, my AI could have painted that!” era

                    1. 4

                      There are a bunch of known public Colab notebooks but one is very popular. It’s fast but has this recognizable “brush stroke” indeed. Some GAN jockeys are tweaking the network itself though, and they easily get very different strokes at decent speeds. You don’t even need to know neural network math to tweak, just the will to dive in it. Break stuff, get a feel for what you like. If this is to become a staple artist’s tool it’ll have to be like that, more than just feeding queries.

                    2. 3

                      These are cool. The “Old gods” one especially… if that was hung in your house and you told me you’d purchased it from an artist I wouldn’t blink. When you make them, are you specifying and tweaking the style, and then generating a bunch, and then hand-picking the one you like?

                      1. 3

                        Starting out I was just plugging intrusive thoughts into colab to see what I’d get. If it didn’t produce something interesting (not many do) I’d try another prompt. Recently I spent a lot of time writing a “pipeliner” program so I can try the same prompt on many different configs at once. I got the MVP working on Monday, but I’m putting it aside a while so I can focus on scaling (it only works on one GPU, so can’t make anything bigger than 45k square pixels or so)
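
                        The core idea is just a config sweep. A minimal sketch (not the actual pipeliner; run_vqgan and the config values are hypothetical stand-ins for the real generation call):

                          from itertools import product

                          PROMPT = "old gods"
                          SEEDS = [0, 1, 2]
                          SIZES = [(480, 480), (640, 360)]

                          def run_vqgan(prompt, size, seed):
                              # Hypothetical stand-in for the real VQGAN+CLIP generation call.
                              print(f"generating {size[0]}x{size[1]}, seed={seed}: {prompt!r}")

                          # Fan one prompt out over every config combination.
                          for seed, size in product(SEEDS, SIZES):
                              run_vqgan(PROMPT, size, seed)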

                        1. 1

                          Are you saying you’ve managed to get this to run locally? All the guides I’ve found are simply how to operate a Colab instance.

                          1. 2

                            I got it running locally, but I don’t have a GPU, so I upload it to an EC2 instance. I recently found that SageMaker makes this way easier and less burdensome, though.

                        2. 1

                          I printed the old gods to canvas and it came out pretty good.

                          1. 1

                            Nice. Do you have a pic?

                        3. 1

                          There are neural nets intended specifically for upscaling images. Pairing one of these with VQGAN image generation (which is pretty low res) might let you make larger scale art without a huge unaffordable GPU.
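
                          A minimal sketch of that two-stage idea; both functions are hypothetical stand-ins (generate for a VQGAN+CLIP run, upscale_4x for a super-resolution net), and the Pillow resize is only a placeholder so the sketch runs:

                            from PIL import Image

                            def generate(prompt: str) -> Image.Image:
                                # Hypothetical stand-in: a real version would run VQGAN+CLIP here.
                                return Image.new("RGB", (512, 512))

                            def upscale_4x(img: Image.Image) -> Image.Image:
                                # Hypothetical stand-in: a real version would run a neural upscaler;
                                # LANCZOS resampling is only here so the sketch is runnable.
                                return img.resize((img.width * 4, img.height * 4), Image.LANCZOS)

                            # 2048x2048 output from a 512x512 GPU memory budget.
                            mural = upscale_4x(generate("old gods"))
                            print(mural.size)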

                        1. 8

                          A path length which exceeds 1GB, holy moly!

                          1. 1

                            My thoughts, exactly.

                          1. 7

                            There are some excellent upgrades I’m embarrassed to say I only learned about recently. Perils of working in a Microsoft environment so long, I suppose.

                             • rg (ripgrep) instead of grep, way faster and respects .gitignore
                             • fd instead of find. No need to type find -iname "*foo*" all the time, just fd foo
                             • et (Eternal Terminal) instead of ssh, to keep my session live through reconnects

                             And I’ve been experimenting with some newfangled tools as well

                            • zsh + powerlevel10k for an extremely fast prompt with doodads
                            • zsh + fzf for fuzzy history searching
                            • fish for incredible ergonomics: autocomplete, advanced syntax highlighting, multiline command editing
                            • fish + tide instead of powerlevel10k
                            • fish + fzf, same reason as zsh
                            • bat instead of cat for pretty colors and line #s
                            • poetry for managing Python builds in a declarative way that I’m used to from Nix
                             • spacemacs as a drop-in replacement for vim. The syntax highlighting is way better

                            Most of these upgrades boil down to either ‘go fast’ or ‘more intelligent use of color’.

                            1. 1

                              Have you tried treesitter with neovim 0.5? I’d like to hear from someone who thinks vim’s syntax highlighting is bad and something else is good. I’m on the edge of trying to switch for a week, but haven’t had the time…

                              1. 1

                                I haven’t yet, but NeoVim is definitely on my radar. I’m excited to try out Neorg as well.

                              2. 1

                                 et (Eternal Terminal) instead of ssh, to keep my session live through reconnects

                                Wait, are you Jason Gauci? I heard of this through him. He has an excellent podcast besides making that.

                              1. 7

                                 I love the heatmap of from-language × to-language migration.

                                Hottest spots:

                                • 17% Go to Rust
                                • 14% Python to Go
                                • 14% JavaScript to TypeScript
                                • 13% JavaScript to Go
                                • 13% Java to Kotlin
                                1. 2

                                  Did anyone ever manage to find out what the license is for the POWER ISA? A few months after the launch of OpenPOWER, I spent an hour searching their web site and didn’t find anything that told me more than it was ‘open source’.

                                    1. 3

                                       Thanks. It looks as if that’s from about two years after I looked, and about 7 years after IBM started telling everyone the ISA was ‘open source’.

                                  1. 1

                                    I like the exploration of the assumptions, but that seems a long way away from “F# is the best…”

                                    For the sake of discussion, let’s just grant the whole “functional is better” bias underlying them. Looking 1-by-1:

                                     1. Tooling: Doesn’t F# limit you to Visual Studio on Windows and the associated build tools? Unless “writing software for Windows” is the main objective, I find working in Visual Studio to be miserable. And “writing software for Windows” is something I do at the end, after all the growth and learning and concept-proving that might lead me to try a new language or new set of libraries. That’s the kind of job-listing-style qualifier that OP said would make assumption #2 not apply… to then require it for assumption #1 puts the argument in a rough place.

                                     2. Accommodating a wide range of styles is an interesting point, but it does hamper learning in some ways too. If it weren’t for the damned borrow checker in Rust making some bad habits unworkable, I don’t think the improvement would have occurred. Over-accommodation can lead you to stall out; how does F# prevent that?

                                    3. I’d need some persuasion to see why F# isn’t dragging up the rear behind Kotlin, Rust and Clojure here, and the article offers none.

                                    Maybe it’s just incomplete and needs a follow-up, as others have suggested.

                                    1. 4

                                       Doesn’t F# limit you to Visual Studio on Windows and the associated build tools?

                                      No it doesn’t.

                                       .NET Core runs well on other platforms these days. And the open-source F# tooling has very good support for Visual Studio Code, but there is an LSP server, so other editors (vim, emacs, …) should be supported as well (I haven’t tried that recently though).

                                      1. 1

                                        Yep. The only lack of support I’ve bumped into is that dotnet doesn’t have an implementation for ppc64le yet. issue #13277

                                        1. 1

                                           Nice. I didn’t know the dev tooling had made it off Windows yet. (I was aware it could be built and run, in some measure, but my impression from the last time I messed with it for C# was that most of the stuff people used in the software they wrote for distribution was still very much tied to Windows.)

                                      1. 5

                                        Since “Copilot isn’t magic and will perform worse than a human coder on average,” I wonder what the ideal language to use with Copilot is?

                                        Something simple like Go, easier to read, understand, and debug? Or something complex like Haskell, with more static checking?

                                        Edit: Also see this other tool https://lobste.rs/s/5qzbbq/wingman_for_haskell_focus_on_important

                                        1. 4

                                          I’m thinking golang is probably the sweet spot for something like this, because it’s so repetitive.

                                          1. 1

                                             This is a very good question. Presumably each line of code in Haskell has more information than a line in Go. Since the latent space is likely the same for either language (the model transforms code to the same latent space), I would guess Haskell would win, because more information is encoded in that latent space. Though if that space is small and both fill it easily (it handles only a small amount of context), then it won’t matter.

                                          1. 3

                                            What is stopping the community from building a net new (fully compatible) web browser at this point?

                                            I would love to hear from those who have the relevant experience (Chromium/Firefox developers, hobbyist browser developers).

                                             I see this answer come up often enough: that the endeavour is simply too big to try to make a new one at this point. I think if it’s worth it, then the amount of effort shouldn’t stop people from at least attempting to build something better.

                                            I’m intentionally being naive here with the hopes to spark some discussion.

                                            1. 4

                                              Rendering basic HTML is easy enough. Ensuring complex modern webapps like Google Docs work performantly is multiple orders of magnitude harder. Even Microsoft with all its corporate backing struggled to get the old Edge engine to run competitively.

                                              1. 3

                                                I’m curious what makes it orders of magnitude harder? Is it the amount of moving pieces? Is it the complexity of a specific piece needed to make modern web apps work? Maybe existing browser code bases are difficult to understand as a point of reference for someone starting out?

                                                1. 5

                                                  A good way to understand the complexity of modern browsers is to look at standards that Mozilla is evaluating for support.

                                                  You’ve got 127 standards from “new but slightly different notification APIs” to “enabling NFC, MIDI I/O, and raw USB access from the browser”. Now, obviously lots of these standards will never get implemented - but these are the ones that were important enough for someone to look at and consider.

                                              2. 3

                                                Drew DeVault goes through some of the challenges here. Short version: enormous complexity.

                                              1. 39

                                                Updates causing a reboot

                                                Again, I’ve simply not experienced this. Now, back in the Windows 2000/XP days, yeah I think we all experienced this. However, for many years now, I’ve not seen this happen.

                                                I have, many times, left my computer on overnight to come back to it being at the login screen with literally zero intervention. Looking at my update history, I think this happens about twice a month or so.

                                                1. 20

                                                  Oh God! I really wish the author would tell us how in the world they achieved this state!

                                                  I have a Windows 10 machine that I use for work and it’s driving me nuts. Despite having disabled every single switch that Google told me to disable, this machine will automatically wake up and reboot to install updates if it’s on sleep.

                                                  Thing is, it doesn’t even work: this is a laptop, that I take with me on the road for my one customer whose stack depends on Windows, so the hard drive is encrypted. It reboots at 3 AM and then I wake up to… the Bitlocker prompt, which is followed, of course, by a good half hour of “hang on”. Should I seriously believe that nobody at Microsoft ever tried to update a computer with an encrypted hard drive?

                                                  It’s driving me nuts because the reason why that machine is on sleep is that it runs a diverse array of rather arcane applications, some of which aren’t exactly top-notch and lack various features like, uh, workspace management. So it takes me like 20 minutes to start up all of them and set things up back the way they were, so it’s easier to just let the laptop sleep.

                                                  On a tangential note: I think this is a hole that Microsoft have dug themselves into.

                                                  20 years ago there was really no question of whether you wanted to upgrade to one of the service packs. At best you’d wait two or three weeks to let early adopters find any of the truly bad, hard drive-wrecking blunders, but after that you’d update.

                                                  Nowadays, when you reboot after an update, you may boot to a more secure system with the latest security patch. Or a full-screen ad for Microsoft Edge that you have no idea how to hide, oh, and it’s set itself to be the default browser (guess who had to drive halfway across town in the middle of the pandemic because their parent had to teach online classes and they didn’t know how to make the ad go away – then drive back again because “they couldn’t find Firefox”). Or a brand-new version of Candy Crush Saga on Windows 10 Pro (I’ve given up wiping it, I now accept it as a fact of life, the way I accepted My Briefcase on Windows 95).

                                                  Of course nobody trusts the updates now, so now it’s a silent, subversive war, in which Microsoft has to find new ways to force updates because the users have been burned so many times they will never update on their own (fool me once etc. etc.) and everyone else tries to figure out ways to dodge them, at least until they can get their work done.

                                                  1. 6

                                                    I have had similar rage, and eventually wrote a little utility to just solve the problem: https://github.com/wheybags/Win10BSFixer

                                                    Every 10 seconds, it checks if the windows update service is running, and if it is, kills and disables it. Have had zero problems since, and it does allow you to pause the update blocking and manually update when you want to.
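
                                                      The core loop is simple enough to sketch. A rough Python equivalent of the idea (not the actual Win10BSFixer code; ‘wuauserv’ is the Windows Update service, and this needs an elevated prompt):

                                                        import subprocess
                                                        import time

                                                        SERVICE = "wuauserv"  # the Windows Update service

                                                        while True:
                                                            # Check the service state with the standard 'sc' tool.
                                                            status = subprocess.run(["sc", "query", SERVICE],
                                                                                    capture_output=True, text=True)
                                                            if "RUNNING" in status.stdout:
                                                                subprocess.run(["sc", "stop", SERVICE])
                                                                subprocess.run(["sc", "config", SERVICE, "start=", "disabled"])
                                                            time.sleep(10)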

                                                    1. 2

                                                      You are my new favourite programmer now!

                                                  2. 6

                                                     Very common. Many, many times this has messed up cryptomining for me. I haven’t used Windows for mining, or for anything, for a while now, so maybe that has changed in the last year or two.

                                                    1. 22

                                                      Very common. Many, many times this has messed up cryptomining for me.

                                                      I never thought I’d look on this behaviour as a good thing, until now.

                                                      1. 2

                                                         It’s still not. The real problem is the giant mining firms with hundreds of thousands of GPUs calculating useless hashes 24/7, and they know how to get around it. It makes literally zero difference whether slylax puts their GPU to use overnight every now and then. Hell, they may even use their desktop as a crypto-mining thermostat for all we know, in which case it’d make literally no difference compared to any other form of resistive heating.

                                                    2. 5

                                                      This was my major gripe. I never felt comfortable leaving the machine on a long running process overnight, even if I had remembered to go to the updates menu and disable updates for 7 days, which is the maximum AFAIR.

                                                      After having switched to Linux with Windows running in a VM for the few things I need, my machine runs faster and lighter, the fans spin less and I get more battery life. And I trust my computer again.

                                                      1. 1

                                                        The cancer is making its way to Linux, though. Ubuntu snaps have a similar “updates happen by our command and you can delay them a bit but that’s it” philosophy. It is infuriating if you are managing the rule lists for host-level intrusion detection.

                                                      2. 4

                                                        It even wakes itself up in the middle of night to do some updating and then reboots itself, only to end up in Linux because of the dual-boot. And it’s not exactly easy to convince it not to do that.

                                                        1. 3

                                                           I think this also depends on whether and how the IT department manages updates; it is often hard to discover who is responsible.

                                                          1. 3

                                                            This is my personal desktop, so my IT department is me.

                                                          2. 3

                                                             I own a Windows 10 machine that I boot about once a month, and it feels like I always have to do an update which requires a reboot. So, at least in my experience, this update/reboot thing is absolutely common.

                                                            1. 3

                                                              Updates causing a reboot

                                                              Again, I’ve simply not experienced this. Now, back in the Windows 2000/XP days, yeah I think we all experienced this. However, for many years now, I’ve not seen this happen.

                                                              I have, many times, left my computer on overnight to come back to it being at the login screen with literally zero intervention. Looking at my update history, I think this happens about twice a month or so.

                                                              Same here. Worse, I’ve lost work because of it more than once. Much more recently than the Win 2000/XP days.

                                                               About a year ago, I got a document that I could only handle on Windows for some reason, fired up my not-too-frequently-used Windows machine with Outlook on it, and read it and started composing a reply. This was a bit earlier than my normal workday, around 4:30 AM. When I walked upstairs around 5:30 to make myself some more coffee and returned to my basement office around 5:50, the machine was sitting there, rebooted. It had discarded my in-progress draft response. It didn’t save it to my IMAP drafts folder. It didn’t save it to a local drafts folder. It was just gone.

                                                              I’ve been using computers long enough to know I shouldn’t walk away from even a laptop without saving, but this age of auto-saved drafts has made me soft.

                                                              I think this phenomenon is especially bad for systems that don’t get booted and used all the time, which may account for the seemingly outsized perception of the problem by people who primarily use other OSes. If it’s been more than a couple weeks since I booted a VM, I feel like I need to allow an hour for it to update, reboot, update again and settle down post-update before I can really use it.

                                                              1. 4

                                                                I think this phenomenon is especially bad for systems that don’t get booted and used all the time, which may account for the seemingly outsized perception of the problem by people who primarily use other OSes.

                                                                This is exactly the issue with my gaming PC.

                                                                I’m really cooling off on the idea of a gaming PC as of late because of BS like this. When I finally do arrange a time with friends to play every 2-3 weeks, it’s a dice roll whether W10 will pull some inane crap like this because you actually had the gall to boot it up and use it: “you need to update! You have to wait an unspecified amount of time!”

                                                                What I really want to know is: what exactly is it doing after it applies updates? It seems like it just sits there for 5-10 minutes “finalizing” settings.

                                                                The absolute worst: W10 thinking it is okay to upgrade your video driver while you are using it. Most games/apps just crash outright because they aren’t designed to handle this.

                                                                1. 3

                                                                  I’m really cooling off on the idea of a gaming PC as of late because of BS like this. When I finally do arrange a time with friends to play every 2-3 weeks, it’s a dice roll whether W10 will pull some inane crap like this because you actually had the gall to boot it up and use it: “you need to update! You have to wait an unspecified amount of time!”

                                                                   Same for the Xbox 360. It would disable a lot of features (e.g. networking) if you didn’t update (maybe it still does). We were not very regular gamers, so we would often start the Xbox after a couple of weeks to call family on Skype with the Kinect camera, only to disappoint them because they had to wait 30-60 minutes while the Xbox fetched some update.

                                                                  1. 2

                                                                    what exactly is it doing after it applies updates? It seems like it just sits there

                                                                     One of the most annoying aspects of Windows updates is that user profile data seems to need migration to the latest schema. For whatever reason, this can’t occur during the system update and is instead run the next time that user logs in. Which means updating in the middle of the night hardly saves me any time.

                                                                    1. 2

                                                                       If I had to guess, I’d say that user profile stuff is probably encrypted with the user’s password, so it has no way to read the user profile until the user logs in and it has the password. That would make sense to me, at least.

                                                                      Not that that’s an excuse. There are definitely solutions here which would move the schema migration time to a less inconvenient time than right as you’re logging in, or ways to make the schema migration process faster. (Or ways to keep the user profile schema stable across most updates.)

                                                                    2. 1

                                                                      I’ve found that like 99% of the time, I can get a game to run on Linux now. I still sometimes reboot into Windows to play a game with friends if it’s a game I haven’t played before, but after that initial session, I usually go back into Linux and spend some time getting it to run in my main set-up. So far, there’s one game we’ve tried to play which I haven’t gotten to run in Linux - and that’s not due to a technical limitation, but because the anti-cheat detects that it’s not on Windows somehow and therefore disables online functionality.

                                                                       In a surprising number of cases, all it takes is literally just downloading the game on Steam and letting Proton automatically do its thing. Sometimes I have to find an installer on lutris.com, but then it usually works fine once installed.

                                                                      I wish “getting the game to work on my system” wasn’t a thing I had to spend time on. But overall, it’s much nicer to have to do that once per game than to have to boot into Windows and deal with its updates, an environment I’m less used to, losing all my Linux state, etc.

                                                                1. 4

                                                                  I haven’t read this yet but it looks pretty interesting. Monero is the one cryptocurrency I’ve consistently been interested in. Like others have already mentioned it seems to actually deliver on its promises of security and privacy.

                                                                  1. 2

                                                                    It does seem to [he expounded, having skimmed ⅔ of the document skipping most of the math]. But I was disappointed to see that the coin mining is based on proof-of-work, which means that like Bitcoin it tends toward profligate energy consumption.

                                                                    1. 4

                                                                      The key difference between Monero’s PoW and Bitcoin’s though is that Monero is best mined with general-purpose hardware. This improves network security through decentralization of hashing power, and also should result in fairer reward distribution (since one does not have to make as significant a capital investment to become a miner). So at least with Monero’s PoW you get more bang for your buck per unit energy, in terms of network benefits.

                                                                      1. 1

                                                                        On the flip side of that coin, there is a lot more general-purpose hardware which could be repurposed to 51% attack Monero (imagine all of AWS co-opted for this purpose). Bitcoin ASICs are the most efficient silicon for mining SHA256 and would be resistant to attack even from massive corporate clouds. Through this lens, it’s Bitcoin which gets more “bang for your buck” in terms of network security per kWh.

                                                                        Furthermore, the Monero devs have to keep hardforking in order to change their mining algorithm to keep it ASIC-resistant. Even if it helps decentralize hashrate, it puts lots of power in the hands of the developers, which is a different form of centralization.

                                                                        Lastly, the lion’s share of energy spent on mining Bitcoin and Monero is to earn newly created coins, not transaction fees. Monero has permanent tail inflation to incentivize mining, whereas Bitcoin asymptotically approaches zero inflation (and therefore far less energy consumed per market cap).

                                                                        All that said, Monero has very interesting cryptography and I hope we can learn from it. I’m not sure if RingCT privacy is worth sacrificing supply auditability, but better privacy is great if you can achieve it without significant tradeoffs.

                                                                  1. 2

                                                                    I’m trying to get NixOS set up on my ppc64le Talos! The NixOS Discourse forum has been helpful, although there are still plenty of rough edges. There are a couple of people using Nix, and at least one person who has managed to install NixOS itself.

                                                                    I would love to also get ZFS on root if I can manage it, but I should probably start with ext4 to avoid blowing my project complexity budget. I think petitboot’s Linux kernel is too old to support ZFS natively, but a separate /boot partition might work, or I could get crazy and try to rebuild petitboot with a different kernel. If this page is accurate, then Petitboot uses Linux 5.3 which could be compatible with OpenZFS.

                                                                    1. 2

                                                                      this is unfortunate for projects that have CI and rely on new contributors.

                                                                      it also seems strange that they would just collect payment info. how does that deter abuse? if i give them a prepaid credit card with $2 on it, then i can circumvent this? what other reason are they collecting this info for that they aren’t disclosing right now?

                                                                      1. 3

                                                                        it raises the effort required for abusers; now they would need to buy a new card before abusing CI. in addition, gitlab may ban abusive accounts on an ad-hoc basis, and add the card’s token to an internal banlist. so if an account is suspected of CI abuse, its card is also burned.

                                                                        it’s also possible to ban easy-to-get credit cards by examining the card’s BIN.
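
                                                                        a toy sketch of the idea (the banlist and BINs here are made up; the BIN is just the first six digits of the card number):

                                                                          BANNED_BINS = {"411111", "532610"}  # hypothetical prepaid-card BINs

                                                                          def is_blocked(card_number: str) -> bool:
                                                                              # the BIN (bank identification number) is the first 6 digits
                                                                              return card_number.replace(" ", "")[:6] in BANNED_BINS

                                                                          print(is_blocked("4111 1111 1111 1111"))  # True: BIN is on the banlist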

                                                                        1. 1

                                                                          It’s even easier to just let project maintainers approve CI runs for new users manually, rather than ‘require’ payment info from them, which can (and will) be compromised/leaked later.

                                                                          A bad situation (shitcoin mining in CI) is not an excuse to implement bad policies.

                                                                        2. 1

                                                                          Perhaps this solution could be extended to bill that payment info for the cost of the CI run? It’s probably not worth credit card transaction fees at this stage, though.

                                                                        1. 1

                                                                           These abuses are just one aspect of why ASIC-resistant proof-of-work is a bad idea. I guarantee you nobody is bothering to mine Bitcoin this way, because SHA-256 isn’t remotely competitive on GPUs or CPUs anymore. It’s not even worth stealing cycles. Black hats are probably mining Ethereum (GPU) or Monero (CPU).

                                                                          1. 1

                                                                            From an energy usage standpoint, all mining is bad, but ASIC mining is particularly bad, since energy was used up to make devices that can’t be repurposed for more useful computations.

                                                                          1. 5

                                                                            I thought this was talking about the R7RS revision of the Scheme spec, but it’s actually talking about an implementation of Scheme.

                                                                            Although it is a descendant of tinyScheme, s7 is closest as a Scheme dialect to Guile 1.8. I believe it is compatible with r5rs and r7rs

                                                                            1. 5

                                                                              That’s what I was thinking too. Here is the S7 page for the curious.

                                                                              1. 1

                                                                                 s7 looks great; there is a Max/MSP and Pd API project for it, but it lacks syntax-rules and the Scheme numeric tower.

                                                                              1. 8

                                                                                I find it a bit underwhelming that this article is from 10 months ago, but that flakes are still not available in a main release of nix. This leads to a situation where one part of the community is invested in it, and uses it, while another considers it unusable for now, and continues using other tools for pinning and importing.

                                                                                1. 2

                                                                                   Very good observation. Precisely because I have to explicitly enable flakes, I’m actively avoiding them. Sticking to niv until flakes are official, maybe.

                                                                                  1. 3

                                                                                     I had the same opinion as yours, but then I took the time to look at the material that’s available, like the nice NixOS wiki page maintained by Mic92 and the still-unstable manual of the Nix that will be. That, together with the fact that it’s possible to let collaborators start using Nix by directly installing the flake-enabled version, convinced me to start using it. I think that as a community we should contribute at least by using it and giving feedback. What would become of projects like Linux or Debian if all their users only installed the “stable” releases?

                                                                                    1. 2

                                                                                      It looks like the flakes branch was merged into master back in July. Do I still need to use a special flakes-enabled version of Nix, or is the feature included by default in recent releases?

                                                                                      1. 4

                                                                                        It is included by default in recent unstable releases. I.e. the nixUnstable attribute in nixpkgs.

                                                                                        1. 3

                                                                                          On my NixOS unstable updated a week ago the stock Nix release is still 2.3.10, not 2.4

                                                                                  1. 6

                                                                                    The problem with this argument is that it is entirely possible to do that with a non-proof-of-work system as well. In fact, a blockchain may not be necessary at all.

                                                                                    I don’t think anyone would deny that centralized databases are more performant than distributed ones, pretty much across the board. The key tradeoff Bitcoin makes here is trustless immutability. Gold was our previous trustless money, and over the past couple hundred years all credit monies and fiat currencies have massively depreciated against it.

                                                                                    Centralized ledgers do work, but they cannot provide an ironclad guarantee that the rules of the game will remain fixed into the future. We don’t even know what the supply of dollars will be six months from now. Bitcoin’s supply is predictable decades into the future.

                                                                                    The root problem with conventional currency is all the trust that’s required to make it work. The central bank must be trusted not to debase the currency, but the history of fiat currencies is full of breaches of that trust.

                                                                                    — Satoshi Nakamoto
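
                                                                              That predictability falls straight out of the fixed issuance schedule: the block subsidy started at 50 BTC and halves every 210,000 blocks. A quick back-of-the-envelope check (this ignores the protocol’s integer-satoshi rounding, which puts the real cap slightly below 21 million):

                                                                                subsidy = 50.0       # initial block reward in BTC
                                                                                total = 0.0
                                                                                for _ in range(33):  # after ~33 halvings the subsidy rounds to zero
                                                                                    total += subsidy * 210_000
                                                                                    subsidy /= 2
                                                                                print(f"asymptotic supply: {total:,.0f} BTC")  # ~21,000,000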

                                                                                    1. 9

                                                                                      Gold was our previous trustless money, and over the past couple hundred years all credit monies and fiat currencies have massively depreciated against it.

                                                                                      I recently read Valerie Hansen’s The Silk Road and she makes the point that gold was not the trustless money. It was more accepted than coinage, but you need steady, reliable trading partners to make it useful as currency. The real universal currency in the oasis kingdoms was bolts of cloth.

                                                                                      1. 4

                                                                                        The real universal currency in the oasis kingdoms was bolts of cloth.

                                                                                        Bolts of what cloth, from which producer, what quality, at what time was it produced?

                                                                                        Meanwhile a chunk of gold is a chunk of gold.

                                                                                        1. 2

                                                                                          [Warning: speculation ahead]

                                                                                          I’m imagining the bolts to be silk.

                                                                                          Gold can be alloyed with base metals in ways that are hard to detect using technology known to the merchants of the Silk Road. Silk can be more easily assayed.

                                                                                          1. 1

                                                                                            I’m sure the oasis kingdoms would have been convinced by your brilliant analysis.

                                                                                          2. 2

                                                                                            Nice book, I will have to add that to my list.

                                                                                            Certainly, different commodities have served as trustless money at different times. Shells and pelts are two other examples. Gold eventually won out for global trade, but it wasn’t universal until fairly late in history.

                                                                                          3. 2

                                                                                            You’re falsely equating the space of “not proof-of-work” and “not blockchain” with “centralized.”

                                                                                            The claim is that one can achieve similar decentralized feats (depending on the goal) without requiring the planet killing compute power of a proof-of-work blockchain.

                                                                                            1. 3

                                                                                              Hmm, well it comes up twice. First they claim that FedWire can provide properties such as transaction finality and Sybil resistance. Which is true, it can!

                                                                                              This entire kludge is negated in FedWire because all participants are known: it is permissioned.

                                                                                              With 25 core nodes FedWire has a degree of replication, but it is definitely not permissionless. Most importantly, it can’t provide the guarantee I highlighted about the rules remaining fixed.

                                                                                              Second, near the end of the article they mention proof-of-stake, but it’s a bit of a throwaway line.

                                                                                              Through the usage of either permissioned systems (like an RTGS) or a proof-of-stake chain, the energy consumed by PoW chains did not need to take place at all. In fact, PoS chains can provide the same types of utility that PoW chains do, but without the negative environmental externalities.

                                                                                              They mention that the “transition to proof-of-stake is beyond the scope of this article” and don’t really dive into how PoS achieves any of these goals.

                                                                                              The fatal flaw with proof-of-stake is that 51% attacks are unrecoverable. If one entity ever manages to get more than half the PoS coins, they forever control the rules of the network. PoS networks can be decentralized, but they can never be permissionless. In order to get new coins, you have to buy them from someone who already owns them.

                                                                                              In contrast, PoW is both decentralized and permissionless. Anyone can participate in the mining process without a prior investment. 51% attacks can temporarily interrupt a PoW chain, but an attack is ultimately recoverable.

                                                                                              So to clarify my position I would add that it’s not just decentralization which is important, but permissionlessness.

                                                                                            2. 2

                                                                                              Centralized ledgers do work, but they cannot provide an ironclad guarantee that the rules of the game will remain fixed into the future. We don’t even know what the supply of dollars will be six months from now. Bitcoin’s supply is predictable decades into the future.

                                                                                              Being able to reinterpret or change the rules is a feature, since it makes it possible to fix mistakes that were made at the inception of the rules. Generally speaking, if you had a traditional contract where a random participant can just set every other participant’s stake on fire, you can probably convince a legal entity that wasn’t the intention and roll back the contract without affecting everyone else using that currency. If the use of the system goes from “currency you can use to buy pizza, drugs, fake IDs, or murder” to “thawing the tundra and flooding my neighborhood so a few really rich guys get even richer” maybe the rules should change, and in a way that doesn’t require the few really rich guys’ consent.