Threads for option

    1. 32

      One thing that stood out to me is that systemd’s configuration approach is declarative while others are imperative, and this unlocks a lot of benefits, much like how a query planner can execute a declarative query better than most hand-written imperative queries could.

      1. 3

        It pairs very well with NixOS for this reason.

        1. 1

          Yea, it’s unreasonably effective, and in general all the components hang together really well.

          In my experience the nicest systems to administer have kind of an “alternating layer” approach of imperative - declarative - imperative - declarative - … Papering over too much complexity with declarative causes trouble, and so does forcing (or encouraging, or allowing) too much imperative scripting at any given layer. Systemd (it seems to me, as a user) really nailed the layers to encapsulate declaratively - things you want to happen at boot, that depend on each other. Units are declarative but effectively “call into” imperative commands, which - in turn - ought to have their own declarative-ish configuration files, which - in turn - are best laid down by something programmable at the higher layer (something aware of larger state - what load balancers are up - what endpoints are up - what DNS servers are up, yadda yadda).
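
          To make that layering concrete, here is a minimal, hypothetical unit sketch (the service name, binary, and config path are all made up): the unit itself is declarative, the Exec line calls into an imperative command, and that command reads its own declarative-ish config file.

          ```ini
          # /etc/systemd/system/myapp.service (hypothetical example)
          [Unit]
          Description=Declarative layer: what this service needs, not how to get it
          After=network-online.target
          Wants=network-online.target

          [Service]
          # Imperative layer: the unit "calls into" a command...
          ExecStart=/usr/local/bin/myapp --config /etc/myapp/config.toml
          Restart=on-failure

          [Install]
          WantedBy=multi-user.target
          ```

          That config.toml, in turn, is best laid down by something programmable at the next layer up - something aware of the larger state.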

          As an aside, my main beef with Kubernetes is that it has a bad design - or perhaps a missing design - for that next level up of configuration management. Way way way too much is squashed into a uniform morass of the generalized “object namespace”, and the only way to really get precise behavior out of that is to implement a custom controller, which is almost like taking on the responsibility of a mini-init system, or a mini-network config daemon.

          What I want at this layer instead is a framework for writing a simple and declarative-ish DSL that lets me encode my own business domain. Not every daemon is a Deployment object. Not every service is a Service. I think we were just getting good at this in 2014 with systems like Salt and Chef, but K8s came in and sucked all the oxygen out of the room for a decade, not to mention that Salt and Chef did themselves no favors by simply being a metric boatload of not-always-easy-to-install scripting runtimes.

          Years ago when I looked to contribute to Salt I was pretty astounded by the bugginess, but it worked for enough people and there was not and still is not a lot of competition in that space, so VMware acquired them.

          We have yet to see the next generation of that sort of software. System Initiative may be the sole exception, and I am enthusiastic about the demos they put out. But I do think we’re missing something here.

        2. 4

          Random sidenote: I wish there were standard shortcuts or aliases for frequently typed commands. It’s annoying to type systemctl daemon-reload after editing a unit - e.g. why not systemctl dr? Or debugging a failed unit: journalctl -xue myunit seems unnecessarily arcane, why not --debug or something friendlier?

          1. 5

            I’m using these:

            alias sc="sudo LESSSECURE_ALLOW=lesskey SYSTEMD_LESS='$LESS' systemctl"
            alias jc="sudo LESSSECURE_ALLOW=lesskey SYSTEMD_LESS='$LESS' journalctl"
            

            This is shorter to type, completion still works, and I get my less options.

            1. 3

              Typing this for me looks like sy<tab><tab> d<tab> - doesn’t your shell have systemd completions?

              1. 1

                It does, but what you describe doesn’t work for me.

                $ systemctl d
                daemon-reexec  daemon-reload  default        disable
                
                1. 2

                  What doesn’t work? In any modern shell, when you are here and type tab twice, you will get to daemon-reload. Ex: https://streamable.com/jdedh6

                  1. 1

                    Your shell doesn’t show a tab-movable highlight when such a prompt appears? If so, try that out. It’s a very nice feature.

                2. 3

                  journalctl -u <service> --follow is equally annoying

                  1. 15

                    journalctl -fu

                    1. 3

                      My favorite command in all of Linux. Some daemon is not working? F U, Mr. Daemon!

                      1. 2

                        so this does exist - I could swear I tried that before and it didn’t work

                        1. 19

                          I wasn’t sure whether to read it as short args or a message directed at journalctl.

                          1. 1

                            Thankfully it can be both! :)

                          2. 1

                            You gotta use -fu not -uf; nothing makes you madder than having to follow some service logs :rage:

                            1. 13

                              That’s standard getopt behaviour.

                              1. 2

                                Well, I guess fu rolls off the tongue better than uf. But I remember literally looking up whether there was anything like -f and having issues with that. Oh well.

                        2. 3

                          Would it be “too clever” for systemd to watch unit files for changes and reload the affected units automagically when they change?

                          1. 13

                            I’m not sure it would be “clever”. At best it would make transactional changes (i.e. changes that span several files) hard, at worst impossible. It would also be a weird editing experience when just saving activates the changes.

                            1. 2

                              I wonder why changes should need to be transactional? In Kubernetes we edit resource specs—which are very similar to systemd units—individually. Eventual consistency obviates transactions. I think the same could have held for systemd, right?

                              1. 6

                                I wonder why changes should need to be transactional

                                Because the services sd manages are more stateful. If sd restarted every service the moment its on-disk unit file changed [1], desktop users, database admins, etc. would have a terrible experience.

                                [1] say during a routine distro upgrade.

                          2. 3

                            Shorter commands would be easier to type accidentally. I approve of something as powerful as systemctl not being that way.

                            Does tab completion not work for you, though?

                          3. 30

                            Rust’s compile times are slow. Zig’s compile times are fast.

                            TBH, I’d take this with a grain of salt. Zig’s compile times are not that fast yet (as of Zig 0.13.0). For example, rebuilding TigerBeetle after a trivial change in Debug takes 7 seconds:

                            matklad@ahab ~/p/tb/work ((1af79195))
                            λ ./zig/zig build
                            matklad@ahab ~/p/tb/work ((1af79195))
                            λ vim /src/vsr/replica.zig
                            matklad@ahab ~/p/tb/work ((1af79195))
                            λ git diff
                            diff --git a/src/vsr/replica.zig b/src/vsr/replica.zig
                            index cbd54adfc..529d09e00 100644
                            --- a/src/vsr/replica.zig
                            +++ b/src/vsr/replica.zig
                            @@ -7357,7 +7357,7 @@ pub fn ReplicaType(
                                         // Using the pipeline to repair is faster than a `request_prepare`.
                                         // Also, messages in the pipeline are never corrupt.
                                         if (self.pipeline_prepare_by_op_and_checksum(op, checksum)) |prepare| {
                            -                assert(prepare.header.op == op);
                            +                assert(prepare.header.op != op);
                                             assert(prepare.header.checksum == checksum);
                            
                                             if (self.solo()) {
                            matklad@ahab ~/p/tb/work ((1af79195) *)
                            λ /usr/bin/time ./zig/zig build
                                    7.42 real         7.33 user         0.37 sys
                            

                            That feels like it’s way too high. And, of course, this is mostly due to LLVM (and, upstream of that, due to monomorphisation).

                            I think, as of today, the bigger difference for a typical project would be not the language choice, but rather how carefully, with respect to compile times, the project code is written.

                            Though, Zig is definitely positioned to compile snappily eventually, both culturally (compilation speed is a main concern, and folks are not afraid of replacing LLVM) and architecturally (Zig intentionally doesn’t try to fit the compilation model of the 70s, where the main problem was that the source code didn’t fit in RAM).

                            1. 7

                              I think Richard tasted the Zig custom backend juice already and wants more.

                              Or, more seriously put, he does mention custom backends in the Gist, so I’m guessing he’s using that as the point of comparison.

                              1. 6

                                Yeah, I revised the wording to clarify that I’m talking about Zig features that aren’t stable yet. 😄

                                We only have like 10K lines of Zig for the standard library, and the new compiler stuff is even less than that (so far). The bet is that those features will have stabilized before our Zig code base gets so big that we’d have painful compile times without them.

                                In contrast, there isn’t anything comparable on the Rust roadmap in terms of performance improvements. The Cranelift backend has been WIP since 2019, when Roc had its first line of code written, so although I do expect it will land eventually, it’s not like Zig’s x86_64 backend, which is (as I understand it) close enough to done that it may even land in the next release in about a week - and which, of course, will be significantly faster than a Cranelift backend would be. (And as I understand it, an aarch64 backend is planned after x86_64 lands - to say nothing of incremental compilation etc.)

                              2. 3

                                Yes, I haven’t understood the take that Zig is faster than Rust. For now, a Hello, world! comparison favors Rust, for instance. People probably need to be a bit more specific than that.

                                1. 13

                                  It’s absolutely clear why Zig should be significantly faster to compile (assuming non-distributed builds), once all the things in the pipeline are finished:

                                  • The big one is a sane linking model. The Rust and C++ approach to compiling generic containers is absolutely an emperor-without-clothes situation. It’s a gigantic waste to translate Vec<usize> in every compilation unit just to have all but one copy eliminated by the linker in the end.
                                  • I think lazy compilation would likely help a lot, but I don’t have hard numbers here.
                                  • Parsing in Zig is embarrassingly parallel, while in Rust it requires name resolution, macro expansion, and, in the case of procedural macros, arbitrary code execution.
                                  • I think comptime reflection fundamentally creates less work for the compiler than syntactic macros.
                                  • The compiler is architected with performance as a primary goal, rather than a nice-to-have.
                                  • Things like hot-patching of binaries are on the roadmap.

                                  It’s just that, until you get rid of LLVM, you can’t actually measure the speed of everything else. Maybe codegen is fundamentally so slow that you don’t have to really optimize everything else, besides the linking model, but I am 0.7 sure that Zig+native backend will run circles around Rust+cranelift.

                                  1. 4

                                    I have read your blog articles since you started working in Zig and have been super curious about the process — hence why I embraced it as well. I agree that it’s expected that Zig should be faster (I think I mention it in my article). In the end, Rust has much more work to perform (whether we talk about proc-macro or not; think all the static checks it does regarding ownership, for instance).

                                    As mentioned as first sentence of my article, it’s a love-hate relationship, and it’s pretty hard to fully appreciate the language right now because of all the holes it has everywhere.

                                2. 2

                                    Our Rust compiler implementation could definitely compile a lot faster. A huge contributor to the compile times is the fact that it grew organically. If we were to rewrite it again in Rust, I’m sure it would compile a lot faster.

                                    These times are from an M1 Mac. They are approximately the same as on an Intel i7 Linux gaming laptop (it used to be that the M1 was way faster; not sure when they became even). All of the below is just building the roc binary. Building tests and other facilities is much, much worse (and we already combine many test binaries to reduce linking time, though we could do it more).

                                    After changing something in the CLI (literally zero dependencies and the best case possible):

                                  Finished dev [unoptimized + debuginfo] target(s) in 4.15s
                                  

                                  After changing something insignificant in the belly of the compiler:

                                  Finished dev [unoptimized + debuginfo] target(s) in 16.95s
                                  

                                  And for reference, clean build:

                                  Finished dev [unoptimized + debuginfo] target(s) in 1m 58s
                                  

                                  And for reference, rebuilding tests (just build, not execution) after the same belly of the compiler change:

                                  Finished test [unoptimized + debuginfo] target(s) in 33.55s
                                  
                                  1. 10

                                    I don’t think you’d necessarily need to rewrite it. I faced similar problems in the past with the compiler for Inko and found that moving a bunch of code into separate crates helped improve compile times dramatically. I also aggressively reduced the amount of dependencies (as much as that’s possible in a Rust project at least), which also helps.

                                    To provide some numbers: a clean debug build on my X1 Carbon (which has a i5-8265U) takes about 24 seconds, while a release build takes 45 seconds. An incremental build for a trivial change takes about 2-3 seconds at most. That’s for about 80 000 lines of Rust code, including tests.

                                    In short, a rewrite can help but I suspect there are easier ways to improve compile times that don’t require a full rewrite.

                                    1. 3

                                      While compile times are a factor, I think that:

                                      1. We already know we need to rewrite much of the compiler for other reasons (correctness, robustness, maintainability).
                                      2. We generally find Zig makes better tradeoffs for writing a compiler like Roc than Rust, so we would prefer to use it.
                                      3. We have tried to move the needle on compile times multiple times in the past, and it has not been fruitful.

                                      Also, we have 300k+ lines of Rust currently. So it may partially be a scale thing.

                                      1. 2

                                        Last I checked, the Rust compiler was much larger than that.

                                        (I have no opinion on a rewrite other than I think it would be cool.)

                                      2. 2

                                        I would love to know what’s causing this, but that’s probably a lot of work to profile. Are you using a faster linker than the default one, like mold?

                                        1. 2

                                          Linking is definitely a bottleneck when building tests. We have seen no gains with mold over lld, but see significant gains with lld over system linkers.

                                          That said, even ignoring linking, we still have a pretty heavy rust compilation loop. I think part of it is bad structure leading to a lot of pieces recompiling for minor changes (though some of it is likely more fundamental).

                                          Also, I would assume the biggest gains will be when we can use zig’s self hosted backends.

                                          1. 1

                                            We tried mold and unfortunately it didn’t move the needle noticeably; we didn’t even bother to add it to CI or to write a readme note about how to configure mold for local development.

                                            1. 1

                                              I can somewhat echo these findings: using lld or mold over the Linux system linker can have a dramatic improvement, but the difference between e.g. lld and mold is quite small and seems to depend greatly on the project in question. I also could’ve sworn the difference used to be bigger (as in mold being faster than lld), so perhaps lld’s performance has improved recently. For example, if I compile Inko’s test suite without optimizations (producing 1300 object files in the process), the link timings are as follows:

                                              • GNU ld: 0.92-1.0 seconds (it varies slightly between runs)
                                              • lld: 0.21 seconds
                                              • mold: 0.26 seconds
                                        2. 1

                                          Zig intentionally doesn’t try to fit the compilation model from the 70s, where the main problem was that the source code doesn’t fit in RAM

                                          I’d like to understand this better. How does Zig differ in this regard, as compared to, say, Rust or Go?

                                          1. 24

                                             The way C compilation works is that you compile each individual .c file, in isolation, into an object file containing machine code, and then combine the resulting set of object files into a single exe using a linker.

                                             This compilation model is incompatible with generic programming / templates / monomorphization. If you have a function with a type parameter in a.c, you can’t compile this function down to machine code until you see the actual type it is being used with. E.g., if you have fn mystery<T>(t: T), you can’t really compile mystery, you can only compile mystery::<i32>.

                                             Languages like Rust and C++ want to re-use the C compilation model, but they also have monomorphization. So, the way they do this is via redundant compilation and linker-stage pruning. If two compilation units use mystery::<i32>, then it will be compiled twice, in the context of each compilation unit. When linking the object code of the two corresponding units together, the redundant copy is eliminated.

                                            Zig solves this by not compiling things in isolation. Everything is compiled at the same time, so monomorphizations are globally deduplicated.
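
                                             As a concrete (hypothetical, single-file) Rust sketch of the monomorphization side of this: the generic function itself is never lowered to machine code, only its concrete instantiations are - and under the C-style model, every compilation unit using the same instantiation codegens its own copy for the linker to deduplicate.

                                             ```rust
                                             use std::fmt::Debug;

                                             // No machine code is emitted for `mystery` itself; it is a recipe.
                                             fn mystery<T: Debug>(t: T) -> String {
                                                 format!("{:?}", t)
                                             }

                                             fn main() {
                                                 // Each distinct T stamps out a separate monomorphized copy:
                                                 let a = mystery(42i32); // instantiates mystery::<i32>
                                                 let b = mystery("hi");  // instantiates mystery::<&str>
                                                 println!("{a} {b}");
                                             }
                                             ```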

                                            1. 2

                                              Very clear, thank you.

                                            2. 16

                                               In my understanding, which may be wrong, it’s basically that Zig does two major things differently in this area.

                                               The first is that it doesn’t support separate compilation. You compile your whole program, rather than compiling some code into libraries that you then link against other code.

                                              The second is that they do what the JavaScript folks call “tree shaking,” which is like dead code elimination but in reverse. That is, they look for live code, and compile only that, rather than compiling everything and then looking for things that aren’t referenced and can be removed.

                                            3. 1

                                              I think, as of today, the bigger difference for a typical project would be not the language choice, but rather how carefully, with respect to compile times, the project code is written.

                                              I do think though that for compilers and stuff like that Zig makes it easier to write code that it compiles first, compared to Rust.

                                              1. 7

                                                I would think so, as, it seems to me, Zig would be more economical with its abstractions.

                                                 Though, both Rust and Zig make it rather easy to step into the excess-monomorphisation trap. At least with TigerBeetle we are definitely already in the territory where we monomorphise a bit too much, and we need to pare that back a bit (not because we’ve hit excessive compile times or binary size already; it’s just that it makes sense to virtualize a couple of things, we didn’t get to it yet, and what lay on the path of least resistance originally was the bloaty solution).

                                                1. 1

                                                  I do think though that for compilers and stuff like that Zig makes it easier to write code that it compiles first, compared to Rust.

                                                  Could you elaborate?

                                                  1. 1

                                                     I meant to say “compiles fast”. As for why I think Zig makes it easier: Zig does not care much about memory safety but gives you explicit control over memory instead. It makes writing very generic code less convenient than Rust does, so you do less of that.

                                                    But as matklad pointed out, you can also fall into a monomorphisation trap in Zig. From my limited exposure to Zig I find that to be slightly less trappy though.

                                                    1. 4

                                                       Seems unlikely that upholding memory safety rules (data-flow analysis + type constraints) would be the biggest distinguisher in compile times between the two. Usually, it’s processes that require slow or re-attempted evaluation, like macros, comptime, and generics. Or uncached/serialized code generation (addressed by codegen-units, incremental compilation, and faster linkers).

                                                       It’s also pretty common to write generic code in Zig. For example, std.io contains type-based stream wrappers similar to Rust Iterator composition. And almost all TigerBeetle components are generics for DST/mocking.

                                                      1. 3

                                                         I think it’s pretty easy to avoid overusing generics in Rust too. And if you need something heavily generic from a library, you can use the ugly trick of instantiating all the copies you need in a separate crate, so they can’t accidentally affect your incremental builds too much.

                                                        Since using lots of generics is something you have to choose to do, rather than something you might do by accident, I feel like you can always just choose to not use lots of generics. Or choose to use more trait objects and fewer monomorphized generics.

                                                        1. 1

                                                          Or choose to use more trait objects and fewer monomorphized generics.

                                                           I don’t know all that the Rust compiler is capable of, so I’m hoping somebody can confirm my understanding. My understanding is that trait objects are kind of like a Box<>, hence kind of like a vtable pointer, so calling a method on a trait object is “chasing a pointer”. Is that always the case? Or are there cases where the compiler makes other choices? Just curious.

                                                          1. 4

                                                             Yes, a &dyn Foo or Box<dyn Foo> has one pointer to the object and one pointer to a vtable. Calling a method needs to load the address of the function from the vtable. It’s not too bad, though, at least in my opinion.
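
                                                             A minimal sketch of the two dispatch styles (the trait and type names here are made up): the generic function is monomorphized per concrete type, while the dyn version is compiled once and calls through the vtable.

                                                             ```rust
                                                             trait Greet {
                                                                 fn hello(&self) -> String;
                                                             }

                                                             struct En;

                                                             impl Greet for En {
                                                                 fn hello(&self) -> String {
                                                                     "hello".to_string()
                                                                 }
                                                             }

                                                             // Static dispatch: a separate copy is codegen'd per concrete G.
                                                             fn greet_static<G: Greet>(g: &G) -> String {
                                                                 g.hello()
                                                             }

                                                             // Dynamic dispatch: compiled once; `&dyn Greet` is a (data pointer,
                                                             // vtable pointer) pair, and the call loads the function address
                                                             // from the vtable at runtime.
                                                             fn greet_dyn(g: &dyn Greet) -> String {
                                                                 g.hello()
                                                             }

                                                             fn main() {
                                                                 let e = En;
                                                                 // Both arrive at the same method; only the dispatch differs.
                                                                 assert_eq!(greet_static(&e), greet_dyn(&e));
                                                             }
                                                             ```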

                                                2. 3

                                                  A little bit of copy-paste, and a little dash of build it yourself. That’s one way to fight dependency bloat.

                                                  I wonder what other tools we have in our toolbox, though? The newer tools like cargo and go get make it easy to fetch and update dependencies, but what about vendoring? Specifically I mean managing forks, either with patches or not.

                                                  Adding a feature to an existing dependency is sometimes (not always, but sometimes) a way to avoid importing that feature from an additional dependency. While this workflow is technically supported in many ecosystems, it’s kind of scary:

                                                  • Adding a patch to a dependency is a gamble, because upstream might not ever accept it.
                                                  • Vendoring third-party code is kind of goofy and ad hoc, and not a first-class citizen in any place I’ve worked.

                                                  At an old job, every Go module came from a fork in a technical sense, because the entire ecosystem was mirrored into our on-prem Artifactory instance. But if we wanted to patch, we had to make another VCS repo, and that fork was kind of a “swapped” dependency, not a patched version of the same thing.

                                                  It’s a bit ironic, because vendoring/forking and pushing contributions upstream ought to be the default, but it feels like a neglected workflow, at least where I’ve worked.

                                                  1. 2

                                                    I wish people chose native alternatives to Electron, but that’s going to be a tough sell until the next generation of ground-up native UI frameworks start to mature. Real ones that can claim to compete with Qt and GTK, but with modern guts written in Rust, Zig, modern C++, whatever.

                                                    That’s a lot of work, but it’s going to happen, because there is a lot of value and a lot of dare I say business agility in shipping lean code: you can deploy more places, faster. I think you’re going to see more products like Slint, and then you’ll continue to see the OSS do-it-yourself versions of those.

                                                    1. 6

                                                      This is one of those Git features where unfortunately some commands do the same thing in orthogonal ways:

                                                      • git commit -s adds Signed-off-by: $name <$email>
                                                      • git cherry-pick -x adds (cherry picked from commit $hash)

                                                      Would be great if they’d chosen Cherry-picked-from: $hash instead! I wonder which came first.

                                                      1. 3

                                                        Agreed.

                                                        On the one hand, it’s undeniably useful. Along with the associated trailer.* config options, it’s a way to impose a little bit of structure on the raw strings you append to the commit message.

                                                        % printf "Add pizza recipe\n\nPizza-sauce-type: gravy" | git interpret-trailers --parse
                                                        Pizza-sauce-type: gravy
                                                        

                                                        But it seems like there’s no support for using these kinds of flags to append data to git-notes, which seems like an oversight. All that “append if exists” logic would be useful there, too.

                                                        1. 3

                                                          SoB probably came first, as it resulted from the SCO lawsuit.

                                                        2. 11

                                                          Let me know if anybody was surprised by this result.

                                                          1. 9

                                                            The people whose posts regularly show up on my LinkedIn home page would like to have some words.

                                                            1. 7

                                                              LinkedIn douchebags: “AI can cure cancer.”

                                                              Correct reply: “Then why hasn’t it, motherfucker?”

                                                              1. 2

                                                                  Unfortunately, being disrespectful on the social credit system that is LinkedIn can hurt my life outcomes, so I hold my tongue.

                                                              2. 5

                                                                You and I both have that same experience. Frankly, I’m quite surprised that any of us useless humans are left working at this point, because my understanding (from reading LinkedIn) is that these fancy auto-complete engines can now do everything humans can do, but better … at everything!

                                                            2. 12

                                                              I used wscons in OpenBSD to make SerenityOS’s GUI run on top of OpenBSD’s kernel. The actual changes were 1 2 3 (adding DRM/GPU support) and 4.

                                                              I am now sad that SerenityOS went nowhere :(

                                                              1. 5

                                                                I am now sad that SerenityOS went nowhere :(

                                                                Aw, come on. You ought to be more charitable and optimistic! :) GitHub shows the last commit as 5 hours ago. No, you can’t daily drive it and log into your bank’s website yet, but this is a marathon, not a sprint.

                                                                1. 3

                                                                  I am now sad that SerenityOS went nowhere :(

                                                                  I haven’t really been keeping up on things, but I wasn’t aware SerenityOS had stalled. Has the project really slowed down since Ladybird broke off?

                                                                2. 20

                                                                  I think one thing that this person misses is what happens when AI starts eating its own emanations. I predict models will get continually worse as more and more content is generated by AI. Mad AI disease.

                                                                  1. 7

I think this idea - of “model collapse” - is almost entirely science fiction. The one paper everyone cites about this demonstrates it with a 2022-era, 125M-parameter model.

                                                                    People really want to believe it because it’s such a deliciously poetic way for AI to destroy itself! Doesn’t make it true though.

                                                                    Today most of the leading models have been deliberately trained on synthetic data. I wrote about that here.

                                                                    More to the point, the idea that AI models will deteriorate presumes that the AI labs are incompetent: that they aren’t measuring the quality of their models and wouldn’t notice if the training data was causing models to get worse.

                                                                    1. 2

                                                                      that the AI labs are incompetent: that they aren’t measuring the quality of their models and wouldn’t notice if the training data was causing models to get worse.

                                                                      There’s a paper from December titled Training on the Test Task Confounds Evaluation and Emergence. The author summarized in a Tweet:

                                                                      “After some fine-tuning, today’s open models are no better than those of 2022/2023.”

                                                                      1. 1

                                                                        I don’t fully buy that. That’s an interesting paper but I’m not convinced that today’s models are equivalent to the 2022/2023 ones if you fine-tune everything on the same data - it doesn’t match the vibes-based evaluations I’ve been doing myself over the past year.

                                                                        If a bunch of other sources and benchmarks and papers show similar results I’ll start taking it more seriously.

                                                                    2. 3

That’s under the assumption that AI companies will feed raw Web data to train their models. I predict they’ll stop doing that, and lean on the reputation of the source, curated datasets, and throw even more machine learning at the problem to filter the data.

                                                                      Also note that the stupid slop that Google prints is especially stupid because it must be cheap and fast to generate. Expensive, slower models are less gullible, and can be used to filter the inputs.

                                                                      1. 3

                                                                        …At which point AI becomes a very expensive, poor quality Cliff’s Notes for Wikipedia?

                                                                        1. 1

                                                                          It’s already stupidly expensive, but curation of datasets is nothing new. I think it’s safe to assume that the current state-of-the-art models already use LLMs (among many others) to process the dataset.

                                                                        2. 3

                                                                          lean on reputation of the source, curated datasets, and throw even more machine learning at the problem to filter the data

                                                                          Well gosh durnit I could really use a search engine for those!

                                                                      2. 16

Sad to see they’re wasting money on this fad instead of focusing on the core product.

                                                                        1. 13

                                                                          Remember when Mozilla was gonna make a mobile OS, and then an app store, and then a link aggregator, and then whatever other nonsense bandwagon they jumped on? I like, and use, Firefox, but the Mozilla organization is BROKEN.

                                                                          1. 5

                                                                            Am I the only one who kind of wishes Firefox OS had worked out? It was only ever a flicker of possibility, but it seemed like a good long-term investment that could leverage the kind of development they were good at.

                                                                            1. 3

                                                                              It would have been good for the world, but it was never going to work out, it wasn’t even worth betting on, IMO. Mozilla shouldn’t chase fads, they can’t afford it, and it distracts from the things they already do well.

                                                                          2. 4

Sad to see they’re wasting money on this fad instead of focusing on the core product.

                                                                            They’d have to figure out a way to make the core product make money first.

                                                                            1. 3

                                                                              I don’t see how this will make them any money though, considering it’s a very compute-intensive free service that very much straddles the line when it comes to being useful.

                                                                              1. 1

                                                                                Doesn’t the majority of Mozilla’s revenue come from Firefox, by selling the right to define the default search engine?

                                                                                1. 3

Yeah, and most of Mozilla’s developers do work on Firefox. But such a non-diverse income stream is risky. They are quite beholden to Google, or the advertising industry in general. There is currently the very real possibility that such search engine deals will become illegal in the US, which would pretty much kill Mozilla overnight.

                                                                                  If anything, they ought to be pouring way more resources into diversifying their income. People online get quite upset at the fact that Mozilla gets most of its money from Google, but for whatever reason even more people get even more upset whenever Mozilla does anything other than develop Firefox.

                                                                                  1. 1

                                                                                    on the other hand there’s no guarantee that any attempt to diversify income pays off. it could just amount to a waste of resources that could have been saved to sustain the core product in the event of a crisis, legal or otherwise.

                                                                                  2. 2

                                                                                    Yes. It’s uppercase Majority and measured in billions. The problem is that it’s single line items.

                                                                                    1. 2

                                                                                      One can only imagine how healthy Firefox would be if they had created an endowment.

                                                                                      1. 1

                                                                                        If we’re talking about yearly revenue, in 2023 it was $653 million, so not measured in billions at the moment.

                                                                                  3. 3

Doing it as an extension allows them to corral the bullshit so that people have to opt in to it. And I’m sure there are plenty of people for whom the lack of AI would be a deal breaker.

                                                                                  4. 4

                                                                                    Ask yourself: if you couldn’t use Docker to accomplish task X, how would you do it?

                                                                                    1. 18

                                                                                      This post isn’t exactly wrong, but the framing is objectionable.

The message of this post is “do what you’re told”. Again, this isn’t exactly bad advice, but it doesn’t have a lot to recommend it. It leans on a structural tautology: Bosses decide what value is, and they decide what features are. Executing on that, or at least appearing to do so, is “delivering business value”. You have delivered business value, by definition, if bosses say you have.

                                                                                      That’s true, I suppose.

                                                                                      1. 4

Yeah, I’m surely biased by having only worked at a startup where individuals happen to be given a lot of autonomy and ownership, but this seemed bonkers to me. “Cultivate frequent feedback” is literally one of my company’s values. ICs are heavily involved in quarterly planning exercises by design and are expected to surface ideas and concerns about direction/what we should be working on/etc. at all times (not just at planning time). “Strategic thinking” is a performance metric. And to be clear, this structure regularly results in better outcomes.

                                                                                        The idea that management just hands down edicts about what’s valuable, and then ICs do it without there being any kind of collaboration or input, is completely foreign to me. If you’re not doing this, why are you bothering to hire smart people? (Unless you’re not, in which case… okay, I guess?)

                                                                                        That all being said, I can understand how at a certain organizational scale this may become more and more difficult to sustain, and perhaps even verge into naivety. But it’s working well so far at $dayjob.

                                                                                        1. 4

                                                                                          I feel like this is, at best, advice for junior to mid-level engineers in large organizations. For smaller organizations, and for senior/staff/principal engineers in large organizations, part of the expectation is usually that you are considering the business-level objectives in the work you do. That means knowing when glue work is necessary and executing on it if so. I don’t really know of any orgs that want their staff engineers to just be cogs in the machine, at least on paper.

                                                                                        2. 4

                                                                                          In 2025 they have a goal:

                                                                                          Complete system logging overhaul

                                                                                          I am interested to see what they do here, and how it will integrate with dinit. The logging situation is my biggest gripe with non-systemd systems.

                                                                                          1. 1

                                                                                            The logging situation is my biggest gripe with non-systemd systems.

                                                                                            How do you handle this now without systemd?

                                                                                            1. 1

I run a personal Void Linux server, and for that I install the nanoklogd and socklog packages:

                                                                                              https://docs.voidlinux.org/config/services/logging.html#socklog

I don’t need to look at the logs often, but when I do there’s a little re-learning of where everything is. I am very interested in something that integrates deeply with dinit.

                                                                                              On my Void Linux laptop I don’t even bother to install a logging daemon.
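For anyone wanting to replicate that Void setup, the linked docs boil down to installing the socklog-void package and linking its runit services into place (run as root; this is from my recollection of those docs, so double-check them):

```
xbps-install -S socklog-void          # provides the socklog-unix and nanoklogd services
ln -s /etc/sv/socklog-unix /var/service/
ln -s /etc/sv/nanoklogd /var/service/
# Logs then land under /var/log/socklog/, one directory per facility.
```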

                                                                                          2. 4

                                                                                            For those of us unfamiliar with it, what is Chimera Linux?

I read their “about” pages and learned basically that it’s a Linux distribution not based on other Linux distributions that aims to re-implement things they think other distributions do poorly. But that doesn’t tell me anything about its actual strengths.

                                                                                            1. 14

                                                                                              It’s the Linux kernel with BSD userland instead of GNU, musl instead of glibc, and LLVM instead of gcc. There’s apk for packages and systemd is supported.

                                                                                              1. 7

It’s not correct that systemd is supported; Chimera uses dinit for init and service management.

There are some individual systemd tools in use, such as sysusers and tmpfiles, and systemd-udevd is used as well.

                                                                                                Currently logind is used but the intent is to extend turnstile to fully replace it.
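For context, sysusers and tmpfiles are tiny declarative formats, which is what makes this piecemeal adoption practical: a daemon’s system user and runtime directory get declared rather than scripted. The entries below are illustrative only (the names are made up, not taken from Chimera’s packages):

```
# /usr/lib/sysusers.d/myapp.conf — declare a system user
# Type  Name   ID  GECOS
u       myapp  -   "My App daemon"

# /usr/lib/tmpfiles.d/myapp.conf — declare a runtime directory
# Type  Path        Mode  User   Group  Age  Argument
d       /run/myapp  0750  myapp  myapp  -    -
```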

                                                                                                1. 5

                                                                                                  musl instead of glibc

                                                                                                  As well as mimalloc (a high performance malloc) instead of musl’s built in malloc (mallocng, which has somewhat lackluster performance).

                                                                                                  1. 3

                                                                                                    Thanks for the explanation, I originally confused this with ChimeraOS, a gaming-focused Linux distribution.

                                                                                                  2. 3

I’m still trying to see big end-user changes here; most of the changes are rather “internal”. BSD user space, yet another systemd replacement. I mean, people have been complaining about “GNU bloat” since the 90s. I distinctly remember “Mastodon Linux”, where the author also didn’t quite like the change from a.out to ELF. There also seems to be some Void Linux DNA here, one of the more prominent musl-based distributions.

                                                                                                    While I appreciate lean projects, I’m not sure whether that really matters at all here. Given what you’ve got with the kernel and the desktop and browser stack, shaving off a few glibc/coreutils options barely matters. Reading the site, the authors seem quite pragmatic (as compared to e.g. the suckless people, the most “prominent” torchbearers of minimalism for minimalism’s sake).

                                                                                                    But again, not quite sure how much of that will be visible in the end.

                                                                                                    1. 7

                                                                                                      Chimera isn’t really in the “minimalist” camp, but could perhaps be in the “power user” camp. And, like Void, they do have a little bit of that BSD vibe where you’re installing a full system of components and there’s some opinions on how they fit together.

                                                                                                      One cool thing is the cports system, which is also reminiscent of Void Linux. But it uses Python instead of shell, and is very readable. Look how nice this is: https://github.com/chimera-linux/cports/blob/master/user/incus/template.py

                                                                                                    2. 2

Isn’t it the Linux kernel with a BSD userland?

                                                                                                      1. 2

I actually looked for the same thing, but trying to do new things can be a goal in and of itself. Maybe there are some unknown-unknown advantages of this tech stack.

                                                                                                      2. 7

                                                                                                        Glad to see this released! I’m excited about libghostty.

                                                                                                        Maybe a random technical detail, but I browsed around the source, and noticed Wuffs:

                                                                                                        https://github.com/ghostty-org/ghostty/tree/main/pkg/wuffs

                                                                                                        Is the Wuffs png decoder used in Ghostty? Or some other component?

                                                                                                        I couldn’t quite tell from looking at the source. I was expecting to see a bunch of C code generated from Wuffs code there, but it looks like all Zig.

                                                                                                        1. 7

It is used for PNG decoding in the Kitty graphics protocol.

                                                                                                          1. 4

                                                                                                            Oh wow, looks like you’re relying on C-to-Zig translation not for only the headers but for the implementation as well. That’s fun.

                                                                                                        2. 1

                                                                                                          Colemak-DH with caps-lock as escape rather than backspace. Pressing both shifts switches back to QWERTY so coworkers can type on my machine. Holding Alt Gr (right of spacebar) with my thumb gives me QWERTY temporarily so I can still use HJKL directional keys.

                                                                                                          I don’t care much about typing speed and changed for comfort reasons. I was finding it too hard and uncomfortable to learn touch typing QWERTY using all ten fingers, and learning a new layout was fun. I am glad I can still do QWERTY even after 3 or 4 years off it, possibly because it’s on my phone. I almost never need it, but good to know it’s still there.

                                                                                                              xkb_layout gb,gb
                                                                                                              xkb_variant colemak_dh,basic
                                                                                                              xkb_options grp:shifts_toggle,grp:switch,caps:escape_shifted_capslock,compose:menu,compose:rctrl
                                                                                                          
                                                                                                          1. 2

This is my layout, too. I have been typing Colemak-DH for almost 2 years.

                                                                                                            My qwerty skills have disappeared, though. All I can touch type now are a few login passwords.

                                                                                                          2. 7

                                                                                                            Another ircd with different advantages: https://robustirc.net/

                                                                                                            We love distributed consensus algorithms at work and this uses raft to prevent netsplits and provide high availability. It even has a client side irc proxy to prevent logging in and out all the time without a bouncer.

                                                                                                            It doesn’t have some of the modern features that this does (like history), but we are using it at Google, so there is active development happening.

                                                                                                            1. 4

                                                                                                              As someone forced to use Google Chat for work, I am irrationally mad at you by proxy.

                                                                                                              But really I’m more surprised and curious? What are you using IRC for? Is it officially supported or merely tolerated? Presumably it has to run on company infra, so how does that work?

                                                                                                              1. 1

                                                                                                                It looks like a normal ircd wrapped under a specialized aan(8).

                                                                                                                aan(8): https://plan9.io/magic/man2html/8/aan

                                                                                                                1. 1

                                                                                                                  Most companies just need basic bash / make knowledge, a single instance SQL processing engine (DuckDB, CHDB or a few python scripts), a distributed file system, git and a developer workflow (CI/CD).

                                                                                                                  What are some examples of distributed file systems here? Are we talking about Hadoop, Ceph, or even NFS?

                                                                                                                  1. 2

                                                                                                                    From experience: I’d probably lean on something like AWS EFS / Azure Files / Google Cloud Filestore or just go to pure object store. For on-prem, something like SeaweedFS or Ceph, yeah.
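The quoted “single instance SQL processing engine” pattern needs nothing exotic. Here is a minimal stdlib sketch (sqlite3 standing in for DuckDB; the table and column names are made up) over data that could live on any of those shared filesystems:

```python
import csv
import io
import sqlite3

# Pretend this CSV was read off the shared filesystem (EFS/Ceph/NFS/...).
RAW = "region,amount\neu,10\nus,25\neu,5\n"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")

# Load the rows as dicts so the named placeholders below line up by column.
rows = list(csv.DictReader(io.StringIO(RAW)))
conn.executemany("INSERT INTO sales VALUES (:region, :amount)", rows)

# A single-node aggregate query, no cluster required.
totals = dict(conn.execute("SELECT region, SUM(amount) FROM sales GROUP BY region"))
print(totals)  # -> totals == {'eu': 15, 'us': 25}
```

DuckDB mostly replaces the manual CSV loading step here (it can query files in place); the shape of the workflow is the same.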

                                                                                                                  2. 5

                                                                                                                    Alex is right that there’s a lot of bloat in JS on the Web, but I think he has one huge blind spot: tracking and advertising.

Good ol’ plain HTML navigations can even be faster than navigations via an SPA. They can be so fast that there’s no visible flash of the page being reloaded. Modern HTTP-level compression makes full HTML and a JSON of the same content mostly the same size, but HTML can be streamed and immediately rendered incrementally, without waiting for a JS framework to assemble it.

                                                                                                                    BUT, this really fast HTML rendering only works if there are no blocking scripts. It definitely does not work when a page has hundreds of blocking scripts.
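To make the “streamed and immediately rendered” point concrete, here is a minimal WSGI sketch (the function and data names are made up) that flushes the page shell before the slow data is ready:

```python
def slow_rows():
    # Stand-in for a slow data source (DB query, upstream API, ...).
    for i in range(3):
        yield f"item {i}"

def app(environ, start_response):
    # Stream the page: the browser can parse and paint the <head> and the
    # first paragraphs while later chunks are still being generated.
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
    def chunks():
        yield b"<!doctype html><html><head><title>Streamed</title></head><body>"
        for row in slow_rows():
            yield f"<p>{row}</p>".encode()
        yield b"</body></html>"
    return chunks()

# Bytes hit the wire as soon as each chunk is yielded - and any blocking
# <script> in the head would stall exactly this incremental rendering.
```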

React has overhead, and is overused, but just look what a Tag Manager does to sites! Sites keep accumulating janky marketing tools that have been selected for having the best sales demo, not the best code. Sites get bogged down serving “ad creatives” made in a rush by the cheapest sub-sub-sub contractor of other people’s marketing agencies (there are ads that will download 300 JPEGs to work around the lack of autoplay videos). Nobody can optimize that, because thanks to the wonders of real-time bidding and “personalization”, nobody even knows what they’re serving. It just goes through a long, long chain of scripts from countless middlemen who have no power to improve anything, except data collection. The 3rd-party scripts pull even more scripts and iframes from their partners and affiliates, most of them wanting to scan the DOM. Each script must also come from a unique domain (in the ad biz you can’t trust anyone), so each one pays for the whole DNS/TCP/TLS dance for itself, making most HTTP/2 features pointless.

                                                                                                                    So in this reality where HTML needs to load tons of tracking scripts and ads, page reloads are expensive. The tracking scripts are so slow, that they make React seem fast in comparison, because React-based navigations don’t have to reload all the crappy trackers.

                                                                                                                    1. 1

                                                                                                                      I’m mystified why any organization goes for this stuff. It must seem like “free money” to let the trackers onto your website. But the old saying, “if it’s free, YOU are the product” honestly goes for the businesses, too.

                                                                                                                      The revenue from letting these slimy slugs slow down your pages cannot be that big in 2024. But it’s hard to justify taking it away once it’s there, I guess.

                                                                                                                      1. 3

                                                                                                                        The revenue from letting these slimy slugs slow down your pages cannot be that big in 2024.

                                                                                                                        That’s where you’re wrong. At $WORK, the majority of our revenue comes from advertisements. For a long time, that was over 95% of our income. That number is lower now both because of ad pricing dips and because we’ve put in a lot of work on getting subscription revenue (because of ad market volatility).

                                                                                                                        But you really should not underestimate the value of ads when you’re serving millions and millions of page views per month.

                                                                                                                        1. 1

                                                                                                                          Orgs have sales and marketing teams. They need tools to attract and direct prospects to the right sales funnel, and experiment and measure effectiveness of everything they do. They have their own sales targets and end-of-quarter crunch time, and don’t want to be blocked on external dev teams.

                                                                                                                          Similar thing for monetization and ad sales — they want to sell placements, one-off promotions, special tie-ins, and get all the data about what audience they have, what stuff worked, and don’t want to hear devs moaning about a roadmap when they’ve already signed a takeover deal to go live on Monday.

And there are SaaS companies that know exactly how to sell to such orgs: by requiring just a single <script> to paste in, which the non-dev teams can do themselves. These teams will paste in whatever crap it takes to build their workflows. They will end up depending on these scripts to get their job done. And these are the teams that bring money.