1.  

    Is it wrong of me to consider popularising a YYYY-DD-MM format to remove the unambiguity?

    It could be called “lobster format” to maximally offend.

    1. 5

      Please don’t.

    1. 12

      We go through this every time some language designer comes up with a new way of packaging and distributing code: CPAN, pip, gems, npm, crates, and it goes on and on. It seems like everybody likes re-inventing the distribution wheel.

      Short version is, RPMs and debs have been around for 25 years, and, while they were originally designed with C in mind, they’re flexible enough to incorporate programs written in any language.

      Yes, they have restrictions, and some of these restrictions are uncomfortable to people who do all their work in one or two particular languages, but distros like Debian and Redhat target people who just want to have a system that works out of the box, with programs written in many different languages co-existing in one reasonably coherent system. This is the problem distro packages like RPM and deb are trying to solve.

      I appreciate that if I’m working in ruby, I usually have something like rvm, and bundler, and other utilities for managing multiple ruby development environments in a sane way, and I appreciate that others like these tools for programming in their preferred environment.

      However, if I just want to install and use a script written in python (take, for example, “Terminator”, which is written in python), then as a distro I just want to be able to install the script and ensure that it works cleanly with the rest of the system. I don’t care about setting up a special environment, managing dependencies, or any of the other baggage that these other distribution methods involve.

      1. 10
        • apt doesn’t handle multiple versions of the same library well. You can do it by renaming packages and an indirection layer of virtual packages, but this is tortured compared to auto deduplicating dependencies according to semver.

        • People in charge of Linux package managers generally don’t care about Windows. There is nothing technically preventing RPM/deb from working on Windows, except that it “sucks” (for shits and giggles I’ve made cargo-deb Windows compatible, and it happily builds Windows-only deb packages).

          npm has gained huge traction as a method of distributing dev tools because it supports Windows as a first-class platform.

        • RPM/deb as an archive format doesn’t make that much of a difference. Rust/Python/JS could use .deb as their format; after all, it’s just a bunch of files bundled together. The real issue is where you publish to, and how other people find it. Linux distros set themselves up as gatekeepers, which has its own value, but it’s very different from npm/cargo, etc., where npm publish is a free-for-all with zero hassle, way easier than maintaining PPAs.

        Having written all that, I realize it all comes down to usability. The distinction between “it can do it” vs “it just works”.

        1. 17

          Short version is, RPMs and debs have been around for 25 years, and, while they were originally designed with C in mind, they’re flexible enough to incorporate programs written in any language.

          No, they aren’t. They don’t even work well for C++, and they have been hobbling the development of template libraries for over 10 years now because of how bad they are at handling API-stable-ABI-unstable libraries.

          1. 2

            What’s the problem? It’s been a while since I looked at these in Linux environments, but in the FreeBSD ports collections if you upgrade a C++ library, it bumps the PORTREVISION of all of the ports that depend on that library. The next time a package set is built, the packages use the new version of the library. The FreeBSD project explicitly moved away from building packages outside of package sets to address this kind of thing - it’s easy for things to get out of sync and computers are so fast now that building everything (30,000+ packages) on a single machine is feasible in about a day.

          2. 8

            I never want to care about “the rest of the system”. I want to write complete programs, not shells of programs that work if slotted into the right system.

            The more programs are like this, the less of a problem “a system that works out of the box” is to think about.

            1. 5

              I’d argue the packaging system put up by Linux distros is actually not flexible; what is flexible is the human sitting between the applications to be packaged and the Linux distribution, sinking hours into patching software to fit into the packaging system. If rpm/deb were actually flexible, we would not have these conflicts.

              It seems like everybody likes re-inventing the distribution wheel.

              I can say the same about the state of Linux distributions. It seems that I, as an application developer, can only rely on the feature set that is the intersection of all the distros’ package managers if I were to follow your advice.

              1. 2

                rpm/deb is fairly flexible. The reason you get these arguments is mostly because of distribution policies, not because deb/rpm can’t do it.

                https://lwn.net/Articles/843313/rss

                1. 2

                  I did not claim rpm/deb cannot deal with large bundled apps (that’s fairly trivial; curl|sh can do that too). I’m saying rpm/deb cannot deal with dependency management at the granularity npm/cargo can, and certainly not in an efficient manner. Kornel already replied with other examples of things rpm/deb can’t do.

              2. 7

                Then the distribution maintainers should solve that problem within the constraints of the language and ecosystem the tool was developed in. The language and ecosystem are not going to change nor should they. If RPMs and debs can handle the change then they should just package them and move on. Complaining that new ways of doing development and deployment make the job harder helps no one. Either the distributions will adapt or they will lose the next generation of software developed in these new paradigms.

                1. 9

                  A CVE gets assigned to some widely popular library. For fun, we will say the project has a monthly release cadence and the bug is published mid release cycle. Upstream is not done with their integration tests and doesn’t want to cut a release just for a 3-line patch, even if the issue is severe. Let’s say the library is used by around 30 binaries.

                  What do you do?

                  If the solution here is to do some wonky patching of Cargo.toml and Cargo.lock across 30 packages to ensure they are pulling the correct patch (is that even possible?), how does this scale?

                  This isn’t the question of distributions adapting to anything, this is “the next generation of software” digging a deep grave and ignoring almost 3 decades worth of experience. This isn’t me claiming we should pretend Rust is C and package all crates like individual packages. Nobody has the time for that. But pretending this isn’t a problem is deeply unsettling.

                  1. 10

                    I don’t know, it’s not my problem, but if it were, I guess I would try solving it rather than trying to wedge everything into this old C paradigm.

                    1. 4

                      Not having a solution doesn’t mean you can just paint an old solution as terrible.

                      Lots of smart people are working on separating language ecosystems from system administration, and we have these problems. So now, what do we do?

                    2. 4

                      Upstream is not done with their integration tests and doesn’t want to cut a release just for a 3-line patch, even if the issue is severe.

                      In the case of uncooperative and irresponsible upstreams, what Debian does is say “we will package this, but it is not covered by the security support that we provide for the rest of the OS”. They used to do this for webkit and node.

                      What else can you do? At some point packaging implies cooperation.

                      1. 3

                        Cargo has multiple features for replacing dependencies. For individual packages you drop in a [patch.crates-io] section (it works across the whole tree, so no need to patch deps-of-deps recursively). To do it at distro scale, you can globally configure Cargo to use a local copy of the index instead of crates-io, and replace deps in your copy (it’s just a git repo with JSON, easy to modify).

                        Binaries are built from lockfiles, so you can read them to know exactly what to rebuild. There are tools like cargo-audit that already deal with this.
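                        As a concrete sketch (the crate name and path here are invented for illustration), a distro-wide 3-line fix could be applied through a single [patch.crates-io] section in the workspace’s Cargo.toml:

                        ```toml
                        # Cargo.toml — hypothetical example: override a vulnerable crate
                        # everywhere in the dependency graph with a locally patched copy.
                        [patch.crates-io]
                        some-popular-lib = { path = "/srv/patches/some-popular-lib" }
                        ```

                        Running a build afterwards rewrites Cargo.lock to point at the patched source.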

                        1. 1

                          So there would then be 30 patches modifying Cargo.toml. Would you also need a secondary package repository to provide the patched crate? Does cargo build everything from source, or would it require the patched packages to be pre-built?

                          1. 3

                            You can tell Cargo to get code from a local directory. Distros already replace crates-io with their own packages, so they already have it all set up.

                            Distros have managed to tame dozens of weird C build systems. Cargo is quite simple in comparison.
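                            For reference, the mechanism here is Cargo’s source replacement; a minimal sketch (the directory path is invented) pointing Cargo at a local vendored tree instead of crates-io:

                            ```toml
                            # .cargo/config.toml — hypothetical sketch: fetch every crate
                            # from a local directory instead of the crates.io registry.
                            [source.crates-io]
                            replace-with = "distro-vendor"

                            [source.distro-vendor]
                            directory = "/srv/distro/cargo-vendor"
                            ```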

                            1. 2

                              Well, no. Most don’t. Just the two largest ones do, because of distro policies. But if you look at the raw numbers, most distributions bundle everything vendored and don’t provide this at all. I’m not even sure whether Ubuntu follows the Debian guidelines?

                              This is why I bring it up to begin with.

                        2. 2

                          I get it. Distros are downstream of everything. They get the sewage and the refuse and whatever else results from trying to get hundreds if not thousands of different applications and libraries to work together and keep them patched appropriately. But that’s the job. If you don’t want that job then don’t work on a distro.

                          In your particular example I would feel free to blacklist a project that doesn’t want to patch and test its code when it has a CVE. If the code is absolutely necessary and blacklisting it isn’t an option, then patch it locally and move on. This isn’t substantially different from dealing with a terrible maintainer of a C application. Distributions have been carrying patches forward for libraries and applications for as long as I’ve been using distributions, and longer.

                          1. 7

                            Back in the C days, it was considered basic etiquette to make sure your Makefile worked with DESTDIR properly.
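                            A minimal sketch of that etiquette (the tool name and paths are invented): every destination in the install rule is prefixed with $(DESTDIR), so a packager can stage the files into a scratch directory and build the package archive from there.

                            ```make
                            # Makefile — DESTDIR-aware install: the package build sets DESTDIR
                            # to a staging directory, so nothing touches the live filesystem.
                            PREFIX ?= /usr/local

                            install:
                            	install -D -m 0755 mytool $(DESTDIR)$(PREFIX)/bin/mytool
                            ```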

                            What happens now is simply Parkinson’s law at its finest. Flatpak, Snap and Docker included.

                            It puzzles me that nobody is worried about our inability to maintain a coherent culture given the influx of programmers, but then again… Eternal September, right?

                            Must be weird for the old geezers to live through that a second time. I am far too young for that, so I can’t tell.

                            We need to negotiate a packaging standard that would not suck for most, and then push it hard so that it gets adopted far and wide, from Rust to ES. Funny times.

                            I’m especially curious whether it can be done without effectively merging all distros into one. But hey, project maintainers seem to know best how their code is supposed to be built and configured. Maybe it’s time.

                            1. 3

                              I am one of the old geezers and I’m fully on board with the new way of doing things. Let the software maintainers bear the burden of patching and securing their software when CVEs happen. In a way this could reduce the burden on distro packagers. Download or build the latest binaries and distribute that. If upstream won’t patch their stuff then warn the user and refer them to upstream. Yes, this means packagers have to build more stuff. But we unlock a whole lot of benefits in the process. Fewer issues with shared libraries. More reliable software with these new tools.

                            2. 4

                              This isn’t substantially different from dealing with a terrible maintainer of a C application. Distributions have been carrying patches forward for libraries and applications for as long as I’ve been using distributions and longer.

                              Previously we just needed to patch the shared library and move on. Now we suddenly need to care about what a number of upstreams vendor with complete disregard for what that implies.

                              The comment reads as extremely unsympathetic to distributions. But why? This is a model Rust/Go/npm picked. It wasn’t decided by us, and you still need to deal with the issue regardless of whether there is a distribution involved or not. We are told “take this and deal with it”. Upstreams are not the ones who are going to deal with user inquiries about why XYZ isn’t fixed and what we are going to do about it. We are understaffed and given more problems to deal with.

                              If you don’t want us to package the “next generation of software” say so… but users are going to disagree.

                              1. 1

                                I acknowledge the fact that you have to run cargo build more times than before. But that is the price you pay for packaging it in a distro. If your users want the rust application then package it for them. Rust isn’t going to adapt, for a whole host of really good reasons. And I, as both a developer and someone who deploys some of these in production, get a lot of benefit out of those reasons, and as a primary user of the language I would resist any such change.

                                For the security issues: if upstream won’t patch, then remove them from the security policy and tell the user they need to take CVEs up with the maintainer of the software. This isn’t hard, and complaining about it gives no value to any end user.

                                1. 2

                                  Are we going to claim everything Rust touches is unsupported? Seriously?

                                  I don’t think Rust is the C replacement the community thinks it is.

                                  1. 1

                                    If the distro needs something that Rust touches then they need to build the tooling to be able to package it. It’s more expensive to build it all but if that’s what you need to do then do that.

                              2. 3

                                But that’s the job. If you don’t want that job then don’t work on a distro.

                                Note that the Debian security team found the job so onerous that they decided to remove security support for all Go packages in Debian. See https://www.debian.org/releases/stable/amd64/release-notes/ch-information.en.html#golang-static-linking.

                                1. 1

                                  This is a perfectly valid approach. If you can’t make your tooling do the job with that software then drop support for it.

                                  1. 2

                                    This works for Go, but does not work for Rust, because GNOME C libraries are starting to depend on Rust and distributions don’t want to drop support for GNOME.

                                    1. 1

                                      Then I guess in the case of GNOME it is by definition not too onerous. It is not the case that you can’t package and ship updates for the software. It’s just harder and more complicated. But that’s why we write tooling. I don’t get to complain when my business wants a feature that requires me to write complicated, hard code. I just roll up my sleeves and start designing/typing. This is no different.

                        1. 9

                          I’m always happy to see Racket going forward. It’s the best language (I know of) for learning about languages, especially with How to Design Programs. I write Clojure at $dayjob but find Racket more enjoyable and cleaner as a Lisp.

                          AFAICT Racket is a reasonable choice for real-world applications, but it’s not very commonly used. Common Lisp is more common (no pun intended) though still rare. (Anyone else hate how no pun intended is always a lie?)

                          1. 14

                            Oh, I’m cons-tantly making lisp puns.

                            1. 5

                              ☝️ Yes, officer, this is the criminal right here.

                              1. 3

                                criminal? cdo?

                                explain!

                                1. 3

                                  Perhaps you meant to say: “Yes, cons-table…”

                            1. 2

                              Long article. I started by glancing at the conclusions, which turned out to be a good idea in hindsight.

                              Monolithic vs micro-kernel vs your-favourite-denomination-kernel. This is one of the ultimate nerd debates, and I don’t think that the answer to this question is actually very important.

                                The highlighted part saved me a lot of reading. I dismissed the article as easily as the writer dismissed the state of the art in OS architecture; they didn’t even bother doing basic research on the subject, yet felt entitled enough to rudely dismiss it.

                              1. 10

                                 I think it’s a mistake to dismiss this article based on it not dealing with the single question of monolithic vs. microkernel architecture. Most of what it dealt with was the design of operating system userspace paradigms, which I think is as worthy of attention as kernel architecture. Certainly it’s far more visible to the end user than whether the kernel is a monolith or not.

                                1. 2

                                  I think it’s a mistake to dismiss this article based on it not dealing with the single question of monolithic vs. microkernel architecture.

                                  I was otherwise fine with the article not dealing with this, but they didn’t stop there, and had to add some disrespectful blabber.

                                  Most of what it dealt with was the design of operating system userspace paradigms

                                  Sure, I did end up reading more, but I was understandably pissed off at how the part I highlighted was worded. Just because the author isn’t personally interested doesn’t give them license to do this.

                                  1. 4

                                    Hahaha, especially since they have an issue open on Github titled “How to handle when a device driver panics?”.

                                2. 2

                                  Given some of the grief I’ve had due to faulty drivers or kernel services in production systems, I can only imagine the article’s author has little experience with production servers. I’d give a lot for the ability to gcore and restart currently in-kernel services without rebooting the entire system: less impact for fixes and upgrades, far easier debugging, likely higher overall stability, and a smaller attack surface while we’re at it.

                                  The answer to this question is very important.

                                1. 1

                                  This is nice. It actually adds some safety if you structure your code properly.

                                  1. 3

                                    I really feel for the Pattern Matching section. Typescript semantics (which, correctly IMO, are much smarter about the blurred value/type line) make it easy to write safe code without having to make a bunch of wrapper types and other type-level drudgery to get what you want.

                                    If the computer sees a literal, it knows the value! And it should be able to carry that around! I understand that this is, of course, not easy. And I guess this makes type-level stuff undecidable (though I believe there’s sets of extensions that get you there anyways). But there’s some great payoffs to be had when you start going down things like type literals, actual first class type unions (not Either but actual “A | B” and not requiring an unwrap).

                                    The simple example is “I had a function that could take a number. It can now also take a string”. Refactoring that in Typescript is really easy, cuz you don’t have to modify any callsites (And there’s very little engineering value in going around writing Left everywhere, in this specific case). This isn’t a one-size-fits-all thing, but it’s like a pair of M-size sweatpants. I can squeeze into them just fine for the most part.

                                    This sort of stuff is hard of course, and trying to execute on these kinds of concepts is probably a really thankless job.

                                    1. 1

                                      Wouldn’t that require explicit boxing or inlining (possibly with link-time optimization) whereas using a type class allows for full type erasure?

                                    1. 1

                                      Perhaps a reaction to AMD ROCm stack?

                                      1. 2

                                        Similar to Unpoly, apparently.

                                        1. 1

                                          From a very brief look, Unpoly seems to at least be based on the principles of progressive enhancement.

                                          e.g. decorating a regular href so that it is handled more “smoothly” via JS XHR/fetch, but still works fine without JS enabled/working.

                                          The OP article describes the lib with two examples that would just break completely without JS.

                                          It does however bother me how much these things all add randomly named attributes. I thought we’d all agreed upon data-* attribute names.

                                        1. 2

                                          I think the site is kinda shady, won’t let me cancel my subscription. I hope I’ll remember to retry later on. Kinda sus, ngl.

                                          1. 3

                                            I can cancel it if you want. I really should write that code, it is pretty sus.

                                            1. 1

                                              Hahaha. I’ve never experienced lobste.rs equivalent of a standup comedy. I simply had to try and say thanks.

                                              So sad I couldn’t afford to pay the full price and rake in billions in dividends. :-)

                                              Do you write often?

                                              1. 2

                                                Not very often, usually on my site, and I wrote another IMGZ post recently (that lobste.rs didn’t like as it’s a parody of business), but my problem is that I don’t find inspiring topics to write about frequently. I most like writing when I can make fun of something I find ridiculous, and I guess there isn’t enough of that :P

                                                EDIT: By the way, the “cancel subscription” section should now be active for you, I just deployed that code a few minutes ago.

                                          1. 16

                                            If only we had a way to systematically invest into and maintain the most important infrastructure projects our lives depend on…

                                            I have no idea why Mozilla is dragging their feet on lobbying EU representatives to support its development. It’s normal for the EU to spend billions of EUR on communications and broadband subsidies…

                                            It could be a rather mutually beneficial relationship, since the EU would get a way to introduce eIDAS and eGovernment services years ahead of its current schedule simply by implementing all relevant functionality (eID, signing, validation, e-delivery) in Firefox directly, which would definitely boost Firefox’s corporate and public sector usage.

                                            One can dream…

                                            1. 1

                                              Ah, I was 17 and learning to program in PHP on SuSE 9.1. Kate was my main editor back then. Thanks!

                                              I’ve switched to LFS and vim, never built KDE from scratch and thus never went back to it again.

                                              1. 1

                                                Are you still using LFS as your OS?

                                                1. 1

                                                  Hahaha, no, not anymore. Switched to Gentoo (obviously), then Debian and finally to Fedora. Nowadays I am considering NixOS. Perhaps I will check it out during my January sabbatical/vacation.

                                                  I still recommend LFS fondly as a great learning experience to anyone serious about system administration. I am not sure, but I would wager a guess that LFS brought us several fairly popular distributions and helped tens of thousands of people understand our infrastructure a lot better.

                                                  I am stuck with vim, though. Hard to unlearn.

                                                  1. 1

                                                    I am stuck with vim, though. Hard to unlearn.

                                                    same

                                                1. 1

                                                  Tinkering in Blender mostly. 2.80 made me get back to it, but 2.91 made me really excited.

                                                  1. 4

                                                    Until there are native equivalents, not just “clones,” of the Adobe toolchain on Linux, this is to be expected. You can hire professional designers and let them use the tools they need, or you can say “GIMP and Scribus only!” and turn out terrible looking publications because most designers won’t work with those tools.

                                                    Alternately, they could use LaTeX or troff and produce the sorts of documents that those tools are great at producing.

                                                    1. 7

                                                      There are people who produce excellent quality output using those tools, like David Revoy. At least some of those people are even available for hire.

                                                      1. 1

                                                        You can hire professional designers and let them use the tools they need…

                                                        Never met these professional designers you talk of. Far more often it’s just status signalling jerks who use the most expensive hardware and software because they do the “social media magic” or “glossy brochure secret sauce” everyone is so crazy about lately.

                                                        I’ve met several video specialists who did not understand their formats nor did have any meaningful video editing skills.

                                                      I’ve met UI designers who were unable to explain the white space rules for a UI kit they “designed”.

                                                        I’ve met PR staff who did not even know how they’ve licensed the illustrations for a book they’ve published only to find they were - of course - not compliant.

                                                      I’d take someone who can use Scribus and GIMP and Inkscape over anyone who “needs” Adobe products, because the latter only guarantees high costs, not better results.

                                                        Also, the brochure could’ve been set as effectively in LibreOffice on Linux. There is nothing special in it.

                                                        1. 3

                                                          Never met these professional designers you talk of. Far more often it’s just status signalling jerks who use the most expensive hardware and software because they do the “social media magic” or “glossy brochure secret sauce” everyone is so crazy about lately.

                                                          You need to expand your social circle. I e-know plenty of people who work in graphic design who are both professional and passionate about their work.

                                                          1. 1

                                                            Yeah, might be a central Europe thing, because that’s what sits on the other side of the table when we open a public tender or try hiring.

                                                            And don’t get me started on their salesmen…

                                                      1. 18

                                                        Should employees of coca-cola only be allowed to drink coca-cola? Should people who work at Samsung only be allowed to use Samsung devices? If something else does the job better then you should probably use that.

                                                        And before people bring “ethics” in to this, the Linux Foundation is not the FSF. Retraining people who produce these sort of graphics (possibly from external companies) is not really a good investment of funds, even if the tools on Linux would be equivalent. Better to spend that money to improve those tools, for example.

                                                        And the whole complaining about the stock image is just silly.

                                                        1. 8

                                                          It’s good to keep in mind that LF is a trade org, not a public interest non-profit, both to manage expectations and to calibrate the level of support the wider community should provide them.

                                                          That said: Blender is gearing up to become a popular professional 3D modeller in large part because they have dedicated dogfooding campaigns - namely every open movie by the Blender Foundation which is always accompanied by supporting development work.

                                                          If the LF has any interest (as a trade org) to promote the Linux desktop (I think they do, even if it’s not their top-most priority) it would be good to have similar campaigns: Let domain experts use that environment while having developers work on the tooling immediately and with short feedback loops with those users.

                                                          1. 5

                                                            Yeah, that would be useful, but it’s quite different from “random person at LF struggling to get the required output with tools they’re not familiar with and may be less good than alternatives”.

                                                            One of the things Blender probably did right was focus on funding right from the start, so people could work on it full-time to do this kind of stuff and more. Right now it employs 24 people, but even with just 2 or 3 devs you’ve got buckets more time to actually make a good end-product (and, crucially, also help out devs wanting to contribute). If you compare this to GIMP – which is just a few people working on it in their spare time – or many of the Open Source PDF tools, then it’s quite a different story. I have the impression that the GIMP people don’t want to go down the same road Blender did, and are happy with how things are – which is fair enough – but then don’t be surprised if professionals prefer Photoshop.

                                                          2. 1

                                                            https://fsfe.org/activities/publiccode/brochure

                                                            Made in Scribus. Successfully translated into multiple languages. Some outside of FSFE.

                                                            1. 3

                                                              I already used Scribus back in 2007 when I published/edited the Scouting magazine for my region, migrating from Microsoft Publisher that the previous guy used.

This was a long time ago, but migrating was rather time-consuming, and I had to relearn everything I had learned in Publisher. It certainly wasn’t “free”, in the sense that I needed to invest quite some time in it (DTP isn’t always the easiest software to learn/use). I also had some problems getting it delivered to our printer; from what I remember I eventually managed to get the PDF output just right so it printed well, but it took some effort. With Publisher this was easier, as they were already familiar with it and could just open the Publisher files.

                                                              There were also some other issues with it; I think the way justified text worked was pretty borked at the time, but again, this was a long time ago so I’d hope that’s improved now. I also had to fix the FreeBSD port first before I could actually use it 🙃

As with GIMP, Scribus seems like a small hobby project. Actually, it seems there are just three people working on it, with the majority of the work done by two. That’s all very fine, and the results are pretty impressive for such a small team working in their spare time, but you can’t expect that to compete against professional products with a bunch of people working on them full-time. Is it possible to publish stuff with it? Sure, I did it myself. Is it the best tool for professional use? Probably not.

                                                              As a sidenote, Microsoft Publisher was actually pretty decent as well; the only Office product I somewhat enjoyed using. In the end, the main advantage was that I could publish stuff from my FreeBSD desktop, instead of having to reboot to Windows to run Publisher.

                                                              So yeah, sure, it’s possible. But why? What’s the point?

                                                              1. 2

No point if your only goal is to get it done now, for yourself. But imagine if all the OSS-promoting orgs paid someone to actually improve Scribus and GIMP to match their needs, instead of subscribing to Adobe, which only deepens the lock-in hole.

                                                          1. 17

                                                            Very interesting, in a way I’m curious about how this will evolve, but the Ethereum integration is a huge turn-off.

                                                            1. 5

At the moment there is no Ethereum integration at all. Down the road we are going to add entirely optional ways to benefit from the extra security, authenticity and coordination tooling backed by something like Ethereum. All the code collaboration functionality will be unaffected by it, and at no point are users expected or required to use the optional features.

                                                              1. 8

Thank you for spelling that out; that is exactly how I understood it. While I appreciate that the integration is optional, one is planned, and that for me makes this project very unattractive. Which I think is a shame, because the P2P aspect of it (alongside it being open-source, written in Rust, and having a very polished presentation) looks very good.

                                                                1. 1

                                                                  Great to hear that the messaging on the website makes that clear. Out of curiosity what are the reasons for the strict rejection based on that optional feature set?

                                                                  1. 13

                                                                    I reject any blockchain-related project, because blockchain is wasting a huge amount of resources for absolutely no gain. Associating a project with it - even by making it optional - means you support the ideas behind blockchain, so I automatically can’t support the project.

                                                                    1. 6

Ethereum is starting its PoS transition: stage zero, on mainnet, literally today, the first of December. They are working hard to drop PoW.

                                                                      EDIT: Here is a launch event live stream: https://www.youtube.com/watch?v=MD3mADL33wk

                                                                      EDIT2: Eth 2.0 beacon chain explorer: https://beaconcha.in/

                                                                      1. 3

                                                                        I have no idea what “Pos”, “stage zero”, “mainnet” means. I assume PoW doesn’t mean prisoner of war, but proof of work?

                                                                        1. 3

Proof of Stake and proof of work; mainnet = production. Stage zero: this new production network does create new blocks and mint validator rewards, but it has yet to be upgraded to a network that can do proper transactions. Also, the question of turning off Ethereum 1.0’s PoW is still not fully settled: first the two networks need to be merged into one.

                                                                          1. 4

                                                                            So regarding my argument that

                                                                            blockchain is wasting a huge amount of resources for absolutely no gain

                                                                            you are saying that they are taking care of the amount of resources consumed by lowering the energy consumption?

                                                                            1. 3

They are making steps (they just launched the first of a series of network upgrades) on a road that will lead to PoW being fully replaced by PoS in Ethereum. So: yes. But it will take some time.

Please note that the costs of PoW are distributed onto every holder of the cryptocurrency, and some of that cost is distributed onto everyone else in the form of CO2 emissions. These are offset by a carbon tax, but only in some of the countries where miners operate.

                                                                              Everyone is interested in PoW becoming a thing of the past.

                                                                              1. 1

                                                                                At least that’s the long-term plan for Ethereum.

                                                                                Bitcoin, for example, has no plans to move towards Proof of Stake. Neither do most other blockchain based cryptocurrencies.

                                                                                1. 2

                                                                                  To me, “proof of work” feels a little like the internal combustion engine. It’s kind of dirty, and grew so big in our time that it needs huge amounts of power to continue to work (as in drilling for oil and fracking all the things to make cars go brrrr): it’s “what we have” (or “they” have, whatever) because for a while, every blockchain based cryptocurrency went for it.

                                                                                  Now, some people are not completely oblivious to how stupid this looks, and alternatives are slowly coming around (proof-of-stake for eth, consensus protocols for xlm, etc.). The future is looking brighter, but as for BTC it’s just too late: it’s expensive because it requires huge amounts of investments to exist and allow transactions.

                                                                        2. 3

Blockchain hype is a pretty large source of free software funding that doesn’t corrupt the movement into serving a couple of oligarchs. Both offer freedom.

                                                                          I am not fan of blockchain snake oil peddlers myself, but if they manage to convince a bunch of greedy, rich capitalists to lose some money on free software alternatives to the status quo, I am content.

                                                                          Public smart contracts also incentivize research of formal methods, dependent typing and other methods to improve software correctness. Another great win.

                                                                          1. 2

                                                                            blockchain is wasting a huge amount of resources for absolutely no gain

                                                                            As mentioned down-thread I think you mean “proof of work” and not “blockchain” here. Git repos use a merkle-tree block-chain just like Bitcoin, for example, and there’s no proof of work there.

I also think it’s likely you misunderstand the nature of “wasted resources” in a proof-of-work algorithm (blockchain-related or otherwise), but that’s a side issue here.
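For what it’s worth, the hash-linking-without-PoW distinction is easy to sketch (illustrative Python only, not how Git actually stores objects):

```python
import hashlib

def make_block(prev_hash: str, payload: str) -> dict:
    """Link a block to its predecessor by hashing both together.
    No proof of work: any process can extend the chain instantly."""
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"prev": prev_hash, "payload": payload, "hash": digest}

def verify(chain: list) -> bool:
    """Tamper-evidence comes from the hash links alone."""
    for prev, block in zip(chain, chain[1:]):
        expected = hashlib.sha256(
            (prev["hash"] + block["payload"]).encode()).hexdigest()
        if block["prev"] != prev["hash"] or block["hash"] != expected:
            return False
    return True

genesis = make_block("0" * 64, "genesis")
chain = [genesis, make_block(genesis["hash"], "commit: fix typo")]
assert verify(chain)
chain[1]["payload"] = "commit: something else"  # tamper with history
assert not verify(chain)
```

The expensive part of Bitcoin is not this structure; it’s the extra requirement that each new block’s hash clear a difficulty target.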

                                                                    2. 4

                                                                      This stance comes from a place of ignorance on how energy is generated and converted into proof-of-work. It’s a shame because blockchains are here to stay and will only grow in consumption. The majority of electricity in use by proof-of-work chains today comes from excess hydro energy. This is energy that was already harvested and would go to waste if it wasn’t used to secure bitcoin. There are today several large projects looking to do the same: make use of excess energy harvested by power plants, and turn them into money. There is really no need to create new energy to power cryptocurrencies. That, I agree is a waste, and completely unnecessary.

                                                                      1. 13

                                                                        Whoa, wait a minute. That’s like how I can buy the meat in the supermarket without being responsible for animals being slaughtered because the animals have already been slaughtered anyway, right? There’s no market involved here or anything.

                                                                        1. 3

                                                                          There is really no need to create new energy to power cryptocurrencies. That, I agree is a waste, and completely unnecessary.

I know this rationale very well and I think the truth is somewhere in the middle. Yes, mining is clearly a way to turn that excess (off-peak) energy into useful work. BUT! ATM any other, potentially more useful way of turning this energy into useful work needs to compete with miners. Potentially, that energy could create aluminum, fill water tanks, charge huge batteries, etc.

Also, because of the way supply/demand for hashing power works, in times of high BTC prices (like now) people are likely to run miners everywhere, not just next to large power stations in off-peak hours.

                                                                      1. 40

                                                                        Something other than “everything is bytes”, for starters. The operating system should provide applications with a standard way of inputting and outputting structured data, be it via pipes, to files, …

                                                                        Also, a standard mechanism for applications to send messages to each other, preferably using the above structured format when passing data around. Seriously, IPC is one of the worst parts of modern OSes today.
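As a rough sketch (everything here is hypothetical, nothing standard), “structured pipes” could be as simple as length-prefixed, type-tagged messages over an ordinary byte pipe:

```python
import json
import os
import struct

def send_msg(fd: int, msg_type: str, payload: dict) -> None:
    """Frame a typed, structured message: 4-byte length prefix + JSON body."""
    body = json.dumps({"type": msg_type, "payload": payload}).encode()
    os.write(fd, struct.pack(">I", len(body)) + body)

def recv_msg(fd: int) -> dict:
    """Read one framed message; the receiver gets structure back,
    not an undifferentiated stream of bytes it must re-parse."""
    (length,) = struct.unpack(">I", os.read(fd, 4))
    return json.loads(os.read(fd, length))

r, w = os.pipe()
send_msg(w, "image/thumbnail", {"width": 64, "height": 64})
msg = recv_msg(r)
assert msg["type"] == "image/thumbnail"
```

The point is only that the framing and typing live below the application, so two programs that have never heard of each other can still exchange self-describing data.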

If we’re going utopic, then the operating system should only run managed code in an abstract VM via the scheduler, which can provide safety beyond what the hardware can. So basically it would be as if your entire operating system were Java and the kernel ran everything inside the JVM. (Just an example, I do not condone writing an operating system in Java.)

I’m also liking what SerenityOS is doing with the LibCore/LibGfx/LibGui stuff. A “standard” set of stuff seems really cool because you know it will work as long as you’re on SerenityOS. While I’m all for freedom of choice, having a default set of stuff is nice.

                                                                        1. 21

                                                                          The operating system should provide applications with a standard way of inputting and outputting structured data, be it via pipes, to files

                                                                          I’d go so far as to say that processes should be able to share not only data structures, but closures.

                                                                          1. 4

This has been tried a few times, and it was super interesting. What comes to mind is Obliq, (to some extent) Modula-3, and things like Kali Scheme. Super fascinating work.

                                                                            1. 3

                                                                              Neat! Do you have a use-case in mind for interprocess closures?

                                                                              1. 4

                                                                                To me that sounds like the ultimate way to implement capabilities: a capability is just a procedure which can do certain things, which you can send to another process.

                                                                                1. 5

                                                                                  This is one of the main things I had in mind too. In a language like Lua where closure environments are first-class, it’s a lot easier to build that kind of thing from scratch. I did this in a recent game I made where the in-game UI has access to a repl that lets you reconfigure the controls/HUD and stuff but doesn’t let you rewrite core game data: https://git.sr.ht/~technomancy/tremendous-quest-iv
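Within one process, the capability-as-closure idea looks something like this (a toy Python sketch; the hard part, serializing the closure across process boundaries, is elided):

```python
import tempfile

def make_logger_capability(path: str, allowed_prefix: str):
    """A capability as a closure: it can append lines with one prefix
    to one file, and nothing else. The holder never sees the path or
    gains any wider filesystem access."""
    def append_line(line: str) -> None:
        if not line.startswith(allowed_prefix):
            raise PermissionError("capability does not cover this message")
        with open(path, "a") as f:
            f.write(line + "\n")
    return append_line

path = tempfile.mkstemp()[1]
hud_log = make_logger_capability(path, "hud:")
hud_log("hud: show minimap")            # permitted by the capability
try:
    hud_log("core: rewrite save file")  # outside the capability
except PermissionError:
    pass
assert open(path).read() == "hud: show minimap\n"
```

Handing `hud_log` to untrusted code grants exactly that one power, which is the “procedure which can do certain things” described above.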

                                                                              2. 1

I would be interested in seeing how the CPU-time-stealing and DoS problems that would arise from that could be solved.

                                                                              3. 17

                                                                                Digging into IPC a bit, I feel like Windows actually had some good stuff to say on the matter.

                                                                                I think the design space looks something like:

                                                                                • Messages vs streams (here is a cat picture vs here is a continuing generated sequence of cat pictures)
• Broadcast messages vs narrowcast messages (notify all apps vs notify one specific app)
                                                                                • Known format vs unknown pile of bytes (the blob i’m giving you is an image/png versus lol i dunno here’s the size of the bytes and the blob, good luck!)
                                                                                • Cancellable/TTL vs not (if this message is not handled by this time, don’t deliver it)
                                                                                • Small messages versus big messages (here is a thumbnail of a cat versus the digitized CAT scan of a cat)

                                                                                I’m sure there are other axes, but that’s maybe a starting point. Also, fuck POSIX signals. Not in my OS.
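A hypothetical message header covering those axes might look like this (all field names made up; streams would just be sequences of these on one channel):

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Message:
    """One possible header for the design axes above (all hypothetical)."""
    content_type: str          # known format vs. unknown pile of bytes
    broadcast: bool            # notify all apps vs. one recipient
    deadline: Optional[float]  # cancellable/TTL vs. deliver whenever
    body: bytes                # small payloads inline; big blobs by handle

    def expired(self, now: Optional[float] = None) -> bool:
        """TTL check the OS could run before delivery."""
        now = time.time() if now is None else now
        return self.deadline is not None and now > self.deadline

m = Message("image/png", broadcast=False,
            deadline=time.time() + 5, body=b"...")
assert not m.expired()
assert m.expired(now=m.deadline + 1)
```

Putting the axes in a header means the OS, not each app, can route, expire, and type-check traffic.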

                                                                                1. 5

                                                                                  Is a video of cats playing a message or a stream? Does it matter whether it’s 2mb or 2gb (or whether the goal is to display one frame at a time vs to copy the file somewhere)?

                                                                                  1. 2

It would likely depend on the reason the data is being transferred. Video pretty much always fits into the ‘streaming’ category if it’s going to be decoded and played, as the encoding allows parts of a file to be decoded independently of the other parts. Messages are for atomic chunks of data that only make sense when they’re complete. Transferring whole files over a message bus is probably a bad idea though; you’d likely want to instead pass a message that says “here’s a path to a file and some metadata, do what you want with it” and have the permissions model plug into the message bus so that applications can get temporary r/rw access to the file in question. Optionally, if you have a filesystem that supports COW and deduplication, you can efficiently and transparently copy the file for the other application’s use, and it can do whatever it wants with it without affecting the “original”.
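A toy model of that “pass a path plus a temporary grant” pattern (the bus and its permission model here are entirely imaginary):

```python
import time

# Toy model: the bus grants a time-limited access lease alongside the message.
leases = {}  # (receiver, path) -> expiry timestamp

def send_file_message(receiver: str, path: str, media_type: str,
                      access: str = "r", lease_seconds: int = 60) -> dict:
    """Instead of copying bytes over the bus, pass a path plus a lease."""
    leases[(receiver, path)] = time.time() + lease_seconds
    return {"path": path, "media_type": media_type, "access": access}

def may_open(receiver: str, path: str) -> bool:
    """The bus-integrated permission check described above."""
    expiry = leases.get((receiver, path))
    return expiry is not None and time.time() < expiry

msg = send_file_message("viewer", "/tmp/cat.mp4", "video/mp4")
assert may_open("viewer", msg["path"])       # the addressee may open it
assert not may_open("editor", msg["path"])   # nobody else may
```

A real system would enforce the lease in the kernel or file server, not in a dictionary, but the shape of the idea is the same.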

                                                                                    1. 5

                                                                                      Which is why copy&paste is implemented the way it is!

                                                                                      Many people don’t realize but it’s not actually just some storage buffer. As long as the program is running when you try to paste something the two programs can talk to each other and negotiate the format they want.

That is why people sometimes hit odd bugs on Linux where the clipboard contents disappear when a program exits, or why PowerPoint sometimes asks whether you want to keep your large clipboard content when you try to exit.
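A toy model of that negotiation (not any real clipboard API): the source advertises formats and renders on demand, which is also why the content can vanish when the source process exits:

```python
# Toy clipboard: nothing is copied until paste time; the source process
# renders the requested format on demand.
class ClipboardSource:
    def formats(self):
        """Formats this source can produce, richest first."""
        return ["text/html", "text/plain"]

    def render(self, fmt):
        """Produce the data only when a paster actually asks for it."""
        if fmt == "text/html":
            return "<b>cats</b>"
        if fmt == "text/plain":
            return "cats"
        raise KeyError(fmt)

def paste(source, preferred):
    """The pasting app picks the richest format both sides support."""
    for fmt in preferred:
        if fmt in source.formats():
            return fmt, source.render(fmt)
    raise RuntimeError("no common format")

fmt, data = paste(ClipboardSource(),
                  ["image/png", "text/html", "text/plain"])
assert (fmt, data) == ("text/html", "<b>cats</b>")
```

If the `ClipboardSource` object is gone (the program exited), there is nothing left to call `render` on, which is exactly the disappearing-clipboard behavior described above.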

                                                                                2. 13

                                                                                  Something other than “everything is bytes”, for starters. The operating system should provide applications with a standard way of inputting and outputting structured data, be it via pipes, to files, …

                                                                                  It’s a shame I can agree only once.

                                                                                  Things like Records Management Services, ARexx, Messages and Ports on Amiga or OpenVMS’ Mailboxes (to say nothing of QIO), and the data structures of shared libraries on Amiga…

Also, the fact that things like Poplog (which is an operating environment for a few different languages and allows cross-language calls), OpenVMS’s common language environment, or even the UCSD p-System aren’t more popular is sad to me.

                                                                                  Honestly, I’ve thought about this a few times, and I’d love something that is:

                                                                                  • an information utility like Multics
                                                                                  • secure like seL4 and Multics
                                                                                  • specified like seL4
                                                                                  • distributed like Plan9/CLive
                                                                                  • with rich libraries, ports, and plumbing rules
                                                                                  • and separated like Qubes
• with a virtual machine that is easy to inspect like the LispM OSes, but easy to lock down like Bitfrost on One Laptop per Child…

                                                                                  a man can dream.

                                                                                  1. 7

                                                                                    Something other than “everything is bytes”, for starters. The operating system should provide applications with a standard way of inputting and outputting structured data

                                                                                    have you tried powershell

                                                                                    1. 4

                                                                                      or https://www.nushell.sh/ for that matter

                                                                                    2. 4

In many ways you can’t even remove the *shells from current OSes, IPC is so b0rked.

                                                                                      How can a shell communicate with a program it’s trying to invoke? Array of strings for options and a global key value dictionary of strings for environment variables.

                                                                                      Awful.

                                                                                      It should be able to introspect to find out the schema for the options (what options are available, what types they are…)

                                                                                      Environment variables are a reliability nightmare. Essentially hidden globals everywhere.

Pipes? The data is structured, but what is the schema? I can pipe this to that, but does it fit? Does it make sense…? Can I b0rk your ad-hoc input parser? Sure I can; you scratched it together in half a day assuming only friendly inputs.

In many ways IPC is step zero to figure out, with all the ad-hoc options parsers and ad-hoc stdin/stdout parsers/formatters made secure, robust, and part of the OS.
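A sketch of what introspectable options could look like, assuming a made-up convention where every program can emit a machine-readable schema for the shell to validate against:

```python
import json

# Hypothetical convention: a program answers "--describe-options" with a
# machine-readable schema the shell can introspect, validate, and complete.
OPTIONS_SCHEMA = {
    "verbose": {"type": "bool", "default": False},
    "jobs":    {"type": "int",  "min": 1, "default": 1},
    "output":  {"type": "path"},
}

def validate(opts: dict) -> dict:
    """The shell, not each program's ad-hoc parser, checks types and
    ranges before the program is ever invoked."""
    result = {}
    for name, spec in OPTIONS_SCHEMA.items():
        value = opts.get(name, spec.get("default"))
        if spec["type"] == "int":
            value = int(value)
            if "min" in spec and value < spec["min"]:
                raise ValueError(f"{name} must be >= {spec['min']}")
        result[name] = value
    return result

print(json.dumps(OPTIONS_SCHEMA))   # what "--describe-options" would emit
assert validate({"jobs": "4"})["jobs"] == 4
```

With something like this, tab completion, type errors, and help text all fall out of one declaration instead of being re-implemented per program.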

                                                                                      1. 3

                                                                                        I agree wholeheartedly with the first part of your comment. But then there is this:

                                                                                        If we’re going utopic, then the operating system should only run managed code in a abstract VM via the scheduler, which can provide safety beyond what the hardware can.

What sort of safety can a managed language provide from the point of view of an operating system, compared to the usual abstraction of processes (virtual memory and preemptive scheduling) combined with thoughtful design of how you give programs access to resources? When something goes wrong in Java, the program may either get into a state that violates preconditions assumed by the authors, or an exception will terminate some superset of the erroneous computation. When something goes wrong in a process in a system with virtual memory, again the program may reach a state violating preconditions assumed by the authors, or it may trigger a hardware exception, handled by the OS, which may terminate the program or inform it about the fault. Generally, it all gets contained within the process. The key difference is, with a managed language you seem to be sacrificing performance for an illusory feeling of safety.

                                                                                        There are of course other ways programs may violate safety, but that has more to do with how you give them access to resources such as special hardware components, filesystem, operating system services, etc. Nothing that can be fixed by going away from native code.

No-brakes programming languages like C may be a pain for the author of the program, and there is good reason to switch away from them to something safer in order to write more reliable software. But a language runtime can’t protect an operating system any more than the abstractions that make up a process, which are a lot more efficient. There are of course things like Spectre and Meltdown, but those are hardware bugs. Those bugs should be fixed, not papered over by another layer lurking at the bottom.

                                                                                        Software and hardware need to be considered together, as they together form a system. Ironically, I may conclude this comment with an Alan Kay quote:

                                                                                        People who are really serious about software should make their own hardware.

                                                                                      1. 9

I’d argue the author’s solution is technically also machine learning, just a way simpler method.

Part of me wants to write an article with machine learning methods in the spirit of Evolution of a Haskell Programmer.

                                                                                        1. 4

                                                                                          Please do, I’d love to read it and learn. :-)

                                                                                        1. 30

Over the summer I started dabbling in GTK3. I made the mistake of looking at pretty much every ‘GTK3 tutorial’ on the internet; almost all of them involve Glade. After ending up with something not working 100% of the time, I jumped on IRC and was told:

                                                                                          • basically exactly what was written about here (don’t use glade)

                                                                                          • don’t use the ‘official’ GTK ‘tutorial’ docs/blog posts, they’re outdated/not functional

                                                                                          • only use gnome-builder

                                                                                          • reference the source code for 3-4 gnome app projects for information on how to use the API (including how to use/structure xml UI portion)

                                                                                          I hope they do some deep soul searching soon.

                                                                                          1. 10

Agreed. The article is the usual “GTK devs broke things and blame other people for it” we’ve seen for years.

                                                                                            1. 3

I wonder if the situation is significantly better on the KDE side. The Qt stuff looks a bit more… product-oriented.

                                                                                              1. 9

                                                                                                I maintain a 10 year old Qt project and it compiles pretty much like it did when the project started. I think that in those ten years I have replaced a handful of lines because a method was deprecated. In all those cases, the documentation pointed me to the right replacements (the Qt documentation is really good compared to Gtk+). Also, even without replacing those calls to deprecated methods, the program would still compile fine (I guess they are removed in Qt6?).

A year ago I had to make a nice, presentable UI for a work project that we wrote in Rust. Since the only functional GUI toolkit for Rust is Gtk+ (through the excellent gtk-rs binding), I wrote the first version with Gtk+. Gtk+ felt very archaic. Glade (yes, we shouldn’t use it, but who wants to write XML by hand) was very primitive compared to Qt Designer. Many of the APIs felt primitive and hard to extend (in Qt it’s easy to subclass widgets and make your own flavors). And many things were not described in the documentation, so I found the answers by digging through old mailing list discussions.

                                                                                                Another issue is that you should probably only use Gtk+ if you target X11 or Wayland. Gtk+ is unbearably slow on e.g. macOS and does not integrate at all.

At any rate, the whole experience was so miserable that I ended up making a Python binding and used PyQt to write the GUI, which was very pleasant. As a bonus, the application worked well on macOS and Windows too, and looked and felt nearly native.

                                                                                                1. 9

                                                                                                  Another issue is that you should probably only use Gtk+ if you target X11 or Wayland. Gtk+ is unbearably slow on e.g. macOS and does not integrate at all.

                                                                                                  GTK+ also has zero accessibility support on Windows and Mac. So some users would be completely blocked from using your application.

                                                                                                  1. 2

                                                                                                    I’m having a similar experience with Qt, and I wonder why developers still work with GTK, a much worse competitor to Qt.

                                                                                                    1. 5

                                                                                                      For me it’s GStreamer and C bindings with GObject introspection.

                                                                                                      1. 3

                                                                                                        Most likely licensing issues.

                                                                                                        1. 6

                                                                                                          I thought the GTK/Qt license wars were over a long time ago and Qt is now GPL + LGPL (with the option to purchase a commercial license if you don’t want to abide by the GPL/LGPL).

                                                                                                          That seems to offer strictly more options than GTK?

                                                                                                          1. 3

                                                                                                            See, I was still in the licensing war mindset. I’d missed that Qt has resolved the outstanding questions with regards to their licensing.

                                                                                                        2. 1

                                                                                                          In a word: bindings. GTK works with a ton of languages.

                                                                                                      2. 2

                                                                                                        I can’t say about KDE specifically (KDE Frameworks are a thing and I haven’t used them in… uh, when was KDE 3.3 again?) but yeah, things are way, way better on the Qt side of things. Qt is very thoroughly documented, there are plenty of examples. When something gets deprecated, you usually know well in advance, and you generally have at least one migration path when the deprecation happens (I can’t describe how GTK does that because I inevitably descend into rage and snark, sorry). As with any large (partly) commercial codebase bugs do sometimes get ignored for a long time, especially in the Widgets, but in my experience, the “here’s a bug – this is not a bug” dance is very infrequent.

                                                                                                        This is mostly a general experience, of course. There are exceptions (things that get deprecated too soon/without adequate replacements/without a proper announcements etc.) but they’re rare.

                                                                                                        I’ve used both and between 2007 and 2020 (and especially in the GTK 3 age) my attitude towards GTK has slowly gone from “either GTK or Qt is fine, just pick the one you like and maybe the one that matches the DE most of your users prefer” to “I wouldn’t touch this even if you paid me to”.

                                                                                                  1. 21

                                                                                                    The responses to this post are pretty spot on. Asking people to hand-edit XML files to do UI design is stupidly backwards and I can’t even…

                                                                                                    1. 18

                                                                                                      … poster proceeds to edit HTML and CSS in an IDE, watching results in a browser.

                                                                                                      1. 9

                                                                                                        Yes, this blog post is missing the part where another better tool is recommended.

                                                                                                        1. 5

                                                                                                          Yes, we should be drawing mini ASCII art instead:

                                                                                                          "H:|-[button1(200)]-(>=10)-[button2(200)]-|"
                                                                                                          
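                                                                                                          For context, that string is Apple’s Visual Format Language (VFL). A minimal sketch of how it would have been fed to Auto Layout, assuming two buttons already added as subviews (the names here are illustrative):

                                                                                                          ```swift
                                                                                                          import UIKit

                                                                                                          // Hypothetical setup: button1 and button2 are already subviews with
                                                                                                          // translatesAutoresizingMaskIntoConstraints = false.
                                                                                                          let views: [String: Any] = ["button1": button1, "button2": button2]

                                                                                                          // H: = horizontal axis; each button 200pt wide; at least a 10pt gap
                                                                                                          // between them; standard margins to the superview's edges.
                                                                                                          let constraints = NSLayoutConstraint.constraints(
                                                                                                              withVisualFormat: "H:|-[button1(200)]-(>=10)-[button2(200)]-|",
                                                                                                              options: [],
                                                                                                              metrics: nil,
                                                                                                              views: views
                                                                                                          )
                                                                                                          NSLayoutConstraint.activate(constraints)
                                                                                                          ```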
                                                                                                          1. 3

                                                                                                            Do Apple developers still use this?

                                                                                                            1. 3

                                                                                                              No, that was the first version of auto layout. I remember using this syntax for one year, iOS SDK 6 I think? Tools / SDK releases are annual just like phone hardware, and we’re on SDK 14 now, so it has been a good while. Anyway there was one initial year where Interface Builder, the visual layout tool, made constraints hard to get right, and that year my teams did constraints in code. We came around to IB again when they improved it a year later.

                                                                                                              The primary ways to define constraints now are either to draw them in Interface Builder or use a new more expressive and type-safe code syntax. If I recall, that was new with SDK 8. I believe all of this was pre-Swift.

                                                                                              However the true new hotness is SwiftUI, which is a React-like declarative UI system that tends to operate in flexbox-like stacks and permits specific, dynamic layout in ways other than a system of linear-equation constraints, so you don’t then have to debug them for over- or under-definition. SwiftUI cannot yet do everything, so adoption is slow, but that’s the state of the art over in Apple dev world.
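                                                                                              To make the contrast concrete, here’s a minimal SwiftUI sketch of the same kind of two-button row (names are illustrative; there is no constraint system to debug):

                                                                                              ```swift
                                                                                              import SwiftUI

                                                                                              // Declarative layout: an HStack with spacing replaces the VFL
                                                                                              // string / system of constraint equations entirely.
                                                                                              struct ButtonRow: View {
                                                                                                  var body: some View {
                                                                                                      HStack(spacing: 10) {
                                                                                                          Button("First") { }
                                                                                                              .frame(width: 200)
                                                                                                          Button("Second") { }
                                                                                                              .frame(width: 200)
                                                                                                      }
                                                                                                  }
                                                                                              }
                                                                                              ```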

                                                                                                              Suffice to say, VFL is ancient history.

                                                                                                              1. 2

                                                                                                                I suspect it’s not very popular these days. VFL dates back to the time before Swift when ObjC would have forced extremely bracket-y APIs. With Swift we get more fluent APIs.

                                                                                                                Many new projects probably start with SwiftUI, a completely new UI toolkit. It’s still limited, but nice where it works. Most UIKit/AppKit apps use Interface Builder, Xcode’s graphical GUI builder. Of those that prefer UIs in code (I certainly do), many probably use a third party wrapper around auto layout, such as SnapKit:

                                                                                                                box.snp.makeConstraints { (make) -> Void in
                                                                                                                    make.size.equalTo(50)
                                                                                                                    make.center.equalTo(container)
                                                                                                                }
                                                                                                                

                                                                                                                Or they use Apple’s newer anchor API which is almost as nice:

                                                                                                                NSLayoutConstraint.activate([
                                                                                                                    box.widthAnchor.constraint(equalToConstant: 50),
                                                                                                                    box.heightAnchor.constraint(equalToConstant: 50),
                                                                                                                    box.centerXAnchor.constraint(equalTo: container.centerXAnchor),
                                                                                                                    box.centerYAnchor.constraint(equalTo: container.centerYAnchor),
                                                                                                                ])