I have no idea why the FSF has the hots for JPEG XL and at this point, I’m afraid to ask.
I’m a semi-avid photographer and I don’t know how to produce a JPEG XL image. I think there’s both a demand and a supply problem for the format.
Don’t know about the FSF, but I have the hots for JPEG-XL because $WORK has over 1 TB/day of egress of user-supplied images in JPEG format. Being able to shrink that by 30% with no loss of quality would be a huge win!
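For reference, the transcode itself is basically a one-liner with libjxl’s command-line tools (a rough sketch; the file names are made up and it assumes the `cjxl`/`djxl` binaries are installed):

# Lossless JPEG transcoding: by default cjxl repacks the JPEG's DCT coefficients
# rather than re-encoding the pixels, so there is no generation loss.
cjxl photo.jpg photo.jxl

# If a client can't decode JXL, the original JPEG bytes can be reconstructed:
djxl photo.jxl photo.reconstructed.jpg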
It suddenly occurs to me that a cloud provider who charges people for egress would have no incentive to fix problems that are revenue sources like that.
That might be the case, but while the company @danielrheath works for deals with image data, I have to imagine that for a generic cloud provider video bandwidth dwarfs image bandwidth (if we’re talking media). The big wins are in making video encoding more effective.
if you control the receiver in any form, Lepton might be of interest
I’m a semi-avid photographer and I don’t know how to produce a JPEG XL image.
This is because the format is still brand new, and also because of the lack of adoption by the most popular application for consuming the largest distribution network for images. This is why JPEG XL’s removal from Chrome seems suspiciously premature: the bitstream and file format for JPEG XL were finalized last year, but the reference implementation isn’t stable yet.
As for why people are excited about it, there are a lot of reasons which point to companies with big investments in image delivery being able to save significantly on bandwidth costs without losing quality, as @danielrheath pointed out. I’m one of the few small-time users with a legitimate interest in JPEG XL that goes beyond the comparatively marginal benefit of saving some of my personal disk space, because it offers a uniquely good set of characteristics for my use-cases as a photographer. In increasing order of esotericism:
I usually edit and save my photos these days assuming they’ll be viewed on a wide gamut display with at least Display P3 capability. Regular old JPEG can do this, but the colour space information is stuffed into EXIF data rather than being intrinsic to the file format, and zealous EXIF stripping programs on various websites sometimes delete it, so the colours in the picture look wrong. Also, regular old JPEG is limited to 8 bits per channel, which starts to get quite thin as the gamut increases. Admittedly, AVIF and WebP are ‘good enough’ at this.
I work with scans of medium format and large format film, in the range of tens of thousands of pixels per image dimension. Regular old JPEG is limited to 65k pixels per side, and newer formats – especially AVIF, based on a format designed for video with much smaller dimensions – are actually worse than old-school JPEG at this: they can only ‘cheat’ their way to very large images by gluing panels together side by side, so you lose the advantage of sharing compression information between ‘panels’ of an image, an effect that gets worse as pictures get larger. There may also be visible artefacts between panels because the transition can’t be smoothed over effectively by the compression algorithm. JPEG XL natively supports up to a thousand million pixels per image side.
I also sometimes work with photos with transparency. Among older formats, JPEG can’t do this at all, and PNG only offers lossless compression which isn’t intended for photographic data and thus makes absolutely huge files when used for it. Again, AVIF and WebP are probably ‘good enough’ at this, but for some applications they suck because afaict you can’t have transparency in a CMYK image for them, so if I’m preparing an image with transparency for press, there’s no format for that.
Thanks a lot for taking the time to list these pros for JPEG XL!
It’s interesting that according to the wiki page for the format, both Adobe and Flickr (after the SmugMug purchase?) were on the JPEG XL train.
Thank you for sharing. If I’m understanding correctly, the removal of JPEG XL from Chrome wouldn’t really affect most of your personal use-cases then, right? For instance, I can’t imagine you’d be inlining any billion-by-billion pixel images on a web page.
Your points are all valid yet entirely pointless on the web (CMYK? billion pixels in each direction?). JPEG-XL could make inroads in print, design, arts etc without thinking about Chrome even once. Yet it seems that there isn’t much enthusiasm in supporting it and its unique features in that space. How is that Chrome’s fault?
That seems more of a self-imposed hegemony: “We only look at the format for any use case once Chrome supports it. Bad Google!” In my opinion that’s a weird notion of “free as in freedom”.
The question was why people are interested in JPEG XL, especially from the perspective of a ‘semi-avid photographer’. I acknowledged explicitly that I benefit unusually extensively from JPEG XL compared to other individual photographers, and pointed to another answer in this thread that makes a more compelling case that’s very relevant to the web. I would also say that each of my points contains something less unusual that actually is relevant for the web (e.g. it’s not that uncommon for images larger than 4K, the maximum size for one panel of AVIF data, to be posted on the web).
Also, JPEG XL is actually beginning to see adoption outside of web browsers for these use cases. But that wasn’t the question I was answering here.
Yet it seems that there isn’t much enthusiasm in supporting it and its unique features in that space. How is that Chrome’s fault?
Because that’s Google’s line, not reality. Read the issue thread and note how many voices there are (from major companies) speaking against the decision to remove it.
“to remove it” - an experimental feature behind a flag. Nobody seriously used that capability, ever. How many of those opposing the removal in the issue thread are even aware of that instead of just joining the choir?
So where’s JPEG-XL support in Edge, Firefox, Safari? Where’s the “10% of the web already use a JPEG-XL decoder polyfill until Chrome finally gets its act together and offers native support” article?
This entire debate is in a weird spot between “Google (Chrome) force developments down our throats” (when something new does appear, such as WebUSB, WebMIDI, WebGPU, …) and “We have to wait until Chrome does it before everybody else can follow”.
That’s kind of the point. It wasn’t even given a chance to prove itself. This was very premature and came out of nowhere just as JPEG-XL was starting to gain attention. Why is it so hard to understand why people are frustrated by this? I guess I just don’t understand why you’re against it, or feel the need to suggest that people against the removal are just ‘joining the choir’. Maybe people do really care?
I don’t know what this has to do with any other web technology. I would take JPEG-XL over any of those (not that that’s really relevant).
Right, JPEG-XL hasn’t got a chance to “prove itself” by becoming part of the standard feature set of Chrome because it was never put in front of ordinary users (it’s always been behind a flag).
Every other time that Chrome unilaterally decides to put something in front of ordinary users, people claim that this is just an example of Chrome’s web hegemony. That would have happened with JPEG-XL, too.
What should happen for healthy web standard evolution:
For some odd reason, JPEG-XL advocacy starts at step 4, simultaneously arguing that Chrome shouldn’t be the arbiter of what becomes a web standard and what doesn’t, and not doing any of the other work. (edit to add: meanwhile it ignores all the other actors on the web who don’t support JPEG-XL, either.)
To me that looks like JPEG-XL advocates were expecting Chrome to implement the format, take the heat for “forcing more stuff on people” (as always happens when Chrome unilaterally adds stuff), then have everybody else follow. That’s a comfortable strategy if it works, but I don’t see why JPEG-XL should have a right to this kind of short cut. And I don’t see how “Chrome didn’t employ its hegemony to serve our purpose” is a sign of “how the web works under browser hegemony.”
So: where are the Polyfills? Where are the Adobes, Flickrs and everybody using them, with blog posts documenting their traffic savings and newly enabled huge image features, and that “x% of the web runs on JPEG-XL”?
And, to keep that line of thought a bit separate, as its own post:
I don’t so much mind JPEG XL folks doing that. That means that they’re working in a world view that presumes such a web hegemony by Chrome. I disagree but given that it’s xPEG, they probably can’t think any other way.
The FSF, however, should be well aware how stuff like this ought to work…
Okay, so it’s more about web standards being fast tracked without proper procedure? I can definitely appreciate that.
The way Google has gone about this, though, is not just to remove an existing experimental flag for a feature that actually had (or has) a good chance of getting in had it just been given time, but to do so in a way that made it sound like the decision was final and there would be no way back from it, while providing such a weak/misleading explanation that it seemed pretty obvious there must be an ulterior motive. Especially when they didn’t even acknowledge the flood of companies showing an interest and clearly proving them wrong. If even those companies can’t convince Google to think twice, then clearly Google isn’t interested in having an honest discussion.
Personally I don’t mind how long it takes for the web to adopt JPEG-XL. I would have been all too happy for it to have gone through the process you describe (although I’m not sure how realistic it is that the major players would use a polyfill for something like this). What’s frustrating is that the way they did handle it may have effectively killed any chance it had prematurely rather than allowing it to gain traction slowly and naturally.
Edit: And I really want to be wrong about that. I hope that there is a way back from this.
so it’s more about web standards being fast tracked without proper procedure?
To some degree, but not only. The other aspect is that all those organizations chiming in on the issue tracker talk the talk but don’t walk the walk. There are numerous options for demonstrating that they care about a JPEG-XL deployment and for kickstarting JPEG-XL on the web. I haven’t seen much in that space (see above for ideas of things they could do that nobody working on Chrome could block).
What I’ve seen are complaints that “Mighty Chrome is taking our chance at becoming part of the web platform!” and that seems more like a pressure campaign that Somebody Else (namely: the Chrome folks) should invest in JPEG-XL and bear any risks (whatever they might be, e.g. patent encumbrance issues) without doing anything themselves.
And that’s a pretty clear sign to me that nobody actually cares - it’s not just “Google’s line”.
Again: Talk is cheap. Chrome’s JXL support was barely beyond “let’s see if it compiles”, given that it was behind a flag. If all these parties who “care so much” truly care and make JXL-on-the-web a thing, I’m quite sure that Chrome will change course and reintroduce that code.
And unlike with IE6, it won’t take 15 years between Chrome finally introducing native support and everybody being able to take out their polyfills because Chrome has an actual planned update process. My guess is that Safari would be the last hold-out, as is usual.
If you work with RAWs and darktable (free software and multi platform), you can export your pics into JXL files (and choose parameters) ;)
It’s a bit slow though (compared to JPEG), but still faster than exporting AVIF, which is also supported.
As for me, the standard image viewer included with Fedora supports JXL too.
I work with RAWs but I don’t use darktable. I used to use Lightroom but am trying out alternatives.
Anyway I publish to Flickr and AFAIK they don’t support the format. This goes back to the original point I guess, browsers not supporting it.
A lot of the internet commentary suggests Google is/was opposed to JPEG XL, but if anything they seem to have been the only browser vendor interested in it. It’s derived from Google tech (PIK and Brunsli), was implemented in Chrome early (behind a flag, per current standard practice), and doesn’t seem to have been implemented by any of the other major browsers (Safari, Firefox, Edge).
The Firefox tracking bug (https://bugzilla.mozilla.org/show_bug.cgi?id=1539075) has quotes from someone who appears to have been a Google employee at the time they posted:
We are hoping to use JPEG XL in both image encoding and content encoding. For content encoding we can deliver traditional JPEGs with the -22 % savings in transferred bytes. We can help in the exploration work, documentation and other aspects of the possible integration.
JPEG XL is building on our previous work in FLIF, FUIF, pik, guetzli, zopfli, butteraugli, knusperli, brunsli, WebP lossless, and brotli. I believe it is the technically strongest, easiest to use, and most backward compatible and full-featured contender of the next-gen image formats.
What was the Chrome team supposed to do here? If they’d pushed JPEG XL despite lack of interest from anyone else, then that would have been “browser hegemony”.
Is Google now evil for not trying to force-feed adoption of a Chrome-specific technology?
As so often, I think this was a management intervention at Google. The engineers are probably still for it, but Google wants to push their own formats, too.
This has never been a technical debate, even though some are trying to turn it into one. AVIF is a video codec shoehorned into an image format, which brings a lot of disadvantages (no progressive decoding, lack of error tolerance, performance). Just the simple fact that JPEG XL allows JPEG transcoding with 20-30% savings is reason enough to support it exclusively.
Ok, then why single out Google? Why isn’t the FSF also going after Mozilla and Apple? It’s not like JPEG XL as a technology is inherently more aligned with copyleft than AVIF or WebP, and the primary stakeholders are all sites that nobody belonging to the FSF would use regularly (due to e.g. requiring JavaScript).
If Chrome shipped JPEG XL and it started getting adopted on sites like Facebook, I guarantee you there would be angry articles about depending on Chrome-specific capabilities. There was a lot of that for WebP[0], and it took years before Mozilla gave in and implemented support for it.
This reads like the FSF jumping into a debate they don’t understand, for obscure reasons.
[0] Websearch surfaces https://news.ycombinator.com/item?id=18974637 and https://news.ycombinator.com/item?id=27508300 as examples, which match my general memories of online consensus at that time.
This is a good point. As a counterargument, unlike AVIF or WebP, JPEG XL is actually an ISO standard. I think we should all shame Mozilla for wimping out on this great format only because big brother Google decided so. Google might have even used their influence over Mozilla here, given they fund it heavily.
lack of interest from anyone else
Did you see like every major graphics-related or social-media-related vendor have a spokesperson comment on the Chromium issue tracker about this? Like you have team Krita and GIMP agreeing with Facebook and Adobe over interest.
This is what really frustrates me. Seeing people buy into Google’s suggestion that there’s a ‘lack of interest’ and then claim that supporters are just jumping on the anti-Google bandwagon as if I haven’t read the whole issue thread (and followed the format’s history for years) because JPEG-XL really is something I care about. Forget about the FSF, this is much bigger than that. We have a chance here to make the right decision, which we will benefit from for many years to come. It’s being killed off by some Google internal politics. I can’t believe anyone would jump to their defence here.
I don’t think that’s fair, they said that Mozilla just didn’t approve the extension in time for their beta release.
I experimented with Vite for a while recently and I definitely came away with a good impression. I love the index.html default entry point for quick demos/examples and prototypes. I haven’t found motivation to use it in a real project yet, but I agree that Vite gets a lot of things right. I’m all for rejecting unnecessary complexity.
I gave up on Webpack a looong time ago. Actually to be honest, I don’t think I ever picked it up. I had to use it at work on a couple of projects we inherited. One of those was a fairly simple headless ecommerce website and the Webpack build took 20 seconds (incremental builds around 500ms). I felt dirty every time I worked on that project. I just can’t believe people willingly signed up for that experience, especially when the likes of esbuild came along, but these kinds of low standards have been pretty common in the wider web development community unfortunately (see bundle sizes, 10,000 dependencies, etc.). I’m hopeful that trend is reversing and we’ll see a push towards greater efficiency in our tools and workflows.
I seem to be in the minority here, but as someone who has also been forever confused with the project and its past naming, this renaming is the first time I have actually felt like I understand the concept. I honestly like the change as it makes the busybox-like binary clearly distinct from the two symlinks, and does not favour one over the other (as seems to be the intent). The only alternative I’ve seen here that makes sense to me (given what I now understand to be the project goals) is the osh/osh-compat or osh/osh-posix split (it is undeniably more clear).
I actually agree with calvin that the issue is less the naming and more the communication. Part of that is the long history of the project. There have been so many words and I’m still not sure if it’s supposed to be ready for adoption or is still a work in progress. I think people kind of switch off after a while (I did). But this open discussion is good and it shows that there is plenty of interest in the project, if only that interest can be better taken advantage of.
Glad it clarified things. Honest question: now that OSH vs. YSH is clear (formerly OSH vs Oil), do the “messages to take away” in the latest post give you a picture of the status of the project?
https://www.oilshell.org/blog/2023/03/release-0.14.2.html#messages-to-take-away
I put that section up front because I knew that people were wondering about the project.
I agree it would be great to take advantage of the interest.
As mentioned in the post, we have multiple contributors, including one who has made changes all over the codebase (Melvin).
But a pretty common occurrence is that people get very excited about contributing (and maybe have never contributed to an open source project before), and then they don’t make progress. I think that’s just normal for open source, but I’d be interested whether people with experience think we can do something differently.
In one sense, it’s a Python program on Github, so it should be pretty easy to contribute to.
Another problem I’ve seen though is that a lot of people don’t know shell and just want a better shell. It will be hard to contribute to Oil if you don’t want to know any shell, for multiple reasons.
It does help, I guess from my perspective I’m just waiting until such a time as Oil is ‘ready’ (particularly YSH) to really give it a spin. I did install it for a quick look (OSH). A couple of things were slower than bash but mostly it looks quite polished and I do like the concept. I am optimistic that you will get the project to where you want it to be. Thank you for putting so much work into this.
Perhaps you can solicit feedback specifically from people who know shell. You will obviously need their support for this project to go anywhere.
What’s the editor support like? I assume the best thing to do would be to create a tree-sitter grammar for this, unless it already exists?
Not an answer to your question, but related: they also develop an online collaborative editor that looks very interesting: https://typst.app/
I suspect that the moment Helix supports this format, I will start converting all my notes from Markdown.
I absolutely love the look of this, it seems to be everything I’ve dreamt of in a LaTeX replacement. The web UI is really nice too, although it seems to be struggling a bit under the load post launch announcement.
Are you writing lots of markdown in Helix? Whenever I try, I get annoyed by the lack of soft wrap. I suspect I’d have the same problem writing LaTeX or Typst with Helix.
Helix has soft wrap in HEAD since #5893 was merged two weeks ago. If you run nightly, just add `editor.soft-wrap.enable = true` to your config. Else of course wait for the next release, though I don’t know if there are plans yet for when it will drop.
Oh, and according to (nightly) `hx --health`, Helix doesn’t support Typst yet. I guess it will be added quickly once there is a tree-sitter grammar for it.
Personally I always use `:reflow` or pipe to `fmt` to hard-wrap lines when I write Markdown. But it’s also good to hear that soft wrap is coming soon!
Nonexistent :’]
Honestly the syntax is not the worst thing to write without highlighting, but I’d love an $EDITOR plugin.
The web is not the same medium as graphic design.
I disagree. The web is a place for art and graphic design as much as anything else. Websites are allowed to be pretty.
That extra flair on your lowercase “t” doesn’t help the user better interact with your features or UI. It just slows them down.
Anecdotal at best. Misleading at worst.
The user wants to complete a task - not look at a pretty poster
You are not all users. I, for one, do not enjoy using websites that don’t look nice.
many designers design more for their own ego and portfolio rather than the end-user
Again, anecdotal (though it does seem plausible).
I find myself agreeing with all the other points brought up in the article (system fonts are usually good enough, consistency across platforms isn’t essential, performance). I don’t have any extra fonts used on my website (except for where KaTeX needs to be used) and I think it’s fine in most cases (I’ve seen the default font render badly on some devices and it was a little sad).
I still disagree about “websites are tools and nothing else”. I don’t want my website to be a tool. I want it to be art. I’ve poured in time and effort and money and my soul into what I’ve made. I do consider it art. I consider it a statement. And if I have to make a 35kb request to add in a specific typeface, then I’ll do it to make it reach my vision.
That extra flair on your lowercase “t” doesn’t help the user better interact with your features or UI. It just slows them down.
Anecdotal at best. Misleading at worst.
That was obviously not the real question though: the point is, do web fonts help users in any way, compared to widely available system fonts? My guess is that the difference is small enough to be hard to even detect.
As a user, they make me happy and I tend to be more engaged with the content (when used effectively), so yes I find them helpful. I don’t want to live in a world without variety or freedom of expression. As long as there are ways to turn them off, surely everyone can be happy.
We live in a world full of colour. I don’t like this idea of the hypothetical ‘user’ who ‘just wants to get things done’ and has no appreciation for the small pleasures in life. I don’t have anything against anyone who feels that way of course (it’s completely valid). Just this generalisation of ‘the user’.
It really depends on the metrics measured.
Does the font help the user fill out the form and submit it? No, not really.
Does the font help engender a brand feeling of trust across platforms and mediums? Probably yes.
It’s impossible not to detect my own instinctive, positive reaction to a nice web design, and typography is a big part of that. I am quite certain I’m not alone in that feeling. That enjoyment is “helpful” enough for me to feel strongly that web fonts are here to stay, and that’s a good thing. There’s also plenty of UX data about what typography communicates to users, even if those findings aren’t normally presented in terms of “helping.”
A poorly chosen font can be hard to read in a certain context, but that’s a far cry from “all custom web fonts are bad for usability” and I haven’t seen any evidence to back up that claim. So given there are obvious positives I think the question is really what harm they actually do.
I’d wager typography is not limited to fonts.
Now obviously there’s a difference between a good web font and a crappy system font. But are all system fonts crappy? I haven’t checked, but don’t we already have a wide enough selection of good fonts widely installed on users’ systems? Isn’t the difference between those good fonts and (presumably) even better web fonts less noticeable? Surely we’re past the age of Arial and Times New Roman by now?
See this related submission: https://lobste.rs/s/tdiloe/modern_font_stacks
It’s basically grouping “typeface styles” across different systems’ installed fonts.
This is big, thank you.
I mean, I guess it won’t be as good as the best web font someone would choose for a given application, but if anything is “close enough”, that could be it.
Obviously fonts are a subset of typography (didn’t mean to imply they are the same), but they are absolutely central to it. And I didn’t say that system fonts are all crappy. My argument doesn’t rely on that premise, and anyway, I like system fonts! I think that designing within constraints can foster creativity! I just don’t think we should impose those constraints on everyone. At least not without a lot more evidence of actual harm than I’ve seen presented.
And although we are definitely past the New Roman Times ;) I don’t think that means that a striking use of a good font is any less effective on the web.
As someone who doesn’t use KDE, I do think the outlines look great in the screenshots. Thanks for your work, and for putting in the time and effort to solve something you care about!
I’m amazed by the strength some people have to deal with all this toxicity in their lives, yet be able to push through it all and keep doing what they believe in. This would be many times past my breaking point.
I’ve been having trouble with this:
https://zero-to-nix.com/start/nix-build-flake
I don’t understand how the JavaScript flake is supposed to work. Isn’t `nix build` sandboxed, preventing network access? The documentation I’ve found isn’t very clear on this, but I think network access is blocked inside the sandbox as I’m getting DNS errors:
FetchError: request to https://registry.npmjs.org/vite/-/vite-3.2.4.tgz failed, reason: getaddrinfo EAI_AGAIN registry.npmjs.org
If there’s no network access, then how would the `pnpm install` in `buildPhase` work?
I tried disabling the sandbox in nix.conf too, but that just gave me some strange permission errors like:
Error: EACCES: permission denied, mkdir '/tmp/.pnpm-store/v3/files/4f'
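For reference, this is roughly how I’ve been checking the sandbox setting (a sketch, assuming a reasonably recent Nix CLI; `.#default` is just whatever the flake’s default package happens to be):

# Show the effective sandbox setting; "true" means builds get no network access,
# except for fixed-output derivations (fetchers pinned to a known hash).
nix show-config | grep sandbox

# What I tried instead of editing nix.conf (not a real fix, just for testing):
nix build --option sandbox false .#default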
Oops, a generous contributor sent a PR for that. Thanks for pointing it out, it should be fixed now: https://github.com/DeterminateSystems/zero-to-nix/pull/174
That’s a relief! Unfortunately I’m now seeing this error from `nix build`:
npm ERR! [esbuild] Failed to find package "esbuild-linux-64" on the file system
I think that this is because package-lock.json was generated on a macOS system, so it only includes `esbuild-darwin-arm64` and not `esbuild-linux-64` (or other architectures).
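For anyone else hitting this, a quick way to see which platform binaries the lockfile actually pins (assuming a v2/v3 lockfile and `jq` installed):

# List the esbuild platform packages recorded in package-lock.json;
# a lockfile generated on an Apple Silicon Mac typically only shows the darwin-arm64 one.
jq -r '.packages | keys[]' package-lock.json | grep esbuild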
Edit: Sorry, I submitted an issue for this.
I’ll be honest, this doesn’t look very useful. This is a tutorial of how to invoke Nix, not how to use it. The second class of target people, “who have tried to cross the chasm to using it in their daily workflows but haven’t gotten there yet”, aren’t stuck because `nix-build` and `nix-shell` “present significant cognitive overhead”* compared to `nix build` and `nix shell`. They’re stuck because they don’t know how to package things. My daily workflow does not involve rebuilding `bat`. My daily workflow involves building my project, and to do that with Nix, I need documentation that explains how to make Nix do that. This doesn’t provide that outside of links to existing documentation, and it’s that documentation that is, imho, most in need of attention. This just doesn’t feel like a good use of documentation-directed effort to me.
*As an aside, this claim makes no sense to me whatsoever. Learning a dozen commands isn’t any harder than learning a dozen verbs on one command.
As a person interested in nix, I find the structure and writing style of Zero to Nix far more useful than any other nix documentation I’ve read so far. Especially the opinionated style is useful as an orientation guide in the vast sea of things to learn and try out.
I’m glad it’s useful to you, then. Hopefully you’re able to get a foothold of understanding that carries you through the older strata of documentation.
Completely agree. I applaud the effort at making Nix more approachable, but Nix is still way too obscure and things that are simple to do with other package managers are still too difficult to do with Nix.
I decided to peek into Nix again after seeing this guide (I’ve tried and given up on multiple occasions in the past). I wanted to try making a C++ development environment to build a project. It’s a shame that creating a “flake.nix” file to do something even that simple is so complicated that you have to resort to using a template from a third-party. Writing the file by hand, from scratch, is basically out of the question for a beginner.
But ok, I’ll use the template. Now it’s time to add some packages. I use the “nix search” tool, which shows me a large list of package names, but doesn’t tell me anything about what’s in these packages. For example, what can I find in “clang-tools_11”? Is there a way to list the files that package includes? What about “clang11Stdenv”? That looks like it could be useful, but again, there’s no way (that I can see) to view what makes up that package or what it actually provides.
In contrast, a package manager like pacman can list all of the files that a package will install. Even Homebrew will tell me what a package’s dependencies are, and will give me a hyperlink to the “formula” for a recipe, so I can see exactly how the package is defined. Is any of this possible with Nix? If it is, that’s the kind of stuff that is useful to know for using Nix as a package manager. Not rebuilding bat.
The top search result for “how to view contents of a package in Nix” shows this answer:
ls -l $(nix eval -f /etc/nixos/apps --raw xerox6000-6010.outPath)
What does this mean? How is anyone who hasn’t already invested dozens of hours into Nix supposed to understand this, let alone figure this out on their own?
In the end, I think this guide is like so much other Nix documentation. It provides surface level, trivial examples to give the illusion that “Nix is easy”, but leaves you completely ill-equipped for doing anything useful.
Sorry for the rant, but the hype around Nix is never ending, and I often feel like I’m being gaslit because every time I check it out I end up feeling frustrated and extremely unproductive. This doesn’t seem to match the experience of all of the ardent Nix fans out there, so I’m left feeling confused about what I’m doing wrong.
I often feel like I’m being gaslit because every time I check it out I end up feeling frustrated and extremely unproductive. This doesn’t seem to match the experience of all of the ardent Nix fans out there
I hear you. As an ardent Nix fan, I have similar experiences too. Sometimes I’ll be trying to get something packaged, and the program or its language ecosystem isn’t easily “tamed” by Nix, and I get so frustrated. And that is from someone with quite a lot of experience and background in build systems, isolation, Nix itself, and lots of experience figuring it out. Days like that I feel like I lost, and sometimes get pretty down about it. A couple years ago I “gave up” and even installed another Linux distro. (This didn’t last for more than a day…)
I hope one day Nix is as easy to use as anything else. Or even easier. It empowers me to do so much without fear, and I’m addicted to that power.
My perspective is that to do this we need to:
What I don’t want to do is beat my head against the wall every time I want to try some new software. I admit that if it takes me more than an hour, I’ll sometimes boot a VM and try it out in another distro. That’s okay by me. By my count, more Nix in the world is good for everyone, and when it doesn’t serve me that is okay too.
As an aside, this line: `ls -l $(nix eval -f /etc/nixos/apps --raw xerox6000-6010.outPath)` seems a bit weird. However, the idea of listing what files a package will install is a bit … different in Nix, because “installing” is … a bit different. We’re going to be publishing either a z2n page, or a blog post about that soon.
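In the meantime, a rough flakes-era equivalent of “show me what’s in this package” is something like this (a sketch, assuming a recent Nix with flakes enabled; `clang-tools_11` is just the attribute from the comment above):

# Build the package without creating a ./result symlink and list its store path contents.
ls -R "$(nix build nixpkgs#clang-tools_11 --no-link --print-out-paths)"

# Closure size and dependencies, human-readable.
nix path-info -rsSh nixpkgs#clang-tools_11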
For searching, at least, I’ve always used search.nixos.org rather than the CLI tool. The search results usually have a link to the package definition, though often the package source isn’t legible to someone who isn’t an advanced user. `clang-tools_11` is defined here, if anything in there is helpful.
Personally, I run NixOS on some home servers and I find the declarative system configuration aspect to be incredibly helpful. But when it comes to working on individual projects, I mostly develop them in their own respective tooling, sometimes with a `nix-shell` or `nix develop` config to get the dependencies I want installed, and I only figure out how to make them buildable with Nix later on.
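To make that concrete, my day-to-day is mostly just the following (a sketch, assuming flakes are enabled and the project has a `flake.nix` with a devShell; the `clang-tools` attribute is only an example):

# Enter the project's dev shell with whatever toolchain it declares.
nix develop

# Or grab a tool ad hoc without touching the project at all.
nix shell nixpkgs#clang-tools -c clangd --version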
I’m definitely the target audience for this, and having just gone through the Quick Start I find myself agreeing with you. I was optimistic at first, as what’s there is presented clearly, but as I reached the last page I realised I don’t feel any more informed than I was in the first place and all I’ve done is run someone else’s ‘flakes’ without really understanding what’s happening (I understand broadly what is happening with each command of course, but not in any kind of sense that I could actually reproduce it myself). None of this makes Nix ‘click’ for me unfortunately. It’s a decent start, but as you said it’s just not all that helpful in its current state. It needs to provide structure and order to the learning process. You can’t just gloss over the Nix language… when you introduce me to something like this, I want to read and understand it, so that I might be able to write it myself:
https://github.com/DeterminateSystems/zero-to-nix/blob/main/nix/templates/dev/javascript/flake.nix
But nothing I’ve read here has given me any kind of guidance on that. It’s like I’m supposed to just trust that it works. What do I take away from that? A good guide will engage my curiosity and encourage me to look deeper at what’s being presented. Shouldn’t the examples be as trimmed down as possible? Is this part really necessary?
# Systems supported
allSystems = [
  "x86_64-linux" # 64-bit Intel/AMD Linux
  "aarch64-linux" # 64-bit ARM Linux
  "x86_64-darwin" # 64-bit Intel macOS
  "aarch64-darwin" # 64-bit ARM macOS (Apple Silicon)
];

# Helper to provide system-specific attributes
nameValuePair = name: value: { inherit name value; };
genAttrs = names: f: builtins.listToAttrs (map (n: nameValuePair n (f n)) names);
forAllSystems = f: genAttrs allSystems (system: f {
  pkgs = import nixpkgs { inherit system; };
});
I kind of understand what’s happening here, but that’s quite off-putting if you’re trying to convince me that Nix is easy and approachable. Am I supposed to ignore this? It only makes me wonder why I would bother. I’m sure you can do things in a far simpler manner, so why this complexity? Perhaps there’s a little too much emphasis on correctness before you’ve reached the natural point of explaining why it is necessary. A nice start, and I enjoyed going through it, but it needs much more work to live up to the promise of its title, and ultimately I’m disappointed.
Thank you for working on this, and I hope you continue. Maybe I’ll be able to come back later and have a better experience.
Thanks for this feedback. One of the things in discussion around Flakes is system specificity and the overhead / boilerplate it tends to create. We’ll take a look at how we can simplify the documentation on this and make it more straightforward. I appreciate it!
That makes sense, thanks for the explanation. I reread my comment and I think it comes off as a bit too negative. I really did enjoy the content that is there so far, I just wanted it to keep going!
Whew, that is really great to hear =).
For what it is worth…
The nix.dev article looks helpful, so I’ll definitely go through that.
What I was really hoping for was more of a structured path to guide me through these concepts (a continuation from the guide). I realise how challenging that is likely to be given how broad the ecosystem is. Everyone probably has their own motivation for learning it. For me, the reproducible development environments are a big draw. For another person it might be something completely different. So I don’t know what that looks like exactly. Possibly branching paths after the quick start guide in the form of guides for the most common use cases? Whatever the case I think the hand holding needs to continue a little longer before you let people loose to discover the rest of the ecosystem for themselves. My best learning experiences have been just that. I follow the guide until I can’t help imagining all the possibilities and I am naturally compelled to go off and explore those ideas for myself. That’s where my journey really starts. If I’m going through a book (for example), that point is usually somewhere well before the middle and I may not even finish because I have enough confidence that I know what to look for. With this guide I still feel lost in a vast sea. It still feels like there’s a very large investment up front (in Nix), and I’m just trying to find the motivation to push through (hoping that it is what I imagine it to be).
Anyway, I hope my feedback is helpful in some way. I guess what I’m trying to say is that I definitely don’t expect Zero-to-Nix to be an exhaustive guide to the entire Nix ecosystem, but I do think it could carry the reader a little further, perhaps until they’re at least comfortable in writing their own simple flakes (for common use cases). A little extra confidence. That might be enough to reach that wonderful state of inspiration that drives further learning.
They provide templates for using flakes with some languages. Depending on the language you want to use to build stuff, that’s what you should look at. I think they don’t spell it out because they tried to keep everything language-independent in the text, so you have to run the commands they provide to see more.
Templates are good for saving people the trouble of writing boilerplate. They are absolutely not a substitute for understanding. If you want to change something nontrivial in a project generated from a template, you still have to know how it all works.
Also see (Linux, etc.):
https://github.com/Duncaen/OpenDoas
I love it just because it’s so much easier to configure than sudo.
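To give an idea of what “easier to configure” means in practice, a typical complete config is one line (a sketch; it assumes your admin group is wheel and that persist support is compiled in):

# Allow members of wheel to run commands as root, caching authentication for a while.
echo 'permit persist :wheel as root' | sudo tee /etc/doas.conf

# Sanity-check the config file before relying on it.
doas -C /etc/doas.conf && echo 'config OK'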
At Toitū Te Whenua LINZ we’re using Nix in a few places. Two open source repos include Geostore and emergency management tools.
Pros:
Cons:
I’m very much in favour of Nix over any other packaging system I’ve used, with the only possible exception being Rust crates. Despite its strange syntax and lack of documentation, Nix is massively better than packaging using [about six unnamed mainstream packaging systems] then flip-flopping to the latest and greatest variation every couple years, having to learn a huge amount of new best practices and bug/usability workarounds to get to something vaguely stable and idiomatic.
Rst has its problems (terrible headers, no nested inline markup) but the extensibility you get is just wonderful.
I don’t really care about extensibility if it means every time I want an in-line code block with part or all of it linked to another document I need to write my own role. Not supporting nested in-line markup is just brain-dead.
It was a more robust and better-designed option, retaining essentially the same mindset as Markdown. It is unfortunate that Markdown won the popularity contest. But marketing and hype dictated the outcome.
But marketing and hype dictated the outcome.
It’s funny, but OG Markdown was just a dude with a then popular blog and a whole mess of Perl: https://daringfireball.net/projects/markdown/
Markdown is a text-to-HTML conversion tool for web writers.
The overriding design goal for Markdown’s formatting syntax is to make it as readable as possible. The idea is that a Markdown-formatted document should be publishable as-is, as plain text, without looking like it’s been marked up with tags or formatting instructions. While Markdown’s syntax has been influenced by several existing text-to-HTML filters, the single biggest source of inspiration for Markdown’s syntax is the format of plain text email.
(my emphasis)
The list of filters (from https://daringfireball.net/projects/markdown/syntax#philosophy), links from the original:
Gruber never intended MD to become the be-all and end-all of authoring. In fact I think that’s why he didn’t let CommonMark use the Markdown name.
Gruber never intended MD to become the be-all and end-all of authoring. In fact I think that’s why he didn’t let CommonMark use the Markdown name.
Yes, also because he didn’t want Markdown to be a single standard. Which is why he has no problems with “GitHub-flavored” markdown and didn’t want CommonMark using the markdown name.
RST is mentioned as an inspiration to MD, see my comment down below https://lobste.rs/s/zwugbc/why_is_markdown_popular#c_o1ibid
I recently fell down a bit of a Sphinx rabbit hole that got me into RST after years of Markdown and org. I really really appreciate how easy they make it to add your own plugins that can manipulate the parsed tree. That project is temporarily on the shelf but I’m hoping to get back into it when the snow falls more.
I configured my webserver to allow caching of the CSS file, and have it expire after a year [1]. My thought was—the first request is slow, but subsequent hits go fast since the webserver can return a 304 (not modified). But I checked my logs for last month, and … I don’t think it does any good. Out of 14,301 requests for it last month, only 11 requests resulted in a 304. That sucks.
[1] If I make a change to the CSS file, it also gets a new name so as to force a refresh by the client.
I don’t tend to bother with 304 responses. I think usually the idea with caching assets is to make sure that the client doesn’t even make the request in the first place (unless the filename changed with a new hash).
And if the hash did change, then you will never respond with a 304 anyway.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control
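A quick way to see what the client is actually being told (the URL is a placeholder); for fingerprinted assets the usual goal is a header that avoids revalidation requests entirely:

# Inspect the caching-related response headers for the stylesheet.
curl -sI https://example.com/css/site.css | grep -iE 'cache-control|expires|etag|last-modified'

# With a hashed filename, a header like this means no conditional requests at all:
#   Cache-Control: public, max-age=31536000, immutable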
I thought of that after I posted. I do set a Cache-Control: header, so it might be that most clients that do not have it cached will do the one request, then won’t again for a year. I might be able to pull that information out from the logs, but it will at best be an estimate anyway.
On Rust’s “diversity numbers being terrible” and worse than the industry – is it worse than open source? For example, I think the last time anyone checked, there were many fewer women open source programmers vs. paid career programmers.
That is, I think all open source projects have this issue (and lobste.rs and HN seem to as well :-/ )
My impression is that Rust’s gender ratio number is in fact worse than open source. One possible reason is that compilers are a field gender-coded as masculine. That is, I think Rust-the-project’s number is worse than the Rust ecosystem’s (e.g. bodil, the maintainer of the im crate, is a woman) but similar to, say, LLVM’s.
I’d love to see some more insights here.
You’re making the argument that the common wisdom of Arch being unstable is incorrect and you’re positing that running the same install over a couple of machines over the course of a decade proves that.
The thing is, there are people who can say the same thing with virtually any operating system, including the much maligned Windows! :) (There are definitely people out there who’ve been upgrading the same install since god knows when).
What makes your experience with Arch’s stability unique? How does Arch in particular lend itself to this kind of longevity and stability?
I really just meant to say that Arch Linux doesn’t break unusually much compared to other desktop operating systems I’ve used. At least that’s been my experience. The other operating systems I’ve used are mainly Ubuntu, Fedora, and Windows.
Try using arch without updating it for a year or two, then update it all at once. And then try this with Windows, Fedora, Ubuntu again. That’s honestly arch’s primary issue, that you can’t easily update after you’ve missed a few months of updates.
I don’t quite see how that is a primary issue when this is the nature of rolling release distros though. Comparing to point release Fedora and Ubuntu doesn’t fit well, and I’d somewhat expect the same to happen on the rolling/testing portion of Debian and Ubuntu, along with Fedora Rawhide or OpenSUSE Tumbleweed? Am I wrong here? Do they break less often if you don’t update for a year+?
Personally I keep around Arch installs from several years ago and I’ll happily update them without a lot of issues when I realize they exist. Usually everything just works with the cursed command invocation of `pacman -Sy archlinux-keyring && pacman --overwrite=* -Su`.
I don’t quite see how that is a primary issue when this is the nature of rolling release distros though
It’s not necessarily – a rolling release distro could also do something like releasing a manifest of all current package versions per day, which is guaranteed to work together, and the package manager could then incrementally run through these snapshots to upgrade to any given point in time.
This would also easily allow rolling back to any point in time.
It’s actually a similar idea to how coreos used to do (and maybe still does?) it.
That would limit a package release to once a day? Otherwise you need a per-hour/minute manifest. This doesn’t scale when we are talking about not updating for years, though, as we can’t have mirrors storing that many packages. I also think this glosses over the challenge of making any guarantee that packages work together. This is hard and few (if any?) distros are that rigorous today even.
It is interesting to note that storing transactions was an intended feature of pacman. But the current developer thinks this belongs in `ostree` or btrfs snapshot functionality.
I’m confused by this thread. Maybe I’m missing something?
It seems like it should be really simple to keep a database with a table containing a (updated_at, package_name, new_package_version) triple with an index on (updated_at, package_name) and allow arbitrary point in time queries to get a package manifest from that point in time.
No need to materialize manifests every hour/minute/ever except when they’re queried. No need to make any new guarantees, the packages for any given point-in-time query should work together iff they worked together at that actual point of time in real life. No need to make the package manager transactional (that would be useful for rolling back to bespoke sets of packages someone installed on their system, but not for installing a set of packages that were in the repo at a given time).

Actually storing the contents of the packages would take up quite a bit of disk space, but it sounds like there is an archive already doing that? Other than making that store, it sounds like just a bit of grunt work to make the db and make the package manager capable of using it?
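A rough sketch of what I mean, using sqlite as a stand-in for the repo database (the table, timestamps, and version strings are all made up):

sqlite3 repo.db <<'SQL'
CREATE TABLE IF NOT EXISTS package_updates (
  updated_at   TEXT NOT NULL,  -- when the package version hit the repo
  package_name TEXT NOT NULL,
  new_version  TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_updates ON package_updates (updated_at, package_name);

-- Manifest as of an arbitrary point in time: the newest version of each package
-- that had landed on or before that moment.
SELECT p.package_name, p.new_version
FROM package_updates AS p
JOIN (
  SELECT package_name, MAX(updated_at) AS ts
  FROM package_updates
  WHERE updated_at <= '2022-06-01T00:00:00Z'
  GROUP BY package_name
) AS latest
  ON p.package_name = latest.package_name AND p.updated_at = latest.ts;
SQL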
I didn’t read it as some grandiose statement of how awesome Arch is, just one user’s experience.
My equally unscientific experience is that I can usually upgrade Ubuntu once, but after a second upgrade it’s usually easier and quicker to just start fresh because of all the cruft that has accumulated. I fully acknowledge that one could combat that more easily, but I also typically get new hardware after X years, for X lower than 10.
I do have an x230 from 2013 with a continually updated Debian install on it, so 10 more months and I also managed to get to 10 years.
Really though? While it’s true that you can upgrade Windows I have yet to see someone who managed to pull that off with their primary system and things like drivers, etc. accumulating and biting you. This is especially true if you update too early and later realize incompatibility with drivers or software which really sucks if that’s your main system.
Upgrading usually works because there are upgrade paths, but having a usable system afterwards sadly is a less common outcome.
And Linux distributions that aren’t rolling release tend to be way worse than Windows. And while I don’t know macOS, at every company I’ve been at so far a bigger update of macOS always means that everyone is expected to not have a functional system for at least a day, which always is a bit shocking in an IT company. But I have to say I really have no clue what is going on during that time, so I’m not sure what’s really happening. I know from my limited exposure that updates tend to simply be big downloads with long installation processes.
I probably only stuck with Arch all that time precisely because it never gave me a moment to consider switching, so it’s really just laziness.
When I think about other OSs and distributions I’ve used, scary updates of the OS and software are a big chunk of why I don’t use them in one way or another. I used to be a fan of source-based distributions because they gave you something similar, back when I could spend the time. I should check whether Gentoo has something like FreeBSD’s poudriere to pre-build stuff. Does anyone happen to know?
I would have agreed with you from ca. 1995-2017, but I saw several Win 7 -> Win 10 -> Win 10 migrations that have at least been flawlessly running for 5+ years, if you accept upgrades between different Win 10 versions as valid.
I’ve had many qualms and bad things to say about Windows in my life, but I only had a single installation self-destruct since Win 7 launched, so I guess it’s now on par with Linux here for this criterion.
Changing the hardware and reusing the Windows installation was still hit or miss whenever I tried, with the same motherboard chipset I’ve never seen a problem, and otherwise.. not sure.. I guess I only remember reinstalls.
I was doing tech support for a local business around the time the forced Windows 10 roll out happened. I had to reinstall quite a few machines because they just had funky issues. Things were working before then suddenly they weren’t. I couldn’t tell you what the issues were at the time but I just remember it being an utter pain in the ass.
Yeah, I’m not claiming authority on it being flawless, but it has changed from “I would bet on this Windows installation breaking down in X months” to “hey, it seems to work”, based on my few machines.
(Long-time Gentoo user here.) I’m not sure if this answers your question, but I often use Gentoo’s feature to use more powerful machines to compile stuff, then install the built binaries on weaker systems. You just have to generally match the architecture (Intel with Intel, AMD with AMD).
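Roughly what that looks like with plain Portage (a sketch; distributing `$PKGDIR` to the other machines, e.g. via a binhost or rsync, is left out):

# On the fast build host: have emerge also write binary packages (set in make.conf):
#   FEATURES="buildpkg"
emerge --ask --update --deep --newuse @world

# On the weaker machine: prefer the prebuilt binaries where available.
emerge --ask --usepkg --update --deep --newuse @world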
I will need to look into this. I mostly wrote that sort of to keep it at the back of my head. I heard it was possible, but there was some reason that I didn’t end up trying it out. Maybe I should do it on my next vacation or something.
It isn’t stable in the sense that ‘things don’t change too much’, but it is stable (in my experience) in that ‘things generally work’. I recall maybe 10 years ago I would need to frequently check the website homepage in case of breaking upgrades requiring manual intervention, but it hasn’t been like that for a long time. In fact I can’t remember the last time I checked the homepage. Upgrades just work now, and if an upgrade doesn’t go through in the future, I know where to look. Past experience tells me it will likely just be a couple of manual commands.
On the other hand, what has given me far more grief lately is Ubuntu LTS (server). I was probably far too eager to upgrade to 22.04, and too trusting, but I never thought they’d upgrade to OpenSSL 3 with absolutely no fallback for software still reliant on OpenSSL 1.x… in an LTS release (isn’t this kind of major change what non-LTS releases are for?). So far, for me this has broken OpenSMTPD, Ruby 2.7, and MongoDB (a mistake from a very old project – never again). For now I’ve had to reinstall 20.04. I’ll be more cautious in future, but I’m still surprised by how this was handled.
Even Arch Linux plans to handle OpenSSL3 with more grace (with an openssl-1.1 package).
I’ve been talking a lot about `make` lately, it seems, and I kinda feel sometimes like I’m a “Make developer” as much as some folks call themselves “Java developer” or “Go developer” these days, because I’ve spent so much time making my codebases easier to use by using Make as the interface both for one-off tasks without dependencies or file generation (“phony” in Make parlance) and for the textbook use with output files and dependent input files.
Just on Lobsters alone:
make python devex: Towards Clone to Red-Green-Refactor in One Command with a 45+-Year-Old Tool
At my last job I worked with the developer of the Procfile format: https://devcenter.heroku.com/articles/procfile
I asked him a few times to explain what Procfiles were for that couldn’t be done equally well with existing tooling using Makefiles and phony targets.
Never got a straight answer. And now we’re stuck with yet another Filefile entry: https://github.com/cobyism/Filefile
:) I find phony targets to be not as simple as a Procfile (which I’ve accidentally memorized). Makefile I think fits nicely with make just as Procfile fits nicely with `ps`, if you can imagine that heroku is starting a process for you. But it’s a valid point, I’m not arguing. They do similar things in different ways. Most of Heroku’s “ahead of its time”-ness was on the massive effort to detect your app. I kind of accepted touching an empty Procfile while I was amazed at what Heroku was doing near launch.
The Filefile thing is funny and I starred that repo way back when. It’s odd to see them all collected up even if some of those tools aren’t as recognizable anymore. It’s rare for a project to have all those things in the root and they’d be broken up by a similar troupe of `cargo.lock`, `yarn.lock`, `package-lock.json`, `Gemfile.lock`, `poetry.lock`.
I asked him a few times to explain what Procfiles were for that couldn’t be done equally well with existing tooling using Makefiles and phony targets
Have a target named web and a web process type. You’d at least need 2 separate Makefiles then.
As far as I can tell you’d have to implement a Makefile parser to implement the scaling UI. There might be a way to get make(1) to tell you the phony targets but if so it is not obvious to me how. That seems awfully complicated for the use case.
There might be a way to get make(1) to tell you the phony targets but if so it is not obvious to me how.
You would use the same mechanism that bash uses to determine tab-completions.
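(Or, more crudely, something like this pulls the phony names straight out of the Makefile, assuming they’re all declared on `.PHONY:` lines:)

# Print every target listed on a .PHONY: line, one per line.
sed -n 's/^\.PHONY:[[:space:]]*//p' Makefile | tr ' ' '\n' | sort -u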
Have a target named web and a web process type. You’d at least need 2 separate Makefiles then.
Seems like using a sledgehammer to squash a fly if you ask me.
You would use the same mechanism that bash uses to determine tab-completions.
I wasn’t aware that that completes only phony targets?
Seems like using a sledgehammer to squash a fly if you ask me.
Makefiles have power in excess of what Procfiles are used for. There are no dependencies, no patterns, no inclusion, etc.
A well known directory (call it “procs”, or hell “bin”) with executables in it would be a vastly simpler replacement for Procfiles than a Makefile.
A well known directory (call it “procs”, or hell “bin”) with executables in it would be a vastly simpler replacement
TBH I first asked him why make the Procfile over a bin directory and never got a straight answer for that either.
I thought Procfile was more for running a number of services together. I also thought it came from Foreman, but perhaps I’m wrong about that?
I wonder if you’ve considered shell for those tasks, and if so what the pros/cons you see are?
If you have a bunch of shell commands you want to put in a single file, I call that the “Taskfile” pattern, and funnily enough you can use either make or shell for “Task files”.
https://www.oilshell.org/blog/2020/02/good-parts-sketch.html#semi-automation-with-runsh-scripts
In shell, I just use a function for each task:
#!/usr/bin/env bash
# run.sh – one function per task; usage: ./run.sh build, ./run.sh test
build() {
  ./setup.py --quiet build_ext --inplace
}
test() {
  for t in *_test.py; do
    ./$t   # assumes each *_test.py is directly executable
  done
}
"$@"   # dispatch to the function named by the first argument
or you can also use a case statement.
Here are some pros and cons of make vs. shell I see:
(In shell you can dispatch with "$@", but it doesn’t provide great error messages.)
Make downsides: you need a \ at the end of every line of a multi-line command in a Makefile, which is very ugly.
Any others? I prefer shell but I can see why people choose Make. Really it is the language “mashup” that bothers me :-/
BTW I take Task files to an extreme and Oil has over 10,000 lines of shell automation
https://www.oilshell.org/release/0.12.3/pub/metrics.wwz/line-counts/overview.html#i32
e.g. to generate all the tests, benchmarks, and metrics here: https://www.oilshell.org/release/0.12.3/quality.html
Related story from 5 months ago: https://lobste.rs/s/lob0rw/replacing_make_with_shell_script_for
Other comment: https://news.ycombinator.com/item?id=23195313 – OK another issue is listing the “tasks”, which is related to the autocompletion issue. Make doesn’t have this by default either
Make is a tool for managing a directed acyclic graph of commands. So I’m not sure why you’d compare it to bash. Make is a wrapper for bash lines that defines the relationships between your bash code.
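The textbook two-node version of that graph looks like this; make only re-runs the commands whose inputs have changed:
# editing foo.c re-runs both steps; touching nothing re-runs neither
foo: foo.o
	cc -o foo foo.o
foo.o: foo.c
	cc -c foo.c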
I understand that theory, but that’s not what the OP is talking about using it for. Look at one comment he linked:
https://lobste.rs/s/7svvkz/using_bsd_make#c_dri73e
Those are six .PHONY verbs, not nouns. Even build is a verb.
So they’re using make as a “Task runner” (verbs), not as a build system (to “demand” nouns). (FWIW Oil’s Ninja build has hundreds of nouns, mainly for tests and several build variants: ASAN, UBSAN, coverage, etc.)
Make isn’t great as a build system for all the reasons discussed here and many other places: https://lobste.rs/s/sq9h3p/unreasonable_effectiveness_makefiles#c_v7pkr0
As mentioned, I wrote 3 Makefiles from scratch starting from 2017 and concluded it was a big mistake (and I’m still maintaining them).
For those use cases (and I think most), Python/Ninja is way better, and similar to Bazel, but much lighter. Sandboxing like Landlock Make’s would be great to have, though – that is a real problem.
I think you forked Make and made it a good build system for your use cases, but that doesn’t mean it’s good in general :)
Where is your Python Makefile? If your effort to write a Makefile for Python didn’t work out, that doesn’t make it Make’s fault – there were probably just some things you failed to consider. I wrote a Makefile for Python about a year ago: https://github.com/jart/cosmopolitan/blob/master/third_party/python/python.mk If I build Cosmopolitan, then rm -rf o//third_party/python, and then time make -j16 o//third_party/python, it takes 17 seconds to compile Python and run the majority of its tests. The build is sandboxed and pledged. It doesn’t do things like have multiple outputs. We removed all the code that does things like communicate with the Internet while tests are running.
It starts here: https://github.com/oilshell/oil/blob/master/Makefile
It definitely works, but doesn’t do all the stuff I want.
Where Make falls down is having any kind of abstraction or reuse. That seems to show in your lengthy ~4500 line Makefile. (If it works for you, great, but I wouldn’t want to maintain the repetition!)
For the Python makefile, I want to build oil.ovm, opy.ovm, hello.ovm, i.e. three different apps. I’m using the % pattern for that. If I want to add a second dimension like ASAN/UBSAN/coverage, in my experience that was difficult and fragile.
For the log analysis, I want to dynamically create rules for YYYY-MM-DD-accesslog.tar.gz.
For the blog, I want to dynamically make rules for blog/*/*/*.md, and more.
That pattern interacts poorly with all the other features of Make, including deps files with gcc -M, build variants, etc.
In contrast, it’s trivial with a script generating Ninja.
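A sketch of what I mean for the blog case (md2html here is a stand-in for whatever actually renders a post):
# generate_ninja.py – emit one build edge per markdown file
import glob

with open('build.ninja', 'w') as f:
    f.write('rule md2html\n  command = ./md2html $in > $out\n\n')
    for src in glob.glob('blog/*/*/*.md'):
        out = src.replace('.md', '.html')
        f.write(f'build {out}: md2html {src}\n')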
But I don’t think even those use cases are necessary to justify my opinion; there are dozens of critiques of Make that are 10 years old and based on lots of experience. And comments like this right in the thread:
https://lobste.rs/s/sq9h3p/unreasonable_effectiveness_makefiles#c_v7pkr0
I have looked quite deeply into Make, and used it on a variety of problems, so I doubt it will change my mind, e.g.
http://www.oilshell.org/blog/2017/10/25.html#make-automatic-prerequisites-and-language-design
Remember when I say “make is a bad language”, this is coming from the person who spent years reimplementing most of bash and more :) i.e. I don’t really have any problem with “bad” or “weird” or “string-ish” languages, at least if they have decent / salvageable semantics. And I don’t think Make does for MANY common build problems.
The sandbox/pledge stuff you added to Make is very cool, and I would like something like that, and hopefully will get time to experiment with it.
For the Python makefile, I want to build oil.ovm, opy.ovm, hello.ovm, i.e. three different apps. I’m using the % pattern for that. If I want to add a second dimension like ASAN/UBSAN/coverage, in my experience that was difficult and fragile
Consider using o/$(MODE)/%.o: %.c pattern rules. Then you can say make MODE=asan.
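Spelled out, that looks roughly like this (the mode names and flags are just placeholders):
MODE ?= opt
CFLAGS.opt   = -O2
CFLAGS.asan  = -O1 -g -fsanitize=address
CFLAGS.ubsan = -O1 -g -fsanitize=undefined

o/$(MODE)/%.o: %.c
	@mkdir -p $(@D)
	$(CC) $(CFLAGS.$(MODE)) -c -o $@ $<
so make MODE=asan o/asan/foo.o and plain make o/opt/foo.o go through the same pattern rule.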
Remember when I say “make is a bad language”, this is coming from the person who spent years reimplementing most of bash and more :)
I don’t doubt you’re an expert on shells. Being good at shells is a different skillset from directed acyclic graphs.
That seems to show in your lengthy ~4500 line Makefile. (If it works for you, great, but I wouldn’t want to maintain the repetition!
If by repetition you mean my makefile code is unfancy and lacks clever abstractions, then I’ll take it as a compliment. Python has 468,243 lines of code. I’m surprised it only took me 4k lines of build code to have a fast parallelized build for it that compiles and runs tests in 15 seconds. Even the Python devs haven’t figured that out yet, since their build takes more like 15 minutes. I believe having fast build times with hermeticity guarantees is more important than language features like rich globbing, which can make things go slow.
I use Tailwind* on most of my personal projects. Most of them also happen to be built with React.
I don’t think I would use Tailwind if I wasn’t using it with some kind of component framework. It’s hard to take this article too seriously given that it doesn’t mention how many of its points are obviated by using Tailwind within that context.
*I’ve stopped using Tailwind directly, and now use UnoCSS. UnoCSS allows me to use the Tailwind classnames that I am familiar with, but is faster with a nice set of additional features that help cut down on cruft:
<div class="hover:(bg-gray-400 font-medium) font-(light mono)"/>
transforms to:
<div class="hover:bg-gray-400 hover:font-medium font-light font-mono"/>
Can also declare shortcuts in a configuration file like:
shortcuts: {
'btn': 'py-2 px-4 font-semibold rounded-lg shadow-md',
'btn-green': 'text-white bg-green-500 hover:bg-green-700',
}
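Then (assuming those shortcut names) the markup just uses them like any other utility class:
<button class="btn btn-green">Save</button>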
<div class="hover:(bg-gray-400 font-medium) font-(light mono)"/>
Okay, but what does this div do? Without a semantic tag or a class name, the developer will have no idea why it’s there.
Given that the point the parent comment is making is that Tailwind is generally used within the context of a component framework, that should be made pretty clear by the name of the component.
Really, I think many of the commenters in this thread could do with reading up on the rationale behind Tailwind. There are some great resources out there, but if you don’t have the problems Tailwind aims to solve, then it’s hard to understand why some people are so passionate about it. I’m sure there are plenty of cases where Tailwind is overkill, but that doesn’t make it an antipattern. If you’re going to criticise something, at least try to understand the motivation behind it.
I was sceptical myself at first. I even tried it early on and it didn’t stick. But I kept running into the same problems, so I determined to give it an honest chance and it has been an absolute joy to work with.
It’s the first time in 15+ years of web development where I haven’t had to constantly think about how to structure things, which class naming convention to use, how to name a ‘container/wrapper/inner’ class, keep track of which CSS is in use and where, or deal with CSS specificity issues.
But the best part is that it gives you flexibility with limitations. These limitations are what make it trivial to maintain a consistent design system, and it’s up to you to decide where those limitations are. The defaults will take you a long way, but if/when you need to change something (spacing, colours, fonts, anything…), it’s usually just a few lines in the config file.
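For example, adding a brand colour and an extra spacing step is only a handful of lines (the names and values here are just placeholders):
// tailwind.config.js
module.exports = {
  theme: {
    extend: {
      colors: { brand: '#0f766e' },
      spacing: { '18': '4.5rem' },
    },
  },
}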
As a kiwi, the choice of animal made me flinch!
TIL