Hello! Nice post!
Any reason why you didn’t use the functions in pkgs.dockerTools to build an efficient docker image directly?
https://nixos.org/manual/nixpkgs/stable/#sec-pkgs-dockerTools
There is also pkgs.ociTools, which makes no assumptions about the container runner used for that container.
https://nixos.org/manual/nixpkgs/stable/#sec-pkgs-ociTools
The dockerTools bit is near the end:
Nix is able to make more optimal Docker image layers by using the native Nix dockerTools to build an image instead of a Dockerfile, but the whole point of this blog post is to show you the Dockerfile approach.
Poster isn’t the author (@mitchellh is here, though they haven’t commented in an age… ).
Likely, this approach does not require people to install Nix on their workstation. For open source projects, requiring Nix may be a bit too much.
I’d consider the approach in the post if you have a sizeable amount of Dockerfile code already that would be too much work to port over to pkgs.dockerTools
I use a similar approach here: https://github.com/akvorado/akvorado/blob/main/Dockerfile
It’s possible to enhance caching by running nix develop -c true after copying flake.nix and flake.lock (and additional files, depending on what is needed to get the shell).
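For illustration, a minimal sketch of that layer ordering, assuming a flake-based project and a base image that ships Nix; the final build command is just a placeholder, not taken from the linked Dockerfile:

FROM nixos/nix
WORKDIR /src
# copy only what the dev shell depends on, so this expensive layer stays cached
COPY flake.nix flake.lock ./
# depending on the base image you may need --extra-experimental-features 'nix-command flakes'
RUN nix develop -c true
# copying the rest of the sources afterwards does not invalidate the cached layer above
COPY . ./
RUN nix develop -c make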
The problem with default fonts is that monospace is unlikely to pair correctly with the body font. The problem with system fonts is that you cannot pair monospace and a body font. I am using custom fonts mostly for this issue. And I tweak my monospace font to really match the metrics of the body font.
Browsers artificially shrink monospace fonts, if that’s what you’re referring to. For some reason, specifying monospace, monospace instead of just monospace suppresses this behavior.
Ok, this surprised me…
This seems to be a major clusterfuck. Looking at the sudoers man page, it spends some time explaining how use_pty prevents a dangerous attack, just to then say: “This flag is off by default.” Which of course leaves the question: WHY? You know you have a major security vulnerability, you have a mitigation in place, and it is “off by default”?
For su the same applies, though you cannot even set this in a config file.
On Debian, use_pty is enabled by default in the shipped configuration file. This is quite recent (added two years ago).
Isn’t it off by default to protect non-interactive sessions from wonky output? At least that’s the message I get from the linked article.
Nah, I don’t want someone else to decide (a) what is safe for me, and (b) what my kids should have access to. This is a decision that I won’t delegate to anyone, but me. My primary concern is privacy – once they have the data, it is just a matter of time before it is abused. 1984.
That said, this may be a good option for technically-challenged parents, who just want a safer internet for their kids, and do not wish to pay for various anti-virus companies selling them the notion of “safer internet” and “internet monitoring”. Of course, the challenge is, the kids should not have Administrator or root access to the device, otherwise they can simply change the DNS back to 8.8.8.8.
I agree. It’s a shame that they don’t offer an un-censored service. Other than that, I think children (and bad actors) are probably both able to get around DNS-based censorship these days.
They should really work on their page then. I just re-checked. While I saw both the kids and the zero versions, I actually can’t tell what the difference between zero and the regular one is.
And are you sure? Zero says:
Massively increase the catch rate for malicious domains — especially in their brutal early hours — by combining human-vetted threat intelligence with advanced heuristics that automatically identify high-risk patterns.
Which to me sounds like “it does filtering, like regular, but more”. While the landing page explicitly states “Integrated protection against millions of malicious domains — from phishing websites to C&C servers.” without the context of zero.
This seems interesting. At first glance, I thought it was a more ergonomic syntax for Go (something like CoffeeScript for JavaScript back in the day); however, it seems like it’s a configuration language for Go. Which is neat, but there are already quite a few configuration languages with Go support (e.g., Starlark). The big thing that most of these fail at, however, is static types: it seems particularly easy to make type errors in configuration languages, and the usual dynamic typing guidance (“just catch type errors with tests!”) seems particularly unhelpful here (I really don’t want to have to write tests for my configuration if I can help it). Since I’ve dabbled with TypeScript recently for some web frontend stuff, I’ve come to the conclusion that it’s probably the best bet for an embedded scripting language in the general case (the type system is solid, the syntax is familiar, there’s lots of existing high-quality tooling, etc.), provided of course that we can get a decent embeddable runtime for host languages.
That said, does anyone know how we can make tsc aware of host functions? E.g., if I bind a foo() func to the TS/JS runtime and invoke it in my program, how do I make tsc not complain that foo() isn’t defined/imported in my TS source code?
expr is not really a configuration language. It is used to write expressions. It is useful if you want to integrate a rule system in an app (filtering a subset of data, giving permissions for a resource). It is easy to learn because its scope is limited, and it is fast because it compiles to bytecode and has an optimizing compiler. The author is also quite responsive. Thanks Anton!
I’ll be honest, this doesn’t look very useful. This is a tutorial of how to invoke Nix, not how to use it. The second class of target people, “who have tried to cross the chasm to using it in their daily workflows but haven’t gotten there yet”, aren’t stuck because nix-build and nix-shell “present significant cognitive overhead”* compared to nix build and nix shell. They’re stuck because they don’t know how to package things. My daily workflow does not involve rebuilding bat. My daily workflow involves building my project, and to do that with Nix, I need documentation that explains how to make Nix do that. This doesn’t provide that outside of links to existing documentation, and it’s that documentation that is, imho, most in need of attention. This just doesn’t feel like a good use of documentation-directed effort to me.
*As an aside, this claim makes no sense to me whatsoever. Learning a dozen commands isn’t any harder than learning a dozen verbs on one command.
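For readers keeping score, a rough mapping between the commands being compared (not exhaustive, and the flake-style invocations assume flakes are enabled):

nix-build            # roughly: nix build
nix-shell            # roughly: nix develop (for a project dev shell)
nix-shell -p hello   # roughly: nix shell nixpkgs#hello (for an ad-hoc package)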
As a person interested in nix, I find the structure and writing style of Zero to Nix far more useful than any other nix documentation I’ve read so far. Especially the opinionated style is useful as an orientation guide in the vast sea of things to learn and try out.
I’m glad it’s useful to you, then. Hopefully you’re able to get a foothold of understanding that carries you through the older strata of documentation.
Completely agree. I applaud the effort at making Nix more approachable, but Nix is still way too obscure and things that are simple to do with other package managers are still too difficult to do with Nix.
I decided to peek into Nix again after seeing this guide (I’ve tried and given up on multiple occasions in the past). I wanted to try making a C++ development environment to build a project. It’s a shame that creating a “flake.nix” file to do something even that simple is so complicated that you have to resort to using a template from a third-party. Writing the file by hand, from scratch, is basically out of the question for a beginner.
But ok, I’ll use the template. Now it’s time to add some packages. I use the “nix search” tool, which shows me a large list of package names, but doesn’t tell me anything about what’s in these packages. For example, what can I find in “clang-tools_11”? Is there a way to list the files that package includes? What about “clang11Stdenv”? That looks like it could be useful, but again, there’s no way (that I can see) to view what makes up that package or what it actually provides.
In contrast, a package manager like pacman can list all of the files that a package will install. Even Homebrew will tell me what a package’s dependencies are, and will give me a hyperlink to the “formula” for a recipe, so I can see exactly how the package is defined. Is any of this possible with Nix? If it is, that’s the kind of stuff that is useful to know for using Nix as a package manager. Not rebuilding bat.
The top search result for “how to view contents of a package in Nix” shows this answer:
ls -l $(nix eval -f /etc/nixos/apps --raw xerox6000-6010.outPath)
What does this mean? How is anyone who hasn’t already invested dozens of hours into Nix supposed to understand this, let alone figure this out on their own?
In the end, I think this guide is like so much other Nix documentation. It provides surface level, trivial examples to give the illusion that “Nix is easy”, but leaves you completely ill-equipped for doing anything useful.
Sorry for the rant, but the hype around Nix is never ending, and I often feel like I’m being gaslit because every time I check it out I end up feeling frustrated and extremely unproductive. This doesn’t seem to match the experience of all of the ardent Nix fans out there, so I’m left feeling confused about what I’m doing wrong.
I often feel like I’m being gaslit because every time I check it out I end up feeling frustrated and extremely unproductive. This doesn’t seem to match the experience of all of the ardent Nix fans out there
I hear you. As an ardent Nix fan, I have similar experiences too. Sometimes I’ll be trying to get something packaged, and the program or its language ecosystem isn’t easily “tamed” by Nix, and I get so frustrated. And that is from someone with quite a lot of experience and background in build systems, isolation, Nix itself, and lots of experience figuring it out. Days like that I feel like I lost, and sometimes get pretty down about it. A couple years ago I “gave up” and even installed another Linux distro. (This didn’t last for more than a day…)
I hope one day Nix is as easy to use as anything else. Or even easier. It empowers me to do so much without fear, and I’m addicted to that power.
My perspective is that to do this we need to:
What I don’t want to do is beat my head against the wall every time I want to try some new software. I admit that if it takes me more than an hour, I’ll sometimes boot a VM and try it out in another distro. That’s okay by me. By my count, more Nix in the world is good for everyone, and when it doesn’t serve me that is okay too.
As an aside, this line: ls -l $(nix eval -f /etc/nixos/apps --raw xerox6000-6010.outPath) seems a bit weird. However, the idea of listing what files a package will install is a bit … different in Nix, because “installing” is … a bit different. We’re going to be publishing either a z2n page, or a blog post about that soon.
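In the meantime, a sketch of the pacman/Homebrew-style queries asked about above, assuming a recent Nix with flakes enabled (nixpkgs#hello is just a stand-in package):

# realise the package and print its store path
nix build nixpkgs#hello --print-out-paths
# list the files it provides
ls -R $(nix build nixpkgs#hello --print-out-paths)
# show its runtime dependencies (the closure), now that it's in the store
nix path-info -r nixpkgs#hello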
For searching, at least, I’ve always used search.nixos.org rather than the CLI tool. The search results usually have a link to the package definition, though often the package source isn’t legible to someone who isn’t an advanced user. clang-tools_11 is defined here, if anything in there is helpful.
Personally, I run NixOS on some home servers and I find the declarative system configuration aspect to be incredibly helpful. But when it comes to working on individual projects, I mostly develop them in their own respective tooling, sometimes with a nix-shell or nix develop config to get the dependencies I want installed, and I only figure out how to make them buildable with Nix later on.
I’m definitely the target audience for this, and having just gone through the Quick Start I find myself agreeing with you. I was optimistic at first, as what’s there is presented clearly, but as I reached the last page I realised I don’t feel any more informed than I was in the first place and all I’ve done is run someone else’s ‘flakes’ without really understanding what’s happening (I understand broadly what is happening with each command of course, but not in any kind of sense that I could actually reproduce it myself). None of this makes Nix ‘click’ for me unfortunately. It’s a decent start, but as you said it’s just not all that helpful in its current state. It needs to provide structure and order to the learning process. You can’t just gloss over the Nix language… when you introduce me to something like this, I want to read and understand it, so that I might be able to write it myself:
https://github.com/DeterminateSystems/zero-to-nix/blob/main/nix/templates/dev/javascript/flake.nix
But nothing I’ve read here has given me any kind of guidance on that. It’s like I’m supposed to just trust that it works. What do I take away from that? A good guide will engage my curiosity and encourage me to look deeper at what’s being presented. Shouldn’t the examples be as trimmed down as possible? Is this part really necessary?
# Systems supported
allSystems = [
"x86_64-linux" # 64-bit Intel/ARM Linux
"aarch64-linux" # 64-bit AMD Linux
"x86_64-darwin" # 64-bit Intel/ARM macOS
"aarch64-darwin" # 64-bit Apple Silicon
];
# Helper to provide system-specific attributes
nameValuePair = name: value: { inherit name value; };
genAttrs = names: f: builtins.listToAttrs (map (n: nameValuePair n (f n)) names);
forAllSystems = f: genAttrs allSystems (system: f {
pkgs = import nixpkgs { inherit system; };
});
I kind of understand what’s happening here, but that’s quite off-putting if you’re trying to convince me that Nix is easy and approachable. Am I supposed to ignore this? It only makes me wonder why I would bother. I’m sure you can do things in a far simpler manner, so why this complexity? Perhaps there’s a little too much emphasis on correctness before you’ve reached the natural point of explaining why it is necessary. A nice start, and I enjoyed going through it, but it needs much more work to live up to the promise of its title, and ultimately I’m disappointed.
Thank you for working on this, and I hope you continue. Maybe I’ll be able to come back later and have a better experience.
Thanks for this feedback. One of the things in discussion around Flakes is system specificity and the overhead / boilerplate it tends to create. We’ll take a look at how we can simplify the documentation on this and make it more straightforward. I appreciate it!
That makes sense, thanks for the explanation. I reread my comment and I think it comes off as a bit too negative. I really did enjoy the content that is there so far, I just wanted it to keep going!
Whew, that is really great to hear =).
For what it is worth…
The nix.dev article looks helpful, so I’ll definitely go through that.
What I was really hoping for was more of a structured path to guide me through these concepts (a continuation from the guide). I realise how challenging that is likely to be given how broad the ecosystem is. Everyone probably has their own motivation for learning it. For me, the reproducible development environments are a big draw. For another person it might be something completely different. So I don’t know what that looks like exactly. Possibly branching paths after the quick start guide in the form of guides for the most common use cases? Whatever the case I think the hand holding needs to continue a little longer before you let people loose to discover the rest of the ecosystem for themselves. My best learning experiences have been just that. I follow the guide until I can’t help imagining all the possibilities and I am naturally compelled to go off and explore those ideas for myself. That’s where my journey really starts. If I’m going through a book (for example), that point is usually somewhere well before the middle and I may not even finish because I have enough confidence that I know what to look for. With this guide I still feel lost in a vast sea. It still feels like there’s a very large investment up front (in Nix), and I’m just trying to find the motivation to push through (hoping that it is what I imagine it to be).
Anyway, I hope my feedback is helpful in some way. I guess what I’m trying to say is that I definitely don’t expect Zero-to-Nix to be an exhaustive guide to the entire Nix ecosystem, but I do think it could carry the reader a little further, perhaps until they’re at least comfortable in writing their own simple flakes (for common use cases). A little extra confidence. That might be enough to reach that wonderful state of inspiration that drives further learning.
They provide templates for using flakes with some languages. Depending on the language you want to use to build stuff, that’s what you should look at. I think they don’t spell it out because they tried to keep everything language-independent in the text, so you have to run the commands they provide to see more.
Templates are good for saving people the trouble of writing boilerplate. They are absolutely not a substitute for understanding. If you want to change something nontrivial in a project generated from a template, you still have to know how it all works.
I think it would make it easier to accidentally do really dangerous things. So maybe that’s why it doesn’t have such a way.
You mean like that?
% grep umount /usr/local/etc/doas.conf /usr/local/etc/sudoers
/usr/local/etc/doas.conf: permit nopass :network as root cmd umount
/usr/local/etc/sudoers:%network ALL = NOPASSWD: /sbin/umount -f *
When you specify the doas(1) rule as below:
permit nopass :network as root cmd umount
You are permitting the use of umount(8) with all possible arguments (or with none at all).
As a general rule, if you see * in a sudoers file, there’s probably a privilege escalation issue of some sort. Before the glob is matched, the program arguments are concatenated with spaces, this means that permitting “rm /foo/bar/baz/*” actually also permits “rm /foo/bar/baz/nonexistent /any/file/you/like” and there’s no way to fix it.
doas allows you to either specify a program (for which any arguments are permitted) or a program and an (optionally empty) list of arguments which must be passed verbatim. It’s incredibly difficult to provide much more flexibility than this. You would need a configuration file syntax that distinguishes between string literals, globs (or regular expressions), repeating arguments, etc. and how they are split. Once your configuration file syntax is sufficiently complex, you now have to document it for end users. At this point I think you will likely have the same issue that sudoers has: you need a long manpage to explain the syntax, which nobody reads, and as a result nobody understands the intricacies of it.
I genuinely believe it’s less likely you will shoot yourself in the foot if you wrap whatever operation it is you need to perform with a simple python or similar (or sh/bash but only if you know what you’re doing) script which takes arguments and sanitizes them. You can refer to this script in your doas.conf.
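As a sketch of that wrapper approach (the path, the mount-point restriction, and the group are made up for the example):

#!/bin/sh
# /usr/local/bin/umount-media: only allow unmounting mount points under /media
set -eu
target="${1-}"
case "$target" in
  *..*) echo "refusing path traversal: $target" >&2; exit 1 ;;
  /media/?*) exec /sbin/umount "$target" ;;
  *) echo "refusing to unmount: $target" >&2; exit 1 ;;
esac

and then in doas.conf, permit the wrapper instead of umount itself:

permit nopass :network as root cmd /usr/local/bin/umount-media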
I’m all for experimenting with software, and I know that sometimes a simpler tool is a better solution for simpler problems.
But this … this is PHP with go. More specifically, it’s 90s, early 2000s PHP, the worst kind of PHP. I can’t think of a single good reason to ever use this for anything, ever.
Again, experiments are valid, and sometimes people do stuff just because, and it’s all fine, and I don’t mean any moral or professional judgement of the author.
But damn this looks horrible.
Sorry it’s not to your liking!
I don’t disagree that this is like PHP in some ways. The ability to add a file to the filesystem and automatically get a URL route in mod_php was a pretty nice experience, especially for quickly developing apps. But it also turns out to be helpful for long-term maintenance of a site. One of the challenges of using Go on the frontend is each project does its own project layout and organization.
As a former PHP developer who then switched to Rails briefly and then to Django for many years (and then to Go), I’m interested to hear what you don’t like about it. Maybe Pushup can avoid those fates. ;^)
The mix of code and HTML, without a separation of templates and application logic, just gives me vietnam war flashbacks of horrible PHP code I’ve seen in the wild. And as much as this is not necessarily what people will do, if it’s allowed, someone will abuse it.
There’s also no concept of or affordance for layers. Maybe it’s a personal bias, but I don’t care how small your project is, don’t query your database straight from the view/template. Even your example code is doing this, so it seems endorsed by the tool.
I also don’t love the syntax for code blocks, but that’s a purely aesthetic comment, I have no good reason for it.
All that aside, as I said, you do you, and as long as I am never asked to maintain it on production, I won’t harbor any hate towards your project =P
Had similar flashbacks when looking at the example in the repo. We used to call the files that contained a mix of HTML, CSS, JS, and PHP the “Da Vinci Code”.
Totally agree that it’s a cool experiment. Also agree that maintaining it in production isn’t on the list of things I’d sign up for.
The mix of code and HTML, without a separation of templates and application logic, just gives me vietnam war flashbacks of horrible PHP code I’ve seen in the wild. And as much as this is not necessarily what people will do, if it’s allowed, someone will abuse it.
Isn’t that the default in most frameworks though? As much as better things exist, Rails still defaults to ERB for example
I don’t know Rails in detail, but for things like Flask or Django, you’d have to go significantly out of your way to make the kind of mess that PHP makes basically the path of least resistance.
Take querying the database in templates. I don’t think it’s even possible to do so in Flask, because templates are a separate language, and anything you pass to them must be serializable, which DB connections aren’t.
You can have super messy, over complicated templates, sure. I certainly have done that and felt the pain, but the architecture of the framework either prevents or heavily steers you out of that.
Take querying the database in templates. I don’t think it’s even possible to do so in Flask, because templates are a separate language, and anything you pass to them must be serializable, which DB connections aren’t.
You can always use app.jinja_env.globals.update to add your database’s API :)
FWIW, people can and do abuse the separation of application logic from HTML, namely with insanely bloated HTML/JS monstrosities. A lot of web apps written in PHP feel a lot faster and less hostile, partly because PHP encourages you to keep the HTML somewhat manageable.
I cut my teeth on PHP back in the mod_php era. In the early days of a project, this sort of file-based routing and “pushing everything into templates” was a nice experience, but as projects grew they became unmaintainable. Too much stuff lived scattered around templates, and presentation was mixed with other concerns. IMHO there’s a reason the PHP folks moved away from this sort of structure.
That said, if this is working for you, I’m glad you’re building it. I think a diverse ecosystem is a good thing, but I suspect your page-based-routing design decision is going to seriously limit your project’s adoption.
Just to add another point on why 90s/2000s PHP was terrible: security. By having the entire app’s filesystem serve up “scripts”, you make it hard to protect against malicious upload attempts. Most larger web sites will end up with some way to host user-uploaded files, which requires at least a part of the file system to be writable by the application.
In PHP it was often the case that the web server ran under the user who was able to upload to the site, which meant the entire file tree was writable (in principle). Then it becomes very easy for a malicious user to upload a .php file straight into the file tree, then execute it. And if there was a designated uploads directory, they’d upload a file via path injection a la ../../foo.php to place it outside the intended upload directory. Or if that wasn’t possible, they’d upload a .txt file or whatever and trick running scripts into include()ing the file through path injection. Or a .htaccess file to make Apache execute the PHP scripts with another file extension. And so on and so forth.
There are so many reasons that nobody but PHP used this approach of scattered files in the file system. Even CGI wasn’t this bad - you’d usually have a restricted read-only directory with cgi-bin files that was entirely separate from your web root.
The ability to add a file to the filesystem and automatically get a URL route in mod_php was a pretty nice experience, especially for quickly developing apps.
That’s not PHP specific, all CGI is like that
That’s not PHP specific, all CGI is like that
cgi uses files as entry points, but you can also go to paths beyond the initial cgi path and route it internally to the script. See PATH_INFO. PHP can kinda do that too but it’d more often want mod_rewrite nonsense whereas with cgi it just works as part of the interface design.
But this … this is PHP with go. More specifically, it’s 90s, early 2000s PHP, the worst kind of PHP.
To my mind that was the worst PHP for most apps only because at the time PHP didn’t offer a path toward a more sustainable organizing model as an application grew (also security issues). At the same time, I think it was the best PHP to start a project with. I see the power of Pushup as being able to start projects simply and quickly while still being able to evolve them into a maintainable app.
The distribution of application lifetimes seems something vaguely like a power law graph with a gap and bump up as the tail grows. Most projects live a few minutes or hours. A tiny fraction of those live for days. A tiny fraction of those live for weeks. The proportion that live for a decade or more is minuscule, but at the same time it seems like a project that lives for more than a year or two will also live for a decade. We can predict the fate of a huge portion of the left side of the graph because many projects are intended to live only a short time. However, we’re very bad at predicting the right side of that graph. It’s a strangely common story to hear of a business running critical processes on an application built by a hobbyist when the business was just started. I see the value of Pushup in being able to start projects more quickly while still being able to evolve them as they continue to live on.
I see the power of Pushup as being able to start projects simply and quickly while still being able to evolve them into a maintainable app.
Is it there, though? Like, more than in PHP? If it grows badly, you’ll still have to refactor the shit out of all the pages into a coherent architecture. All the same opportunity for it to grow badly is there, same as with PHP.
On the other hand, I’m kinda OCDish, so even with the smallest apps I go hard on proper architecture. Sometimes it’s an asset, but often it’s an obstacle, so maybe it does boil down to personal preference.
Is it there, though? Like, more than in PHP?
Probably not because PHP has grown a lot since the early 2000s. The gains seem to have been made by accepting frameworks of a comparable minimum weight and complexity as other languages. IMO Pushup keeps the initial bar much lower while not sacrificing anything on the upper end.
All the same opportunity for it to grow badly is there, same as with PHP.
I think it does provide the same opportunity for it to grow badly but I also think it provides more of an opportunity for it to grow well. Pushup provides an ability to pull in Go libraries and to write your own lower-level helper functions in Go when performance requires it in a way that is much more gradual than jumping to writing an extension in C. It also delivers a Go module that could be adopted wholesale into another application in a way that requires less of a deployment hassle than PHP where you likely have to reconfigure a reverse proxy in front of the app to overlay one app’s routing atop another app’s to give the appearance of a single application. Granted these things aren’t in the quickstart sales pitch and aren’t very clearly documented. More like implied, e.g.
Pushup apps compile to pure Go, built on the standard net/http package. Fast static binary executables for easy deployment. Easy to integrate into larger Go apps
So:
> pushup new godemo
> cd godemo
> pushup build
> grep 'func.*Respond' build/index.up.go
func (up *IndexPage) Respond(w http.ResponseWriter, req *http.Request) error {
That function signature would be a familiar hook for most users of other Go frameworks.
but I also think it provides more of an opportunity for it to grow well.
Yeah, I think you have a point there.
For completions, you can make them autoload, so you don’t need to source them. Put them in a directory in your fpath and use #compdef kubectl at the top (already done by kubectl).
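Concretely, something like this, where ~/.zsh/completions is just an example location:

# generate the completion file once; kubectl already emits the #compdef line
mkdir -p ~/.zsh/completions
kubectl completion zsh > ~/.zsh/completions/_kubectl
# in ~/.zshrc, before compinit is called:
fpath=(~/.zsh/completions $fpath)
autoload -Uz compinit && compinit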
Why would it be? Plenty of articles posted here strongly advocate for one technology or other, including technologies that happen to have been originated by a corporation (Rails, GraphQL, gRPC…)
It’s “why you should use typescript” from a site called “typescriptcourse.com”, and it doesn’t do anything different or go deeper than any other “why typescript” essay.
As usual, remember it’ll be patched until the heat death of the universe by third parties that have this in their LTS distros: RHEL, Ubuntu, no clue about SUSE.
As a side note NetworkManager’s connection sharing uses unbound as the server.
Yes, but who will be looking for issues to patch? Particularly security bugs that only exhibit buggy behavior in the face of malicious input and are therefore unlikely to be discovered by accident by end users (or, now that the project is unmaintained, by developers).
Developers are not the ones looking for the issues to patch normally. The standard path starts from a security researcher who hopefully reports it to the project. The only difference is that they’ll have to report it to either some distro, or project zero, or something similar.
On my first reading of this headline I was wondering which cookies Avast had acquired, and how that is even technically possible to do.
It’s the “I Don’t Care About Cookies” extension that has been acquired by Avast. Though it sounds like mostly an acquihire, as he talks about working on other products for them.
FWIW, I’ve never used this extension because I feel like it is dangerous to go randomly accepting T&Cs (which is what the cookies popups actually are) in a browser window where that might get linked to one of my accounts. It’s not uncommon for the wording to say not only do you accept cookies, but you fully agree with the privacy policy.
If it’s a site I’m going to come back to, I am definitely making sure I click on the correct button for “no I don’t agree to your random and otherwise not-enforceable nonsense that I don’t have time to read”.
I now tend to open most untrusted websites (such as links from orange or blue websites) in incognito mode, click on the most obvious “go away” link and close the window later, safe in the knowledge that I didn’t really agree to anything binding. I’d be reasonably happy with an extension set to incognito mode only, to save that click, but I’m pretty sure only the other way around is currently possible.
I’ve never used this extension because I feel like it is dangerous to go randomly accepting T&Cs (which is what the cookies popups actually are) in a browser window where that might get linked to one of my accounts.
Plus like … why accept the cookies if you can decline them? The whole premise is nonsensical.
Suggested replacement: https://addons.mozilla.org/en-US/firefox/addon/consent-o-matic/
Yes, that seems much better designed, since it lets you set your preferences in simple categories and then applies those choices everywhere - which is how compliance with the law should have been implemented in the first place.
Thanks for the suggestion, I’ve installed it to try it out. The only thing I can’t see is a way to override these preferences for a specific site if needed.
I go a very different route.
I block the consent banners & popups where I can with uBlock origin. Get out of my way, I don’t want you to solicit me.
I then use cookie autodelete to delete cookies for a site after I close its tabs. This is a bit like telling the browser to block cookies and localstorage completely, but websites that (pretend to) break still keep working.
This isn’t perfect. Youtube still has ways of tracking and remembering me (at least according to the suggested videos) but of course deleting cookies does make it forget that I turn off autoplay. Quite an interesting perspective on their priorities and methods.
The point of things like Consent-O-Matic isn’t to prevent tracking, it’s to prevent dark UI patterns from getting user consent before tracking. The goal is to ensure that companies like Google and Facebook are definitely in violation of the letter of the law, not just doing things that their users don’t understand and would hate if they did, so that information commissioners can collect the evidence that they need to impose the 4% of annual turnover fines that the GDPR permits.
Yeah, this is the way to go — uBlock Origin + EasyList Cookie blocks the obnoxious dialogs, and Cookie Auto-Delete cleans up the mess. Sadly the latter isn’t available on Android, though.
When using uBlock Origin + EasyList Cookie, I am often left with a website with a backdrop and not allowing scrolling. This can be fixed with the inspector tool, but I am wondering if I am missing something.
I had that issue only once, on a site that was completely broken with any ad blocker. I expect the answer is yes, but do you have uBlock’s cosmetic filters enabled?
I was thinking that something like this should exist. Nice to see it implemented, it’s looking pretty good! I’m just a bit confused about this part:
the server does not support per-request decompression / recompression
the server will serve all resources compressed (i.e. Content-Encoding: brotli), regardless of what the browser accepts
Wouldn’t it be possible to store multiple versions of the files and choose according to the Accept-Encoding header? Sending content in an encoding that the UA doesn’t claim to support seems wrong to me.
Wouldn’t it be possible to store multiple versions of the files and choose according to Accept-Encoding header?
Yes, that would be a possibility if such a requirement were really needed.
At the moment the site operator has three choices: serve everything uncompressed, serve everything gzip-compressed, or serve everything brotli-compressed, in all cases regardless of the Accept-Encoding header.

Sending content in an encoding that the UA doesn’t claim to support seems wrong to me.
Any “modern enough browser” (say since 2018, and here I include things like Netsurf) supports even Brotli compression, let alone gzip. Indeed Lynx (the console browser) doesn’t support Brotli, but it still does support gzip.
Thus, for all “real clients” out there, serving straight compressed contents is not an issue, and one could choose to just ignore the Accept-Encoding header.
Yes, and if such a client were among the intended clients for one’s use-case, then one could switch to gzip compression, or even disable compression.
However as said, Kawipiko’s main use-case is serving mostly static web-sites (blogs, products, etc.) that are to be consumed by real browsers (Firefox, Chrome, Safari, etc.), and thus it provides optimizations towards this goal.
It’s still against spec. I don’t see a reason why an archive couldn’t have multiple compression types bundled within it, and why kawipiko couldn’t handle choosing the correct one. FWIW, even the identity encoding isn’t guaranteed to be acceptable.
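For what it’s worth, the behaviour is easy to observe from the outside with curl (example.com standing in for a kawipiko-served site):

# ask for brotli only, and see what Content-Encoding comes back
curl -sI -H 'Accept-Encoding: br' https://example.com/ | grep -i '^content-encoding'
# ask for no encoding at all
curl -sI -H 'Accept-Encoding: identity' https://example.com/ | grep -i '^content-encoding'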
Well, I’ve just looked over the HTTP specification regarding Accept-Encoding (RFC 7231 – HTTP/1.1 Semantics and Content – Section 5.3.4, Accept-Encoding), and from what I see I would say Kawipiko’s behavior (of just sending the compressed version it has) is compliant with the specification:

An Accept-Encoding header field with a combined field-value that is empty implies that the user agent does not want any content-coding in response. If an Accept-Encoding header field is present in a request and none of the available representations for the response have a content-coding that is listed as acceptable, the origin server SHOULD send a response without any content-coding.
The emphasis is on SHOULD which states that a well-behaved server should send the response without compression, but in the end it’s not a hard requirement and the server could just ignore the requested encoding.
Granted, this is by applying the “letter of the law”, thus a well-behaved server should do things differently. However, as stated many other times, Kawipiko tries to be practical in its choices, at least for the use-cases it was meant to be used.
If you’re going to talk about the definition of SHOULD, then it also includes the sentiment that you document a very good reason for breaking the rule.
IMO, “everyone implements brotli right?” is not a sufficient reason at the current time, but “everyone implements gzip right?” does rise to the threshold of SHOULD-breaking. In particular, older webkit (e.g. iPhones) are something you should be concerned about.
You are right, when one chooses to “bend” the rules (as is with this SHOULD), one should also clearly document the reasoning. So I’ll try to summarize here my reasoning for choosing to ignore the Accept-Encoding header:

first of all the website developer has to make this choice! he can instruct kawipiko-archiver to use no compression, use gzip (perhaps with the zopfli compressor), or use brotli; thus the choice of breaking the rules is his; (by default kawipiko-archiver doesn’t use any compression;)
however, according to caniuse.com and Mozilla Developer Network, one could expect that gzip compression works in all browsers, thus the website developer could choose to go with gzip compression without expecting compatibility issues; (see https://caniuse.com/sr_content-encoding-gzip and https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding#browser_compatibility;)
with regard to brotli compression, it’s a tradeoff between storage (and bandwidth) reduction (which sometimes is negligible compared with gzip) and compatibility;
and finally, the recommended way to deploy any serious web-site these days is behind a CDN; this could go two ways for older clients: either they won’t be supported by the CDN itself (say due to TLS compatibility issues), in which case it’s pointless to talk about compression, or the CDN will take care of decompressing in case the client doesn’t support it (see here), in which case it’s again pointless to talk about compression on the origin side; (granted, CloudFlare only supports gzip compression on the origin side, and will just pass brotli on untouched, thus the website developer should choose gzip;)
However, to put things into perspective: how about TLS 1.1? The security experts strongly suggest disabling it, and some even suggest dropping TLS 1.2 in favour of TLS 1.3. Should we keep serving TLS 1.0 just because at some point in time someone had used a browser that doesn’t support at least TLS 1.2?
Just to be clear, I’m not for planned obsolescence; however, I am for giving the user the opportunity to sometimes take the most practical approach and bend some rules.
I don’t like all of that stuff, so I took a note from SirCmpwn and replaced xdg-open in my $PATH with my own script:
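Something along these lines, where the handler choices are just examples rather than the original script:

#!/bin/sh
# minimal xdg-open stand-in: dispatch on the argument instead of the MIME database
case "$1" in
  http://*|https://*)  exec firefox "$1" ;;
  *.pdf)               exec zathura "$1" ;;
  *.png|*.jpg|*.jpeg)  exec feh "$1" ;;
  *)                   exec "${EDITOR:-vi}" "$1" ;;
esac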
I used OpenBSD as my desktop for a few years in college, and had shell scripts for everything… …I can’t believe I never thought to do that rather than just giving in and configuring everything as described in the above post.
I am using this Python script to be able to choose the application instead of using the default one: https://github.com/vincentbernat/i3wm-configuration/blob/master/bin/xdg-app-chooser.
However, knowing how xdg-open works is still useful as many applications will not call xdg-open but rely on a library (like Gio) to do the same work.
I did the same, got tired of the scripts and wrote my own replacement[1] that is configured with a JSON file. Xdg-open is so very complex, it’s always very hard to predict what is going to launch.
On Linux with the GNU libc, /etc/nsswitch.conf is central to how getaddrinfo works. On some configurations, notably when running systemd-resolved, /etc/nsswitch.conf can instruct getaddrinfo to never look at /etc/resolv.conf. It also explains why /etc/hosts is checked first and how multicast DNS is entangled into that.
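For example, on a machine running systemd-resolved, the hosts line typically looks something like this (the exact modules and their order vary by distro):

grep '^hosts:' /etc/nsswitch.conf
# typical output with systemd-resolved:
#   hosts: mymachines resolve [!UNAVAIL=return] files myhostname dns
# "files" is /etc/hosts, "resolve" is systemd-resolved; with [!UNAVAIL=return],
# the classic "dns" path (/etc/resolv.conf) is only consulted if resolve is unavailable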
Yay, I’ve been using this branch for months to get the Wayland scaling and I’ve had zero problems.
I think that Wayland support is so important that they should cut Emacs 29 ASAP just for it, but probably we’ll have to wait a couple more years for a “stable” release.
I am a Wayland user with a boring desktop use case. I like the:
I feel that last point. When using X11, it feels like I have a choice between no compositor (and the lack of features and slightly buggy rendering that entails) and compton (with the very bad performance that entails). Workspace switching with i3 would make every OpenGL window initially look blank before it pops in half a second later when using compton.
Sway, on the other hand, Just Works.
Everything else on that list is important too of course.
Curious: what specific features do you like about the compositors? I’ve personally found them completely useless.
I don’t remember 100% since it’s a long time ago, but IIRC, a compositor was necessary for reducing screen tearing, and parts of the screen would sometimes fail to update properly without a compositor.
Screen tearing is easier to avoid with a compositor (nothing to do) than without. And in my case, true transparency (this helps me to check if a window is focused, but also for eye candy).
X can do all those things too, except maybe the refresh rate thing though, I’m not sure about that. It is a pity that applications are just rewriting in Wayland instead of fixing their bugs on X and maintaining full compatibility.
X11 can’t really avoid screen tearing. There are lots of different hacks which each somewhat reduce tearing in certain situations, but I’ve yet to see an X11 system which doesn’t have at least some screen tearing in at least some circumstances – while I’ve yet to see a Wayland system with screen tearing issues.
Fractional scaling on X11 is a hack which partially works sometimes but doesn’t work perfectly.
We’re long overdue for an X11 replacement. We’ve been adding hacks upon hacks upon hacks for far too long, and the fundamental limitations are really starting to show. It’s not enough to “just fix application bugs on X”. Screen tearing by itself is enough reason to throw X11 in the garbage and start fresh with something that’s actually reasonably well designed.
As far as I understand, X cannot have different fractional scaling factors for different monitors, while Wayland can. It’s the main motivation for me to use Wayland, given I have a 1440p 25″ and a 2160p 27″.
I was always curious about fractional scaling. I thought that Wayland didn’t handle it (see https://gitlab.freedesktop.org/wayland/wayland-protocols/-/issues/47). From my understanding, you are rendering at 2x, then downscaling. If you happen to have two monitors, this can be a lot of pixels to render.
From my understanding, you are rendering at 2x, then downscaling.
This is how macOS does it, IIRC.
I think there are non-mainline GDK protocols for it. At least all the non-XWayland applications I use look perfectly crisp
For me personally, I use Wayland because it’s the only thing supported on my hardware (MNT Reform).
The only thing I use it for is to run XWayland so I can immediately launch Emacs and exwm, and then I run Firefox inside that.
At first glance it appears slower than using Wayland directly, but that’s only before you factor in the time you have to spend retrieving your laptop because you threw it out the window in rage because Firefox’s keybindings (when running outside exwm) are so bad and holding down C-n opened seventeen windows again instead of scrolling down.
Probably, but if you run Firefox outside exwm there’s no way to fix the key bindings. The Firefox extension mechanism bans rebinding C-n and C-p for “security reasons”, making it completely unusable for me.
Supposedly it does https://twitter.com/omgubuntu/status/1379818974532280321?s=20, but I guess exwm doesn’t.
No, the problem is that Firefox doesn’t allow the user to change some keybindings, like C-n. EXWM is the solution to that problem.
For a while now already. Gotta define the MOZ_ENABLE_WAYLAND=1 env variable and it will start in Wayland mode. Have been doing this for 2(?) years now on Sway without issue. (Maybe the env thing is a thing of the past…)
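For reference, the incantation looks something like this (where and how you set the variable persistently depends on your setup):

# one-off launch in native Wayland mode
MOZ_ENABLE_WAYLAND=1 firefox
# or export it from your shell profile / compositor environment so it always applies
export MOZ_ENABLE_WAYLAND=1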
I’m surprised by your question. My (perhaps naive?) understanding is that the Xorg maintainers decided years ago that they don’t like the X protocol anymore for modern graphics stacks, that the stuff that was added on top of it is painful to design/implement/maintain, and that they wanted to start with something fresh with a more solid design. Wayland, as I understand it, is this “something fresh with a more solid design” (but its own protocols are still evolving and probably imperfect, as everything is).
With this understanding, if I have the choice to use both (that is, I’m not depending on worklows that are not supported on Wayland for fundamental reason (no support for easy forwarding) or due to lack of maturity), it seems natural to use Wayland. I want to use the new stuff that is the future and benefits from a growing userbase to mature (and benefit more users that currently have fewer options), and not the old stuff that keeps working but people dread maintaining.
So: my use case is “moving to the new software stack that was designed from the start to be a better, more robust successor than the old software stack”.
From my PoV Wayland is still in “second system redesign” territory like the early days of PulseAudio. Some people find it useful, some people like to help test the bleeding edge, but it’s not the default in anything I set up and since I’ve never had any issues with X I don’t currently go through the extra work to set it up. The only time I’ve worked with Wayland was to help my mom switch her computer back to X because some app was misbehaving under her Wayland setup.
But if it’s working for you, that’s great of course. Just providing my PoV so hopefully my question is less surprising now.
There is always a bias towards “not fixing what ain’t broken”, so once you know a tool you’re likely to stick with it.
That being said I don’t think wayland represents enough progress to consider it a successor to X, it’s a competitor for sure but I think by the time I move off X it won’t be to wayland.
With most Linux distros gradually switching to Wayland and Windows adopting it as well for WSL, I think it’s fairly certain that Wayland is going to (mostly) replace X in the next 3-5 years. I doubt some other alternative will emerge before the transition to Wayland is complete.
Desktops built on Wayland are amazing compared to desktops built on X11. I couldn’t go back anymore.
That paragraph seems like the author is justifying to themselves why they want to play with k8s. I think they’re super weak reasons, could delete the entire paragraph and improve the post.
E.g., I’m not sure I’d claim Mesos hasn’t gained critical mass, given it underpins Netflix’s entire stack (Titus depends on Mesos & Zookeeper). Nomad has equivalently large deployments (Cloudflare, Roblox) and, as you point out, is free (with paid-for extra features).
Author here, thanks for the feedback. I meant Nomad is not free because some features are behind an “Enterprise” paywall.
Isn’t that also technically true of k8s? In the sense that cloud providers (google, amazon) have a special sauce that they don’t share with mere mortals?
You don’t bump into “Error - Enterprise only feature” messages when working with K8s. I’m sure Amazon and Google have their own tools for working with it, but their use case is very specific.
It was voted to the attic in April: https://lists.apache.org/thread/ysvw7bb1rd8p88fk32okkzr75mscdjo8
I think you’re a little overly strict in your definition of “single source of truth” - if you have a “system” instead of “one piece of technology” then I don’t see a problem saying that that is the source of truth, but yes, you have to make sure everything is correct behind the “public api”. Maybe a bad example, but going further you could say “oh because of the normalization in the database table X is not the single source of truth anymore because I have to use JOIN”.
That said, we all know how encapsulation and access works in the long term so I can 100% understand the way you see and described it :P
It is mostly about the different workflows. One piece in Netbox, where changes are applied immediately and not versioned. One piece in Git, where changes can be reviewed and versioned. If you modify only one, no problem, but if you need to make a change spanning both of them, you run into trouble.
I hope they look at torrents. People like myself are dying to find some socially-valuable way of using symmetric gig home fiber connections and abundant storage/compute in the homelab. I’ve tried hosting Linux ISOs but my sharing ratio never goes above 1. Torrents could be a first-line cache before hitting the S3 bucket or whatever else, and I think they have an extremely cool intersection with the idea of reproducible builds itself. Heck, you could configure your PC to seed the packages you’ve installed, which would have a nice feedback loop between package popularity and availability!
Hmmm, looking at the cost breakdown they link, they use Fastly as a CDN/cache so most of their S3 costs are for storage, not transfer. They cite 30 TB of transfer a month out of the storage pool, vs 1500 TB of transfer per month served by the CDN. Looks like Fastly gives them the service for free (they estimate it would be €50k/month if they had to pay for it), so their bottleneck is authoritative storage.
Backblaze would be cheaper for storage, but it’s still an improvement of like 50%, not 500%.
Okay maybe this is getting a bit too architectural-astronaut, but why do you need authoritative storage for a reproducible build cache? If no peers are found then it’s your responsibility to build the package locally and start seeding it. Or there could even be feedback loops where users can request builds of a package that doesn’t have any seeders and other people who have set up some daemon on their server will have it pull down the build files, build it, and start seeding it. The last word in reproducible builds: make them so reproducible you can build & seed a torrent without downloading it from anybody else first!
This thread isn’t about a reproducible build cache. It includes things like older source dists which aren’t on upstream anymore, in which case the cache isn’t reproducible anymore
Letting random people populate the cache seems highly insecure. It would be easy for a malicious actor to serve a different build.
They could host an authoritative set of hashes
This only works for content-addressed derivations, which are a tiny minority. The rest of them are hashed on inputs, meaning that the output can change and the hash won’t, so what you propose wouldn’t work at the moment, not until CA derivations become the norm (and even then, it still wouldn’t work for old derivations).
You can’t corrupt torrents, they’re addressed by their own file hash; see https://en.wikipedia.org/wiki/Magnet_URI_scheme
It’s not corruption of the file in the torrent that’s the problem, but swapping in a malicious build output while the nix hash would stay the same. This is possible in the current build model (see my comment above), and what we rely on is trust in the cache’s signing key, which cannot be distributed.
There are projects like trustix which try to address this, but they’re dormant as far as I can tell.
I wouldn’t even bother trying to build NixOS in a CI/CD context with torrent as a primary storage backend.
Why not? Would it be fine with you if http mirrors were still available?
Maybe because there would be increased latency for every single file/archive accessed? The idea of a community-provided mesh cache is appealing though, if the latency issue is mitigated.
Then I’d use only http, changing nothing for the project in terms of costs.
CI/CD (and NixOS) should be reproducible, but having a requirement on torrents throws that out of the window and makes it unpredictable. Yes, it will probably be fine most of the time, and sometimes be faster than everybody using the same HTTP cache.
But also, firewalling the CI/CD pipeline using torrents? That’s hard enough with CDNs…
There are different topologies for torrents. BitTorrent became popular for illegal file sharing (in spite of being a terrible design for that) and a lot of the bad reputation comes from that. In this model, all uploaders are ephemeral users, typically on residential connections. This introduces a load of failure modes. All seeders may go away before you finish downloading. One block may be present on only a slow seed that bottlenecks the entire network (Swarmcast was a much better design for avoiding this failure mode). Some seeders may be malicious and give you blocks slowly or corrupted.
In contrast, this kind of use (which, as I understand it, was the use case for which the protocol was originally designed) assumes at least one ‘official’ seed on a decent high-speed link. In the worst case, you download from that seed and (modulo some small differences in protocol overhead), you’re in no worse a situation than if you were fetching over HTTP. At the same time, if a seed is available then you can reduce the load on the main server by fetching from there instead.
For use in CI, you have exactly the same problems as any dependency where you don’t have a local cache (Ubuntu’s apt servers were down for a day a couple of months back and that really sucked for CI because they don’t appear to have automatic fail over and so the only way to get CI jobs that did apt updates to pass was to add a line to the scripts that patched the sources file with a different mirror). At the same time, it makes it trivial to provide a local cache: that’s the default behaviour for a BitTorrent client.