Left Reddit and built a viewer for the archives created by pushshift. It’s generated by a small Rust program that spits out a lightweight static site with optional JS for searching.
Chewing through 5GB of JSON as fast as possible was a fun challenge, and integrating client-side-only search for this amount of data worked better than anticipated.
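The core loop is nothing fancy. Something like this (a sketch, assuming the dump has already been decompressed to newline-delimited JSON, one object per line, and that serde/serde_json are available; field names trimmed down for the example):

use std::fs::File;
use std::io::{BufRead, BufReader};

#[derive(serde::Deserialize)]
struct Post {
    id: String,
    title: String,
    selftext: String,
}

fn main() -> std::io::Result<()> {
    // Pushshift dumps are (as far as I know) one JSON object per line,
    // so we can stream them instead of loading 5GB into memory.
    let reader = BufReader::new(File::open("submissions.ndjson")?);
    for line in reader.lines() {
        // serde ignores fields we didn't declare, which keeps this small
        let post: Post = serde_json::from_str(&line?).expect("malformed line");
        // ...render `post` into a static HTML page here...
    }
    Ok(())
}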
It’s the AskHistorians Archive. I’ve managed to integrate pagefind, it works very well. The whole site is static which is kind of a challenge with ~600k posts but pagefind is pretty fast!
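For anyone wondering what the integration looks like: it’s tiny. Roughly this, assuming the generated site lives in ./public and the stock Pagefind UI bundle (the CLI flag is --site on current releases, --source on older ones):

# after the generator has written ./public, build the search index once
npx pagefind --site public

<!-- then, on the search page -->
<link href="/pagefind/pagefind-ui.css" rel="stylesheet">
<script src="/pagefind/pagefind-ui.js"></script>
<div id="search"></div>
<script>
  window.addEventListener("DOMContentLoaded", () => {
    new PagefindUI({ element: "#search" });
  });
</script>

Pagefind splits its index into chunks that the browser fetches on demand, which is presumably why it stays fast even at ~600k posts.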
keep mytopic a secret and only give it to authorized publishers
Even if you do that, can’t any middle-box or intermediating agent see mytopic as part of a request URL?
Ultimately I think this is fine! It just means that ntfy.sh is basically a demo server, that topic names underneath that domain provide no form of access control or delivery guarantees, and that any actual user will need to self-host with appropriate security measures. Which is, I guess, important to make clear in the docs. Specifically, there is no way for users to “choose a unique topic name” in a way that “keeps messages private”.
i think any webhook works the same way 😅, as do many cloud file providers that have things like “Anyone with this link” on things like google docs or dropbox… invitations to chats on systems like whatsapp for anyone with the link (or QR code)…
it really all depends on what you do with the URL, and the administrative practices of the people running the site that utilizes this method of security
as long as you don’t misuse it, and it’s using https, and the people running the site do it knowing this is how the security works, it is absolutely secure… and, as long as everyone is aware, as secure as using an unencrypted API key…
The consequence of an unauthorized POST to a Discord webhook is an unauthorized message in, e.g., a Discord channel. No downstream consumer would assume that an arbitrary message, whether submitted via webhook or otherwise, is actionable without additional authn/authz. So I don’t think this, or any other kind of webhook, is directly comparable. I could be wrong! If ntfy.sh topic notifications are understood by consumers to be un-trusted, then no problem, and mea culpa! But that’s not what I took away from the docs.
You seem to be dead set on finding a fatal flaw in ntfy, with quite the dedication. :-) I disagree with your assessment that the security of an API key and a secret URL are fundamentally different. And with that fundamental disagreement, our argument comes to an end.
On the wire, an HTTP request with an API key looks like this:
POST /api HTTP/1.1
Authorization: Bearer this-is-a-secret
some message
A request against the ntfy API looks like this (excluding the JSON endpoint, which is more like the above):
POST /this-is-a-secret HTTP/1.1
some message
The only difference is that the secret is in a different spot in the HTTP request.
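Or, the same two requests as curl invocations (the /api endpoint is made up for the example):

curl -H "Authorization: Bearer this-is-a-secret" -d "some message" https://example.com/api
curl -d "some message" https://ntfy.sh/this-is-a-secret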
You made an argument that you cannot rely on TLS: That is completely flawed, because if you cannot trust TLS, then your header-based auth also falls apart.
You also made an argument saying that you cannot rely on people making HTTPS requests. That also applies to the traditional Bearer/Basic/whatever auth.
IMHO, the only valid argument to be made is the one that the HTTP path is cached and prominently displayed by browsers. That’s correct. That makes it less secure.
ntfy is a tool usually used for casual notifications such as “backup done” or “user xyz logged in”. It is geared towards simplicity: simple simple simple. It doesn’t do end-to-end encryption, and the examples are (partially at least) suggesting the use of HTTP over HTTPS (for curl). So yes, it’s not a fort-knox type tool. It’s a handy tool that makes notifying super simple, and if used right, is just as secure as you’d like. But yes, it can also be used in a way that is less secure and that’s okay (for me, and for many users).
I really didn’t want to get into such a (what it feels like) heated discussion. I just wanted to show off a cool thing I did …
Technically, I agree with you that secret links and API keys are the same. I also agree that secret links are a simple, adequate solution for a simple service like ntfy.
When reasoning about the security of secret links, I’d encourage you to also think about the practicalities of how people tend to use links: It’s extremely easy to share them and people see them more as public information.
This can be seen in the behavior of some tools that automatically upload and store them elsewhere without encryption, e.g. browser history sync.
IIRC this also led to leaked password reset links when Outlook automatically scanned users’ emails for links and added them to the Bing index.
The consequence of an unauthorized POST to a Discord webhook is an unauthorized message in, e.g., a Discord channel.
Which can be catastrophic. I’ve heard many stories of crypto scams that were fueled by a hacked “official” project Discord account sending out a scam phishing link or promoting a pump-and-dump scheme.
Sure. But who cares? Then I abandon my channel and switch to a new one. They can’t find it, because I’m using https and they can’t MITM anything useful.
If your use case allows you to abandon one topic and switch to a new topic on an ad-hoc basis, that’s great, but it’s not something that most applications are able to do, or really even reliably detect. This is all fine! It just means that any domain that provides unauthenticated write access to topics is necessarily offering a relatively weak form of access control, and can’t be assumed to be trusted by consumer applications. No problem, as long as it’s documented.
I should really write this up as a FAQ, because it comes up so much. :-) First off, thanks for the lively discussion on ntfy. I love reading feedback on it. Much appreciated.
The original premise of ntfy is that topics are your secret, so if you pick a dumb secret, you cannot expect it to be or remain private. So ntfy.sh/mytopic is obviously just a demo. Most people in real life pick a unique-ish ID, something like a UUID or another password-like string. Assuming your transport is encrypted, this is no more or less secure than using an Authorization header with a bearer token (other than the notable difference that it’s in the server logs and such).
If you want your topics to be password-protected in the traditional sense (username/password or a token), you can do so by using the fine-grained access control features (assuming a self-hosted instance), or by reserving a topic (screenshots). This can also be done on the official instance, assuming you pay for a plan.
Most people in real life pick a unique-ish ID, something like a UUID or another password-like string. Assuming your transport is encrypted, this is no more or less secure than using an Authorization header with a bearer token (other than the notable difference that it’s in the server logs and such).
It simply is not true that a URL containing a “unique-ish ID” like a UUID is “no more or less secure” than using an authorization header, or any other form of client auth. URLs are not secrets! Even if you ensure they’re only requested over HTTPS – which you can’t actually do, as you can’t prevent clients from making plain HTTP requests – it’s practically impossible to ensure that HTTPS termination occurs within your domain of control – see e.g. Cloudflare – and in any case absolutely impossible to ensure that middleboxes won’t transform those requests – see e.g. Iran. There are use cases that leverage unique URLs, sure, like, login resets or whatever, but they’re always time-bounded.
If you want your topics to be password-protected in the traditional sense (username/password or a token), you can do so by using the fine-grained access control features (assuming a self-hosted instance), or by reserving a topic (screenshots). This can also be done on the official instance, assuming you pay for a plan.
If you pay to “reserve” a topic foo, does that mean that clients can only send notifications to ntfy.sh/foo with specific auth credentials? If so, all good! 👍
, as you can’t prevent clients from making plain HTTP requests
Well, that’s the client’s fault? The client leaking their secrets is just as possible with an authorization header.
it’s practically impossible to ensure that HTTPS termination occurs within your domain of control
It’s trivial to do this. I don’t understand and I don’t see how an authorization header is different.
but they’re always time-bounded.
No they aren’t. Unique URLs are used all the time. Like every time you click “Share” on a document in Paper/Drive and it gives you some really long url.
We’re discussing “Capability URLs” as defined by this W3C doc which says that
The use of capability URLs should not be the default choice in the design of a web application because they are only secure in tightly controlled circumstances. However, in section 3. Reasons to Use Capability URLs we outlined three situations in which capability URLs are useful:
To avoid the need for users to log in to perform an action.
To make it easy for those with whom you share URLs to share them with others.
To avoid authentication overheads in APIs.
and further dictates (among other constraints) that
Sure, it’s not the most massively secure thing in the world, but anyone using this service can be confident their client isn’t making plain HTTP requests else they’d pick something normal. I don’t know why my HTTPS termination would be at CloudFlare unless I’d set it up (or ntfy started using it), and even if it were of all people I trust CloudFlare to not-spam me the most. It’s not that big a deal.
To clarify, an application developer using this service, being the type of developer to use a service like this, would be able to feel confident an application request to this web service is via HTTPS.
Hi everyone, my first post here. Nice to meet you all and thanks to @ocramz for inviting me.
This project has been in the making for a long time. It includes tooling and infrastructure to help developers write high-level tests for complex software workflows that are not easy to unit test. I wanted to take ideas from visual regression testing, snapshot testing, and property-based testing and build a general-purpose regression testing system that developers can use to find the unintended side-effects of their day-to-day code changes during the development stage. I wrote the first line of code in 2018 and left my job 2 years ago to work on it full-time (i.e. all the time). I am posting it here because I want to hear your unvarnished thoughts and feedback about its latest v2.0 release, a milestone version that hopes to be useful to small and large teams alike. This version comes with:
An easy-to-self-host server that stores test results for new versions of your software workflows, automatically compares them against a previous baseline version, and reports any differences in behavior or performance.
A CLI that enables snapshot testing without using snapshot files. It lets you capture the actual output of your software and remotely compare it against a previous version without having to write code or to locally store the previous output.
4 SDKs in Python, C++, Java, JavaScript that let you write high-level tests to capture values of variables and runtime of functions for different test cases and submit them to the Touca server.
Test runner and GitHub Actions plugins that help you continuously run your tests as part of CI and find breaking changes before merging PRs.
I would really appreciate your honest feedback, positive or negative, about Touca. Do you find this useful? Would love to hear your thoughts and happy to answer any questions.
I haven’t really felt the pain of making snapshot tests scale (currently working with only about 100 snapshots), and the Website / Docs don’t really make it clear to me what problems Touca solves that I might face in the future.
I’m definitely interested in a UI for visual regression testing as I’ve struggled to find good tooling for that, but looking at the screenshots it doesn’t seem like a big focus for Touca.
Thanks for sharing your thoughts. I’m sorry that you didn’t find the docs clear enough. I’ve tried to briefly explain the differences between Touca and snapshot testing here: https://touca.io/docs/guides/vs-snapshot/. As outlined in that document, one primary difference is that Touca removes the need for storing snapshot files in version control and moves the storage, comparison, visualization, and reporting of test results to a remote server. In this new model, instead of git committing snapshot files with differences, you’d use the server to promote a new version as baseline.
You are right that visual regression testing of web UI is not a focus of this project. I believe there are many good solutions in the market for web apps. We focus on software that may not have a web interface, like API endpoints, data pipelines, machine learning algorithms, command-line tools. We want to make it easy to test these types of software with various inputs without writing extensive unit tests and integration tests.
I found that page and got the difference, but I really like version control and its benefits. Focusing a little bit on the benefits a centralized server brings in comparison to the usual approach would be the interesting part for me :)
When visiting https://touca.io/ as someone who already understands snapshot testing, it was difficult to find how Touca was different from normal snapshot testing. The only obvious difference was that it has a web app interface instead of my local Git diff interface, but that on its own doesn’t sound like a desirable feature for an app that already has a CI pipeline. I think your pitch about “removes the need for storing snapshot files in version control” should be more visible. I still have no need for the feature – I struggle to imagine a case where snapshot files in nested folders would not be easy enough to manage – but at least that information would have made it clearer that the software is targeting problems I don’t have, so I don’t need to read more of the site.
On the home page, one heading is relevant to that question, “Snapshot testing without snapshot files”. However, the rest of that block doesn’t elaborate on how snapshot testing could possibly work without snapshot files or clarify whether “snapshot files” are the generated snapshots or source code files with snapshot tests. The next sentence, “Remotely compare the your software output against a previous baseline version”, sounds like it could equally apply to traditional snapshot testing. I think the word “remotely” in that sentence was also meant to imply “without snapshot files”, but I just interpreted it as “diff snapshot files on our remote server instead of locally”. (Also, there’s a typo: “the your” should be “the”.) The final part of that block, “brew install touca”, is not at all connected to the previous two sentences and seems out of place. Without reorganizing the whole page, that brew command might seem more relevant if a “Try it:” label were before it.
After I first was confused by the home page, when I clicked Docs, I just saw another summary of snapshot testing in general followed by a bunch of links, none of which sounded like they would explain how Touca is different.
I saw in the sidebar that I was in the Getting Started section. When I skimmed the next page, Concepts, it looked like just another explanation of snapshot tests in general.
Okay, so I went to the next page, Writing Tests. Great, examples of how to actually use this. Except… after reading the whole page it was hard to understand the expected input and output of each test. There is “alice”, “bob”, and “charlie”, but is the input literally those strings? How can that be when the assertions mention student.gpa? And where is the expected output saved – why wasn’t it mentioned on this page? If it’s not saved to a file, it must be saved to a server, but I had trouble imagining at this point why that would be better. At that point I gave up on understanding Touca. Only later did I come back to these comments and see your link.
I think the Writing Tests page is too abstract. The JavaScript code sample calls code_under_test.find_student(username), but that’s not defined anywhere, so I struggled to imagine how it’s connected to the rest of the code. Maybe include a short stub definition in your examples, like this?
async function find_student(username) { // the code under test
  if (username === "alice") {
    return {name: "Alice", gpa: 4.0}
  } else if (username === "bob") {
    // etc.
  }
}
And the next line, touca.check("gpa", student.gpa), didn’t give any hint as to what expected value Touca would compare the GPA against. Maybe add a comment to that line: // compare student.gpa to Touca’s saved 'gpa' value in the scope of this testcase. That comment may be inaccurate; you can correct it.
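Putting both suggestions together, here is roughly the self-contained example I was hoping the page would show (a sketch based on my reading of the docs, using the Node SDK; the exact API names may be off):

import { touca } from "@touca/node";

// stub of the code under test, as suggested above
async function find_student(username) {
  if (username === "alice") return { name: "Alice", gpa: 4.0 };
  if (username === "bob") return { name: "Bob", gpa: 3.2 };
  return { name: "Charlie", gpa: 2.8 };
}

touca.workflow("students", async (username) => {
  const student = await find_student(username);
  // submit student.gpa as "gpa"; the Touca server compares it against
  // whatever the baseline version submitted for this same testcase
  touca.check("gpa", student.gpa);
});

touca.run();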
Hi @roryokane, Thank you so much for this thorough feedback. I really appreciate you taking the time. I am going to read it again and again tomorrow and apply your suggestions both to the landing page and to the docs.
I think your pitch about “removes the need for storing snapshot files in version control” should be more visible.
Agreed. Will try to clarify.
I struggle to imagine a case where snapshot files in nested folders would not be easy enough to manage
You mentioned you are familiar with snapshot testing. Could you share more about your typical use-case?
I think the Writing Tests page is too abstract.
Based on your suggestion about this page and the “Concepts” page, I’m tempted to just rewrite the entire “Getting Started” section.
This is exactly the type of feedback that I was looking for. Thank you!
I’m really desperate for a tool to preserve these websites in an “open web” way. Of course, archive.org exists, but if (when?) their datacenter catches fire or similar, everything on it might be lost as well.
I think solutions like archivebox handle the archiving part well, but there’s no clear story on how to easily host archived sites and make them discoverable.
Of course, archive.org exists, but if (when?) their datacenter catches fire or similar, everything on it might be lost as well.
Maybe it’s a good idea to donate, then, so that doesn’t happen.
However I agree that decentralizing these things is a good idea. I know archive.org had some browser extension or something at some point to help with indexing things that crawlers have a hard time reaching. Maybe it would be worthwhile to build off of that so both benefit?
I want to move to a world where an entire web site, as of a particular moment in time, exists as a snapshot in a distributed content-addressed storage system and your browser can be readily directed to download the entire thing
this would of course necessarily entail having fewer features that depend on server interaction, but I think uh… most sites should not be apps, heh
I’m aware that this is sort-of throwing a technical solution at a social problem, but I think in this case the technology could dovetail well with a cultural change where site owners want to do something about preservation - it would give an easy, immediately actionable thing that people who care can do that makes a real difference
Have you looked into IPFS?
I have, yes! I think IPFS is a very solid architecture, should definitely be the basis of anything like this, and probably solves about 90% of the problem. Of the part that’s left, most of it is documentation that explains what people might want to do, why, and how, and the smallest part is any small glue code that’s needed to make that easy.
One idea I had was for an appliance thing that could bring static IPFS blog / site publishing to the masses. Something like:
An SBC (RPi, Rock64, whatevs) running a Free OS.
Some sort of file share on it that was mDNS discoverable.
Each of these appliances has a (changeable, but default) unique IPNS identifier, with a QR code sticker on it that you can scan and share however you want (social media, IRL, as text, as an image, etc.)
Then you just write your content, copy it to the box (Samba? SFTP? …?), it generates a static site, you eyeball it, then hit ‘publish’ when you’re happy.
Aim would be for it to be simple enough for non-techies to use. There is a lot of devil in that detail, though. Some things I was spiking:
How to trigger the static generation? Samba is very bad at knowing when a file operation is “done”.
How to keep the thing updated and secure? I looked into Ubuntu’s IoT infra but there’s an entire herd of yaks to shave there.
How to support Windows? It still doesn’t do mDNS well, last I looked.
I’m all for this. This is very similar to what I’ve been thinking about. I would personally choose sftp over Samba because managing ssh credentials is a skill that I think is very empowering and worth teaching, and because I never like tying my future to the whims of a megacorp, but that does incur an additional burden for documentation, since most people won’t know how to use it.
Your point 1 brings up another possibility though, which is using git-over-ssh. Then the generation can be kicked off by an on-push trigger in git.
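A sketch of what that trigger could look like (assuming a bare repo on the appliance and the kubo/go-ipfs CLI; site-generator stands in for whatever tool the box actually ships, and none of this is tested):

#!/bin/sh
# hooks/post-receive in the bare repo on the appliance
set -e
WORKTREE=/srv/site/src
OUTPUT=/srv/site/public

# check out whatever was just pushed
git --work-tree="$WORKTREE" --git-dir="$PWD" checkout -f

# regenerate the static site (hypothetical generator command)
site-generator build --in "$WORKTREE" --out "$OUTPUT"

# add the result to IPFS and republish the appliance's IPNS name
CID=$(ipfs add -r -Q "$OUTPUT")
ipfs name publish "$CID"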
With regard to your point 2 I personally lean very heavily towards NixOS as it’s good at this sort of thing, but teaching people how to manage appliances like this is a big writing task. I’m not a technical writer, and I’m not really the right person to take that on, although I’m always happy to chat with anyone who does.
Windows support does seem quite challenging, I don’t have good answers there.
Yeaaaahhhhhhhhhhh … I’m kinda reluctant to have to expose non-techies to Git. It’d be perfect for a coding-savvy market though.
but teaching people how to manage appliances like this is a big writing task.
I was thinking of something that wouldn’t have to be managed … updates would “just happen”. That turns out to be surprisingly difficult (c.f. herd of yaks).
It’s surprising to me that releasing an open-source appliance like this would still be a lot of work, but honestly, it really does seem like it would.
I’ve even started to build an archival app on top of it, but there are many thorny problems. How do you ensure the authenticity of archives published by other people? Where and how do you index archived content across the network? How do you get other people to re-host already archived content? How do you even get enough people interested in this to make it useful at all?
I’m definitely interested in this as well, I’ve started to believe that personal archives of sites/articles are the most resilient way to preserve information.
I’ll be honest, this doesn’t look very useful. This is a tutorial of how to invoke Nix, not how to use it. The second class of target people, “who have tried to cross the chasm to using it in their daily workflows but haven’t gotten there yet”, aren’t stuck because nix-build and nix-shell “present significant cognitive overhead”* compared to nix build and nix shell. They’re stuck because they don’t know how to package things. My daily workflow does not involve rebuilding bat. My daily workflow involves building my project, and to do that with Nix, I need documentation that explains how to make Nix do that. This doesn’t provide that outside of links to existing documentation, and it’s that documentation that is, imho, most in need of attention. This just doesn’t feel like a good use of documentation-directed effort to me.
*As an aside, this claim makes no sense to me whatsoever. Learning a dozen commands isn’t any harder than learning a dozen verbs on one command.
As a person interested in nix, I find the structure and writing style of Zero to Nix far more useful than any other nix documentation I’ve read so far. Especially the opinionated style is useful as an orientation guide in the vast sea of things to learn and try out.
I’m glad it’s useful to you, then. Hopefully you’re able to get a foothold of understanding that carries you through the older strata of documentation.
Completely agree. I applaud the effort at making Nix more approachable, but Nix is still way too obscure and things that are simple to do with other package managers are still too difficult to do with Nix.
I decided to peek into Nix again after seeing this guide (I’ve tried and given up on multiple occasions in the past). I wanted to try making a C++ development environment to build a project. It’s a shame that creating a “flake.nix” file to do something even that simple is so complicated that you have to resort to using a template from a third-party. Writing the file by hand, from scratch, is basically out of the question for a beginner.
But ok, I’ll use the template. Now it’s time to add some packages. I use the “nix search” tool, which shows me a large list of package names, but doesn’t tell me anything about what’s in these packages. For example, what can I find in “clang-tools_11”? Is there a way to list the files that package includes? What about “clang11Stdenv”? That looks like it could be useful, but again, there’s no way (that I can see) to view what makes up that package or what it actually provides.
In contrast, a package manager like pacman can list all of the files that a package will install. Even Homebrew will tell me what a package’s dependencies are, and will give me a hyperlink to the “formula” for a recipe, so I can see exactly how the package is defined. Is any of this possible with Nix? If it is, that’s the kind of stuff that is useful to know for using Nix as a package manager. Not rebuilding bat.
ls -l $(nix eval -f /etc/nixos/apps --raw xerox6000-6010.outPath)
What does this mean? How is anyone who hasn’t already invested dozens of hours into Nix supposed to understand this, let alone figure this out on their own?
In the end, I think this guide is like so much other Nix documentation. It provides surface level, trivial examples to give the illusion that “Nix is easy”, but leaves you completely ill-equipped for doing anything useful.
Sorry for the rant, but the hype around Nix is never ending, and I often feel like I’m being gaslit because every time I check it out I end up feeling frustrated and extremely unproductive. This doesn’t seem to match the experience of all of the ardent Nix fans out there, so I’m left feeling confused about what I’m doing wrong.
I often feel like I’m being gaslit because every time I check it out I end up feeling frustrated and extremely unproductive. This doesn’t seem to match the experience of all of the ardent Nix fans out there
I hear you. As an ardent Nix fan, I have similar experiences too. Sometimes I’ll be trying to get something packaged, and the program or its language ecosystem isn’t easily “tamed” by Nix, and I get so frustrated. And that is from someone with quite a lot of experience and background in build systems, isolation, Nix itself, and lots of experience figuring it out. Days like that I feel like I lost, and sometimes get pretty down about it. A couple years ago I “gave up” and even installed another Linux distro. (This didn’t last for more than a day…)
I hope one day Nix is as easy to use as anything else. Or even easier. It empowers me to do so much without fear, and I’m addicted to that power.
My perspective is that to do this we need to:
Produce learning material to get the interested-and-willing folks over the initial hump. Hopefully zero-to-nix helps get people there.
Expand the user base to include people in the various language ecosystems, to improve support across these ecosystems. This is a long game, but Nix is nearly 20 years in by now and we’re seeing a lot of progress.
Lean into the work that software security / provenance people have been pushing for like SBOMs. Nix gets this stuff really right, so moving this forward makes Nix more of an obvious solution.
What I don’t want to do is beat my head against the wall every time I want to try some new software. I admit that if it takes me more than an hour, I’ll sometimes boot a VM and try it out in another distro. That’s okay by me. By my count, more Nix in the world is good for everyone, and when it doesn’t serve me that is okay too.
As an aside, this line: ls -l $(nix eval -f /etc/nixos/apps --raw xerox6000-6010.outPath) seems a bit weird. However, the idea of listing what files a package will install is a bit … different in Nix, because “installing” is … a bit different. We’re going to be publishing either a z2n page, or a blog post about that soon.
For searching, at least, I’ve always used search.nixos.org rather than the CLI tool. The search results usually have a link to the package definition, though often the package source isn’t legible to someone who isn’t an advanced user. clang-tools_11 is defined here, if anything in there is helpful.
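For the “what files does this package install” question specifically, the closest thing to pacman -Ql that I know of is to realise the package and look inside its store path (new-style nix CLI with flakes enabled; from memory, so details may be slightly off):

# build (or substitute) the package; leaves a ./result symlink to its store path
nix build nixpkgs#clang-tools_11
ls -R result/

# list its runtime closure, i.e. every store path it depends on
nix path-info -r nixpkgs#clang-tools_11

# open the Nix expression that defines the package in $EDITOR
nix edit nixpkgs#clang-tools_11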
Personally, I run NixOS on some home servers and I find the declarative system configuration aspect to be incredibly helpful. But when it comes to working on individual projects, I mostly develop them in their own respective tooling, sometimes with a nix-shell or nix develop config to get the dependencies I want installed, and I only figure out how to make them buildable with Nix later on.
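And for the C++ development environment mentioned upthread, the hand-written flake doesn’t have to be much bigger than this (a minimal sketch, pinned to one system to keep it readable; nix develop then drops you into a shell with those tools):

{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.cmake pkgs.gdb pkgs.clang-tools_11 ];
      };
    };
}

The multi-system boilerplate from the guide only matters if you want the same flake to work on macOS and ARM too.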
I’m definitely the target audience for this, and having just gone through the Quick Start I find myself agreeing with you. I was optimistic at first, as what’s there is presented clearly, but as I reached the last page I realised I don’t feel any more informed than I was in the first place and all I’ve done is run someone else’s ‘flakes’ without really understanding what’s happening (I understand broadly what is happening with each command of course, but not in any kind of sense that I could actually reproduce it myself). None of this makes Nix ‘click’ for me unfortunately. It’s a decent start, but as you said it’s just not all that helpful in its current state. It needs to provide structure and order to the learning process. You can’t just gloss over the Nix language… when you introduce me to something like this, I want to read and understand it, so that I might be able to write it myself:
But nothing I’ve read here has given me any kind of guidance on that. It’s like I’m supposed to just trust that it works. What do I take away from that? A good guide will engage my curiosity and encourage me to look deeper at what’s being presented. Shouldn’t the examples be as trimmed down as possible? Is this part really necessary?
# Systems supported
allSystems = [
  "x86_64-linux"   # 64-bit Intel/AMD Linux
  "aarch64-linux"  # 64-bit ARM Linux
  "x86_64-darwin"  # 64-bit Intel macOS
  "aarch64-darwin" # 64-bit Apple Silicon macOS
];

# Helper to provide system-specific attributes
nameValuePair = name: value: { inherit name value; };
genAttrs = names: f: builtins.listToAttrs (map (n: nameValuePair n (f n)) names);
forAllSystems = f: genAttrs allSystems (system: f {
  pkgs = import nixpkgs { inherit system; };
});
I kind of understand what’s happening here, but that’s quite off-putting if you’re trying to convince me that Nix is easy and approachable. Am I supposed to ignore this? It only makes me wonder why I would bother. I’m sure you can do things in a far simpler manner, so why this complexity? Perhaps there’s a little too much emphasis on correctness before you’ve reached the natural point of explaining why it is necessary. A nice start, and I enjoyed going through it, but it needs much more work to live up to the promise of its title and ultimately I’m disappointed.
Thank you for working on this, and I hope you continue. Maybe I’ll be able to come back later and have a better experience.
Thanks for this feedback. One of the things in discussion around Flakes is system specificity and the overhead / boilerplate it tends to create. We’ll take a look at how we can simplify the documentation on this and make it more straightforward. I appreciate it!
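For what it’s worth, the helper in the quoted snippet can already be collapsed onto a function that ships with nixpkgs, something like (an untested sketch):

forAllSystems = f: nixpkgs.lib.genAttrs allSystems (system: f {
  pkgs = import nixpkgs { inherit system; };
});

which drops the hand-rolled nameValuePair/genAttrs definitions entirely.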
That makes sense, thanks for the explanation. I reread my comment and I think it comes off as a bit too negative. I really did enjoy the content that is there so far, I just wanted it to keep going!
The nix.dev article looks helpful, so I’ll definitely go through that.
What I was really hoping for was more of a structured path to guide me through these concepts (a continuation from the guide). I realise how challenging that is likely to be given how broad the ecosystem is. Everyone probably has their own motivation for learning it. For me, the reproducible development environments are a big draw. For another person it might be something completely different. So I don’t know what that looks like exactly. Possibly branching paths after the quick start guide in the form of guides for the most common use cases? Whatever the case I think the hand holding needs to continue a little longer before you let people loose to discover the rest of the ecosystem for themselves. My best learning experiences have been just that. I follow the guide until I can’t help imagining all the possibilities and I am naturally compelled to go off and explore those ideas for myself. That’s where my journey really starts. If I’m going through a book (for example), that point is usually somewhere well before the middle and I may not even finish because I have enough confidence that I know what to look for. With this guide I still feel lost in a vast sea. It still feels like there’s a very large investment up front (in Nix), and I’m just trying to find the motivation to push through (hoping that it is what I imagine it to be).
Anyway, I hope my feedback is helpful in some way. I guess what I’m trying to say is that I definitely don’t expect Zero-to-Nix to be an exhaustive guide to the entire Nix ecosystem, but I do think it could carry the reader a little further, perhaps until they’re at least comfortable in writing their own simple flakes (for common use cases). A little extra confidence. That might be enough to reach that wonderful state of inspiration that drives further learning.
They provide templates for using flakes with some languages. Depending on the language you want to build stuff in, that’s what you should look at. I think they don’t spell it out because they tried to keep everything language-independent in the text, so you have to run the commands they provide to see more.
Templates are good for saving people the trouble of writing boilerplate. They are absolutely not a substitute for understanding. If you want to change something nontrivial in a project generated from a template, you still have to know how it all works.
The big problem is XUL extensions are a dead end - they make sandboxing and multiprocess much harder, if not impossible. I do think they should get WebExtensions near API parity though, before killing off the old model.
I gotta say, this would make me awfully nervous. I tend to agree with the dictum that you should be including at most one new and sexy technology in your stack at any point. The hype on Elixir and Elm is super high right now—that’s not to say that either one of them is overhyped, or undeserving of its attention, but if I read about starting one new project where essentially the entire stack for both the front and backend is brand new, I’m going to start wondering whether these decisions were made for engineering reasons or because the author wanted a chance to play with some new and exciting tools.
I’d put Elm in the “new and hip tech” bucket, but you should remember that Elixir is based on decades-old Erlang/OTP experience. If you really want to reduce Elixir to a short sentence, it’s cleaned-up syntactic sugar over Erlang, with macros (among other goodies).
Elixir is hyped, sure, but at least its claims about fault tolerance and distribution have historical proof because they’re Erlang’s claims.
–
Something that stands out to me about the Elixir/Elm pairings is that people focus on Elm’s types as a major point yet use a dynamic language for the backend.
I would argue that while Elm is still currently in that bucket, it is starting to leave. The language is becoming more stable on the whole, and the ideas it promotes have already spread to almost every other frontend framework available. If you’re writing Redux, you’re writing Elm without types. With regards to the types, Elm uses a basic type system which has existed in MLs for decades. At this point, there is little that Elm introduces as hip - instead, it is more the coagulation of a few very popular ideas from elsewhere. Elm’s type system has historical proof because of this. The Elm architecture has been popular in concurrency groups for a long time, too.
Something that stands out to me about the Elixir/Elm pairings is that people focus on Elm’s types as a major point yet use a dynamic language for the backend.
Agreed. Elixir’s typespecs just don’t match up with the types-as-a-feature of Elm. To me, the biggest selling point of Elixir is the build tooling that helps simplify getting started with Erlang. Rebar is dead, long live rebar. Honestly though, if I got to choose the backend for a new production job, I would probably choose Haskell. I’d rather have a proper type system than not. This is not necessarily true for other Elm users - I see a lot of them coming from Rails or JS backgrounds, and Elm’s type system is often the first proper type system they’ve used.
Fair enough, I do think Elm is a safe choice nowadays (I have a friend who works at NRI). Like @zdsmith I do see it “hyped up” a lot so I got the wrong impression. Thanks for the explanation.
Usually, I’d agree with you. Especially when looking at the JS world. But many of the interesting concepts in Elm and Elixir have one significant advantage over “hip.js”: it’s very hard to screw up your code. The lack of this feature in CoffeeScript and ES7, for example, makes it easy to misinterpret or half-heartedly apply solutions one doesn’t understand, making it hard as hell to maintain the software afterwards.
But proper typing, pure functions, good error messages, and time travelling debuggers are all things that are hard to misuse and generally lead to better code. These languages teach better programming, IMO.
whether these decisions were made for engineering reasons or because the author wanted a chance to play with some new and exciting tools.
This is basically a mass-hallucination in the Elixir community. For some reason the Elm meme has become deeply embedded in the brain stem of some Elixir devs, not for any good reason it would seem.
Hi everyone, My first post here. Nice to meet you all and thanks to @ocramz for inviting me.
This project has been in the making for a long time. It includes tooling and infrastructure to help developers write high-level tests for complex software workflows that are not easy to unit test. I wanted to take ideas from visual regression testing, snapshot testing, and property-based testing and build a general-purpose regression testing system that developers can use to find the unintended side-effects of their day-to-day code changes during the development stage. I wrote the first line of code in 2018 and left my job 2 years ago to work on it full-time (i.e. all the time). I am posting it here because I want to hear your unvarnished thoughts and feedback about its latest v2.0 release, a milestone version that hopes to be useful to small and large teams alike. This version comes with:
An easy to self-host server that stores test results for new versions of your software workflows, automatically compares them against a previous baseline version, and reports any differences in behavior or performance.
A CLI that enables snapshot testing without using snapshot files. It lets you capture the actual output of your software and remotely compare it against a previous version without having to write code or to locally store the previous output.
4 SDKs in Python, C++, Java, JavaScript that let you write high-level tests to capture values of variables and runtime of functions for different test cases and submit them to the Touca server.
Test runner and GitHub action plugins that help you continuously run your tests as part of the CI and find breaking changes before merging PRs.
I would really appreciate your honest feedback, positive or negative, about Touca. Do you find this useful? Would love to hear your thoughts and happy to answer any questions.
Congrats on the release!
I haven’t really felt the pain of making snapshot tests scale (currently working with only about a 100 snapshots), and the Website / Docs don’t really make it clear to me what problems Touca solves that I might face in the future.
I’m definitely interested in a UI for visual regression testing as I’ve struggled to find good tooling for that, but looking at the screenshots it doesn’t seem like a big focus for Touca.
Thanks for sharing your thoughts. I’m sorry that you didn’t find the docs clear enough. I’ve tried to briefly explain the differences between Touca and snapshot testing here: https://touca.io/docs/guides/vs-snapshot/. As outlined in that document, one primary difference is that Touca removes the need for storing snapshot files in version control and moves the storage, comparison, visualization, and reporting of test results to a remote server. In this new model, instead of git committing snapshot files with differences, you’d use the server to promote a new version as baseline.
You are right that visual regression testing of web UI is not a focus of this project. I believe there are many good solutions in the market for web apps. We focus on software that may not have a web interface, like API endpoints, data pipelines, machine learning algorithms, command-line tools. We want to make it easy to test these types of software with various inputs without writing extensive unit tests and integration tests.
I found that page and got the difference, but I really like version control and its benefits. Focusing a little bit on the benefits a centralized server brings in comparison to the usual approach would be the interesting part for me :)
When visiting https://touca.io/ as someone who already understands snapshot testing, it was difficult to find how Touca was different from normal snapshot testing. The only obvious difference was that it has a web app interface instead of my local Git diff interface, but that on its own doesn’t sound like a desirable feature for an app that already has a CI pipeline. I think your pitch about “removes the need for storing snapshot files in version control” should be more visible. I still have no need for the feature – I struggle to imagine a case where snapshot files in nested folders would not be easy enough to manage – but at least that information would have made it clearer that the software is targeting problems I don’t have, so I don’t need to read more of the site.
On the home page, one heading is relevant to that question, “Snapshot testing without snapshot files”. However, the rest of that block doesn’t elaborate on how snapshot testing could possibly work without snapshot files or clarify whether “snapshot files” are the generated snapshots or source code files with snapshot tests. The next sentence, “Remotely compare the your software output against a previous baseline version”, sounds like it could equally apply to traditional snapshot testing. I think the word “remotely” in that sentence was also meant to imply “without snapshot files”, but I just interpreted it as “diff snapshot files on our remote server instead of locally”. (Also, there’s a typo: “the your” should be “the”.) The final part of that block, “
brew install touca
”, is not at all connected to the previous two sentences and seems out of place. Without reorganizing the whole page, thatbrew
command might seem more relevant if a “Try it:” label were before it.After I first was confused by the home page, when I clicked Docs, I just saw another summary of snapshot testing in general followed by a bunch of links, none of which sounded like they would explain how Touca is different.
I saw in the sidebar that I was in the Getting Started section. When I skimmed the next page, Concepts, it looked like just another explanation of snapshot tests in general.
Okay, so I went to the next page, Writing Tests. Great, examples of how to actually use this. Except… after reading the whole page it was hard to understand the expected input and output of each test. There is “alice”, “bob”, and “charlie”, but is the input literally those strings? How can that be when the assertions mention
student.gpa
? And where is the expected output saved – why wasn’t it mentioned on this page? If it’s not saved to a file, it must be saved to a server, but I had trouble imagining at this point why that would be better. At that point I gave up on understanding Touca. Only later did I come back to these comments and see your link.I think the Writing Tests page is too abstract. The JavaScript code sample calls
code_under_test.find_student(username)
, but that’s not defined anywhere, so I struggled to imagine how it’s connected to the rest of the code. Maybe include a short stub definition in your examples, like this?And the next line,
touca.check("gpa", student.gpa)
, didn’t give any hint as to what expected value Touca would compare the GPA against. Maybe add a comment to that line:// compare student.gpa to Touca’s saved 'gpa' value in the scope of this testcase
. That comment may be inaccurate; you can correct it.Hi @roryokane, Thank you so much for this thorough feedback. I really appreciate you taking the time. I am going to read it again and again tomorrow and apply your suggestions both to the landing page and to the docs.
Agreed. Will try to clarify.
You mentioned you are familiar with snapshot testing. Could you share more about your typical use-case?
Based on your suggestion about this page and the “Concepts” page, I’m tempted to just rewrite the entire “Getting Started” section. This is exactly the type of feedback that I was looking for. Thank you!
I’m really desperate for a tool to preserve these websites in an ‘open web” way. Of course, archive.org exists, but if (when?) their datacenter catches fire or similar, everything on it might be lost as well. I think solutions like archivebox handle the archiving part well, but there’s no clear story on how to easily host archived sites and make them discoverable.
Maybe a good idea to donate then for that to not happen.
However I agree that decentralizing these things is a good idea. I know archive.org had some browser extension or something at some point to help with indexing things that crawlers have a hard time to reach. Maybe it would be worthwhile to base of that so both benefit?
I want to move to a world where an entire web site, as of a particular moment in time, exists as a snapshot in a distributed content-addressed storage system and your browser can be readily directed to download the entire thing
this would of course necessarily entail having fewer features that depend on server interaction, but I think uh… most sites should not be apps, heh
I’m aware that this is sort-of throwing a technical solution at a social problem, but I think in this case the technology could dovetail well with a cultural change where site owners want to do something about preservation - it would give an easy, immediately actionable thing that people who care can do that makes a real difference
Have you looked into IPFS?
https://ipfs.tech/
I have, yes! I think IPFS is a very solid architecture, should definitely be the basis of anything like this, and probably solves about 90% of the problem. Of the part that’s left, most of it is documentation that explains what people might want to do, why, and how, and the smallest part is any small glue code that’s needed to make that easy.
One idea I had was for an appliance thing that could bring static IPFS blog / site publishing to the masses. Something like:
Then you just write your content, copy it to the box (Samba? SFTP? …?), it generates a static site, you eyeball it, then hit ‘publish’ when you’re happy.
Aim would be for it to be simple enough for non-techies to use. There is a lot of devil in that detail, though. Some things I was spiking:
Etc.
I’m all for this. This is very similar to what I’ve been thinking about. I would personally choose sftp over Samba because managing ssh credentials is a skill that I think is very empowering and worth teaching, and because I never like tying my future to the whims of a megacorp, but that does incur an additional burden for documentation, since most people won’t know how to use it.
Your point 1 brings up another possibility though, which is using git-over-ssh. Then the generation can be kicked off by an on-push trigger in git.
With regard to your point 2 I personally lean very heavily towards NixOS as it’s good at this sort of thing, but teaching people how to manage appliances like this is a big writing task. I’m not a technical writer, and I’m not really the right person to take that on, although I’m always happy to chat with anyone who does.
Windows support does seem quite challenging, I don’t have good answers there.
Yeaaaahhhhhhhhhhh … I’m kinda reluctant to have to expose non-techies to Git. It’d be perfect for a coding-savvy market though.
I was thinking of something that wouldn’t have to be managed … updates would “just happen”. That turns out to be surprisingly difficult (c.f. herd of yaks).
It’s surprising to me that releasing an open-source appliance like this would still be a lot of work, but honestly, it really does seem like it would.
I’ve even started to build an archival app on top of it, but there are many thorny problems. How do you ensure the authenticity of archives published by other people? Where and how do you index archived content across the network? How do you get other people to re-host already archived content? How do you even get enough people interested in this to make it useful at all?
I’m definitely interested in this as well; I’ve started to believe that personal archives of sites/articles are the most resilient way to preserve information.
I’ll be honest, this doesn’t look very useful. This is a tutorial of how to invoke Nix, not how to use it. The second class of target people, “who have tried to cross the chasm to using it in their daily workflows but haven’t gotten there yet”, aren’t stuck because nix-build and nix-shell “present significant cognitive overhead”* compared to nix build and nix shell. They’re stuck because they don’t know how to package things. My daily workflow does not involve rebuilding bat. My daily workflow involves building my project, and to do that with Nix, I need documentation that explains how to make Nix do that. This doesn’t provide that outside of links to existing documentation, and it’s that documentation that is, imho, most in need of attention. This just doesn’t feel like a good use of documentation-directed effort to me.
*As an aside, this claim makes no sense to me whatsoever. Learning a dozen commands isn’t any harder than learning a dozen verbs on one command.
As a person interested in nix, I find the structure and writing style of Zero to Nix far more useful than any other nix documentation I’ve read so far. Especially the opinionated style is useful as an orientation guide in the vast sea of things to learn and try out.
I’m glad it’s useful to you, then. Hopefully you’re able to get a foothold of understanding that carries you through the older strata of documentation.
Completely agree. I applaud the effort at making Nix more approachable, but Nix is still way too obscure and things that are simple to do with other package managers are still too difficult to do with Nix.
I decided to peek into Nix again after seeing this guide (I’ve tried and given up on multiple occasions in the past). I wanted to try making a C++ development environment to build a project. It’s a shame that creating a “flake.nix” file to do something even that simple is so complicated that you have to resort to using a template from a third party. Writing the file by hand, from scratch, is basically out of the question for a beginner.
But ok, I’ll use the template. Now it’s time to add some packages. I use the “nix search” tool, which shows me a large list of package names, but doesn’t tell me anything about what’s in these packages. For example, what can I find in “clang-tools_11”? Is there a way to list the files that package includes? What about “clang11Stdenv”? That looks like it could be useful, but again, there’s no way (that I can see) to view what makes up that package or what it actually provides.
In contrast, a package manager like pacman can list all of the files that a package will install. Even Homebrew will tell me what a package’s dependencies are, and will give me a hyperlink to the “formula” for a recipe, so I can see exactly how the package is defined. Is any of this possible with Nix? If it is, that’s the kind of stuff that is useful to know for using Nix as a package manager. Not rebuilding bat.
The top search result for “how to view contents of a package in Nix” shows this answer:
What does this mean? How is anyone who hasn’t already invested dozens of hours into Nix supposed to understand this, let alone figure this out on their own?
In the end, I think this guide is like so much other Nix documentation. It provides surface level, trivial examples to give the illusion that “Nix is easy”, but leaves you completely ill-equipped for doing anything useful.
Sorry for the rant, but the hype around Nix is never ending, and I often feel like I’m being gaslit because every time I check it out I end up feeling frustrated and extremely unproductive. This doesn’t seem to match the experience of all of the ardent Nix fans out there, so I’m left feeling confused about what I’m doing wrong.
I hear you. As an ardent Nix fan, I have similar experiences too. Sometimes I’ll be trying to get something packaged, and the program or its language ecosystem isn’t easily “tamed” by Nix, and I get so frustrated. And that is coming from someone with quite a lot of background in build systems, isolation, and Nix itself, and plenty of experience figuring these things out. Days like that I feel like I lost, and sometimes get pretty down about it. A couple years ago I “gave up” and even installed another Linux distro. (This didn’t last for more than a day…)
I hope one day Nix is as easy to use as anything else. Or even easier. It empowers me to do so much without fear, and I’m addicted to that power.
My perspective is that to do this we need to:
What I don’t want to do is beat my head against the wall every time I want to try some new software. I admit that if it takes me more than an hour, I’ll sometimes boot a VM and try it out in another distro. That’s okay by me. By my count, more Nix in the world is good for everyone, and when it doesn’t serve me that is okay too.
As an aside, this line:
ls -l $(nix eval -f /etc/nixos/apps --raw xerox6000-6010.outPath)
seems a bit weird. However, the idea of listing what files a package will install is a bit … different in Nix, because “installing” is … a bit different. We’re going to be publishing either a z2n page, or a blog post about that soon.
For searching, at least, I’ve always used search.nixos.org rather than the CLI tool. The search results usually have a link to the package definition, though often the package source isn’t legible to someone who isn’t an advanced user.
clang-tools_11 is defined here, if anything in there is helpful.
Personally, I run NixOS on some home servers and I find the declarative system configuration aspect to be incredibly helpful. But when it comes to working on individual projects, I mostly develop them in their own respective tooling, sometimes with a nix-shell or nix develop config to get the dependencies I want installed, and I only figure out how to make them buildable with Nix later on.
I’m definitely the target audience for this, and having just gone through the Quick Start I find myself agreeing with you. I was optimistic at first, as what’s there is presented clearly, but as I reached the last page I realised I don’t feel any more informed than I was in the first place and all I’ve done is run someone else’s ‘flakes’ without really understanding what’s happening (I understand broadly what is happening with each command of course, but not in any kind of sense that I could actually reproduce it myself). None of this makes Nix ‘click’ for me unfortunately. It’s a decent start, but as you said it’s just not all that helpful in its current state. It needs to provide structure and order to the learning process. You can’t just gloss over the Nix language… when you introduce me to something like this, I want to read and understand it, so that I might be able to write it myself:
https://github.com/DeterminateSystems/zero-to-nix/blob/main/nix/templates/dev/javascript/flake.nix
But nothing I’ve read here has given me any kind of guidance on that. It’s like I’m supposed to just trust that it works. What do I take away from that? A good guide will engage my curiosity and encourage me to look deeper at what’s being presented. Shouldn’t the examples be as trimmed down as possible? Is this part really necessary?
I kind of understand what’s happening here, but that’s quite off-putting if you’re trying to convince me that Nix is easy and approachable. Am I supposed to ignore this? It only makes me wonder why I would bother. I’m sure you can do things in a far simpler way, so why this complexity? Perhaps there’s a little too much emphasis on correctness before you’ve reached the natural point of explaining why it is necessary. A nice start, and I enjoyed going through it, but it needs much more work to live up to the promise of its title, and ultimately I’m disappointed.
Thank you for working on this, and I hope you continue. Maybe I’ll be able to come back later and have a better experience.
Thanks for this feedback. One of the things in discussion around Flakes is system specificity and the overhead / boilerplate it tends to create. We’ll take a look at how we can simplify the documentation on this and make it more straightforward. I appreciate it!
That makes sense, thanks for the explanation. I reread my comment and I think it comes off as a bit too negative. I really did enjoy the content that is there so far, I just wanted it to keep going!
Whew, that is really great to hear =).
For what it is worth…
The nix.dev article looks helpful, so I’ll definitely go through that.
What I was really hoping for was more of a structured path to guide me through these concepts (a continuation from the guide). I realise how challenging that is likely to be given how broad the ecosystem is. Everyone probably has their own motivation for learning it. For me, the reproducible development environments are a big draw. For another person it might be something completely different. So I don’t know what that looks like exactly. Possibly branching paths after the quick start guide in the form of guides for the most common use cases? Whatever the case I think the hand holding needs to continue a little longer before you let people loose to discover the rest of the ecosystem for themselves. My best learning experiences have been just that. I follow the guide until I can’t help imagining all the possibilities and I am naturally compelled to go off and explore those ideas for myself. That’s where my journey really starts. If I’m going through a book (for example), that point is usually somewhere well before the middle and I may not even finish because I have enough confidence that I know what to look for. With this guide I still feel lost in a vast sea. It still feels like there’s a very large investment up front (in Nix), and I’m just trying to find the motivation to push through (hoping that it is what I imagine it to be).
Anyway, I hope my feedback is helpful in some way. I guess what I’m trying to say is that I definitely don’t expect Zero-to-Nix to be an exhaustive guide to the entire Nix ecosystem, but I do think it could carry the reader a little further, perhaps until they’re at least comfortable in writing their own simple flakes (for common use cases). A little extra confidence. That might be enough to reach that wonderful state of inspiration that drives further learning.
They provide templates for using flakes with some languages. Depending on the language you want to use to build stuff, that’s what you should look at. I think they don’t spell it out because they tried to keep everything language-independent in the text, so you have to run the commands they provide to see more.
Templates are good for saving people the trouble of writing boilerplate. They are absolutely not a substitute for understanding. If you want to change something nontrivial in a project generated from a template, you still have to know how it all works.
If you’re interested in this, Wildcard is a cool example of malleable software in the real world.
Love the addition of exhaustive checks in case statements. After learning OCaml, it is something I’ve wanted in every language I touch. Nicely done.
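For anyone who hasn’t run into the feature: the compiler rejects a case statement that doesn’t cover every variant of the type being matched. A rough TypeScript analogue (not the language from the article, just an illustration of the idea):

type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2;
    case "square":
      return s.side ** 2;
    default: {
      // If a new Shape variant is added without a matching case above,
      // this assignment becomes a type error, so the omission is caught
      // at compile time rather than at runtime.
      const unhandled: never = s;
      return unhandled;
    }
  }
}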
Yes! It really improves that warm feeling of safety :)
“mostly works” is a funny choice of words, given that the app uses proof of work to prevent spamming.
Just discovered nidium, which looks like a nice alternative.
It’s sad to see the classic Firefox addons go… IIRC, WebExtensions have much more limited capabilities.
The big problem is that XUL extensions are a dead end - they make sandboxing and multiprocess much harder, if not impossible. I do think they should bring WebExtensions to near API parity, though, before killing off the old model.
I gotta say, this would make me awfully nervous. I tend to agree with the dictum that you should be including at most one new and sexy technology in your stack at any point. The hype on Elixir and Elm is super high right now—that’s not to say that either one of them is overhyped, or undeserving of its attention, but if I read about starting one new project where essentially the entire stack for both the front and backend is brand new, I’m going to start wondering whether these decisions were made for engineering reasons or because the author wanted a chance to play with some new and exciting tools.
I’d put Elm in the “new and hip tech” bucket, but you should remember that Elixir is based on decades-old Erlang/OTP experience. If you really want to reduce Elixir to a short sentence, it’s cleaned-up syntactic sugar over Erlang, with macros (among other goodies).
Elixir is hyped, sure, but at least its claims about fault tolerance and distribution have historical proof because they’re Erlang’s claims.
–
Something that stands out to me about the Elixir/Elm pairings is that people focus on Elm’s types as a major point yet use a dynamic language for the backend.
I would argue that while Elm is still currently in that bucket, it is starting to leave it. The language is becoming more stable on the whole, and the ideas it promotes have already spread to almost every other frontend framework available. If you’re writing Redux, you’re writing Elm without types. With regard to the types, Elm uses a basic type system which has existed in MLs for decades. At this point, there is little that Elm introduces as hip - instead, it is more the consolidation of a few very popular ideas from elsewhere. Elm’s type system has historical proof because of this. The Elm architecture has been popular in concurrency groups for a long time, too.
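To make the Redux comparison concrete, here is a rough sketch of the shared shape (TypeScript, with a made-up counter Model and Msg; this is an illustration, not code from either project):

type Model = { count: number };

type Msg =
  | { type: "Increment" }
  | { type: "Decrement" };

// In Elm this is `update : Msg -> Model -> Model`; in Redux it is a reducer
// `(state, action) => state`. Same idea: a pure function from the current
// state and a message to the next state.
function update(model: Model, msg: Msg): Model {
  switch (msg.type) {
    case "Increment":
      return { count: model.count + 1 };
    case "Decrement":
      return { count: model.count - 1 };
  }
}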
Agreed. Elixir’s typespecs just don’t match up with the types-as-a-feature of Elm. To me, the biggest selling point of Elixir is the build tooling that helps simplify getting started with Erlang. Rebar is dead, long live rebar. Honestly though, if I got to choose the backend for a new production job, I would probably choose Haskell. I’d rather have a proper type system than not. This is not necessarily true for other Elm users - I see a lot of them coming from Rails or JS backgrounds, and Elm’s type system is often the first proper type system they’ve used.
Fair enough, I do think Elm is a safe choice nowadays (I have a friend who works at NRI). Like @zdsmith, I do see it “hyped up” a lot, so I got the wrong impression. Thanks for the explanation.
Usually, I’d agree with you. Especially when looking at the JS world. But many of the interesting concepts in Elm and Elixir have one significant advantage over “hip.js”: it’s very hard to screw up your code. The lack of this feature in CoffeeScript and ES7, for example, makes it easy to misinterpret or half-heartedly apply solutions one doesn’t understand, making it hard as hell to maintain the software afterwards.
But proper typing, pure functions, good error messages, and time travelling debuggers are all things that are hard to misuse and generally lead to better code. These languages teach better programming, IMO.
This is basically a mass hallucination in the Elixir community. For some reason the Elm meme has become deeply embedded in the brain stem of some Elixir devs, not for any good reason, it would seem.