podcast
The topic isn’t really my thing, but you seem like really likable guys. Nice work.
edit: whoops, I misunderstood but still nice work.
Thanks! The movies podcast stars me and a fellow amateur standup (see the theme) and the other one is more of a weekly journal. Both are infants and I just want to raise them right.
The built-in PEG parsing library in Factor is super impressive. The Tiny C Compiler builds a stack while parsing C code and pops it to emit code; since the stack is already there implicitly in Factor, I wonder how terse a super-tiny C compiler could be written in Factor.
I haven’t played with the PEG parser in Factor yet, but hearing this intrigues me a bit.
There is already an implicit stack in Factor, but because it does double duty as the place where parameters live, it might be a little tricky to use it the way the Tiny C Compiler does (I haven’t checked tcc, so I don’t know for sure), especially if there’s any off-stack state. That said, Factor does have globals and dynamic/lexical variables that could also be used.
Thinking about it more, you are definitely right that it may be super awkward to do anything tricky, but the code embedded in a PEG parser would still help make things compact.
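For a flavor of the tcc approach, here is a hypothetical sketch in Python (not Factor): a tiny recursive-descent parser, PEG-like in spirit, that emits stack-machine code as it parses, so the operand stack exists only in the emitted program.

```python
# Hypothetical sketch (Python, not Factor): a one-pass compiler for integer
# expressions that, like tcc, emits stack-machine code while parsing.
def compile_expr(tokens):
    out = []                     # emitted stack-machine instructions
    pos = [0]                    # parser cursor (list so closures can mutate)

    def peek():
        return tokens[pos[0]] if pos[0] < len(tokens) else None

    def take():
        tok = tokens[pos[0]]
        pos[0] += 1
        return tok

    def factor():                # factor <- number
        out.append(("push", int(take())))

    def term():                  # term <- factor ('*' factor)*
        factor()
        while peek() == "*":
            take()
            factor()
            out.append(("mul",))

    def expr():                  # expr <- term ('+' term)*
        term()
        while peek() == "+":
            take()
            term()
            out.append(("add",))

    expr()
    return out

# "1 + 2 * 3" compiles to: push 1, push 2, push 3, mul, add
print(compile_expr("1 + 2 * 3".split()))
```

In Factor the `out.append` calls would presumably just become words operating on the data stack, which is where the terseness the parent comment imagines would come from.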
The whole point of shell scripting is getting shit done and moving on to focus on more important stuff.
Once the script takes on a life of its own and tries to grow up, just spend some time on this teenager and teach it to speak a real adult language. Another option, sometimes preferable, is keeping it young and little while replacing its limbs with cybernetics made of real grown-up stuff.
I would say the point of shell is composing other programs well, not hacking stuff together poorly - that’s what Python/Perl is for :)
That being said, I agree with you, shell has so much potential, but is so shitty for some unknown reason.
For me the whole point of shell scripting is that if it breaks, I can easily decompose the program into parts, and execute each part independently.
For getting things done, I’d rather use any general-purpose dynamically-typed scripting language.
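As a small illustration of that decomposability (file paths are just for the example), each stage of a pipeline can be materialized to a file, so when something breaks you can rerun and inspect any stage on its own:

```shell
# Each stage lands in a file; any stage can be rerun and inspected alone.
printf 'b\na\nb\n' > /tmp/words.txt
sort /tmp/words.txt      > /tmp/sorted.txt   # stage 1: inspect on its own
uniq -c /tmp/sorted.txt  > /tmp/counts.txt   # stage 2: count duplicates
sort -rn /tmp/counts.txt                     # stage 3: most common first
```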
@hwayne Calling the Pony developers smug for that is a bit misleading given the missing context… Pony has checked exceptions: calls that can fail need to be annotated with ‘?’. If division could fail, a whole bunch of functions would need annotations, defeating the value of ‘?’ as a documentation tool.
This makes 1/0 == 0 the practical choice in Pony, to avoid typing ‘?’ everywhere.
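Pony’s actual syntax aside, the dynamic is easy to sketch in Rust: once a leaf operation like division becomes fallible, the failure annotation has to climb through every caller, so it stops conveying useful information.

```rust
// Hedged analogy in Rust (not Pony): a fallible division forces every
// transitive caller to become fallible too, which is the annotation
// blow-up the Pony developers were avoiding with 1/0 == 0.
fn div(a: i64, b: i64) -> Option<i64> {
    if b == 0 { None } else { Some(a / b) }
}

fn average(sum: i64, n: i64) -> Option<i64> {
    // `?`-style propagation: this function is now fallible purely
    // because division is, and so is anything that calls it.
    div(sum, n)
}

fn main() {
    assert_eq!(average(10, 2), Some(5));
    assert_eq!(average(10, 0), None);
    println!("ok");
}
```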
It’s not 100% clear but I’m referring to the original tweet being smug: the person who made the “we men of industry” tweet. I cut off their username to anonymize them.
Oh ok. Btw, your twitter account is a lot of fun. Thanks for the laughs - everyone reading this, follow him :)
Any plans for features you want to include? Or is it more about recreating something as a learning experience?
My aim is to replace make. I have the core algorithm done; it takes a very plain tab-separated format as input. I think there needs to be a good UI for describing builds. I’m proving it out by building existing projects with it.
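The tool and its input format aren’t shown here, but the core idea - a dependency graph read from tab-separated lines and walked in dependency order - might look roughly like this Python sketch (the format and names are my guesses, not the actual tool’s):

```python
# Hypothetical sketch of a make-like core: one tab-separated line per
# target ("target<TAB>space-separated deps"), walked depth-first so every
# dependency is ordered before its dependents. (No cycle detection, for brevity.)
def build_order(spec):
    deps = {}
    for line in spec.strip().splitlines():
        target, _, dep_field = line.partition("\t")
        deps[target] = dep_field.split()

    order, done = [], set()

    def visit(t):
        if t in done:
            return
        done.add(t)
        for d in deps.get(t, []):      # build dependencies first
            visit(d)
        order.append(t)

    for t in deps:
        visit(t)
    return order

spec = "app\tmain.o util.o\nmain.o\tmain.c\nutil.o\tutil.c\n"
print(build_order(spec))  # sources first, "app" last
```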
Within a few weeks of writing Make, I already had a dozen friends who were using it.
So even though I knew that “tab in column 1” was a bad idea, I didn’t want to disrupt my user base.
So instead I wrought havoc on tens of millions.
(from https://beebo.org/haycorn/2015-04-20_tabs-and-makefiles.html)
The side-note is even better:
Side note: I was awarded the ACM Software Systems Award for Make a decade ago. In my one minute talk on stage, I began “I would like to apologize”. The audience then split in two - half started laughing, the other half looked at the laughers. A perfect bipartite graph of programmers and non-programmers.
So far I have set up a blog, which will be where I document a new backup tool I have been designing, prototyping and thinking about for a while now: https://packnback.github.io/blog/work_begins/ .
I also want to play with https://tryretool.com/about , which seems like it might speed up my progress on a different project by tying up a bunch of loose ends that are currently handled with manual SQL queries.
In Canada most home routers (well, from Bell at least, which is one of the two dominant ISPs) come with a long randomly generated wifi password stamped on them.
Specifically, 8 characters long, and for no apparent reason limited to hex ([0-9A-F]{8}), which gives about 4 billion possible passwords. It takes about a day on my GTX 970M to try every single one against a captured handshake.
The default ESSIDs (wifi network names) are of the form BELL###, so there are a thousand extremely common ESSIDs. Apparently WPA only salts the password with the ESSID before hashing it and publicly broadcasting it as part of the handshake. With a few years of computation time on a decent laptop (far less if I rented some modern GPUs from Google…) I could build rainbow tables for every one of those IDs, covering every possible default password.
On the bright side, it looks like this new method extracts a hash that includes the MAC addresses acting as a unique salt, so at least the rainbow-table method will still require capturing a handshake.
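For the curious, the keyspace math and the salting described above are easy to check: WPA2 derives the pairwise master key with PBKDF2-HMAC-SHA1 over the passphrase, salted only by the ESSID (4096 iterations, 32 bytes out), so one precomputed table per common ESSID covers every network using that default name. (The passphrase and ESSID below are made up.)

```python
# WPA2 PMK derivation: PBKDF2-HMAC-SHA1(passphrase, salt=ESSID, 4096, 32).
# The only per-network salt is the ESSID, which is why a thousand common
# default ESSIDs make rainbow tables attractive.
import hashlib

keyspace = 16 ** 8   # [0-9A-F]{8} default passwords
print(keyspace)      # 4294967296, i.e. the ~4 billion mentioned above

pmk = hashlib.pbkdf2_hmac("sha1", b"0BADF00D", b"BELL123", 4096, 32)
print(len(pmk), "byte PMK:", pmk.hex()[:16], "...")
```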
I never had this realization. Now my head has exploded.
What tool do you use to try these combinations? And is it heavily parallelized? To me 4 billion should not take a whole day…
I experimented with pyrit (24h runtime; builds some form of rainbow table; I wrote a short program to pipe all the passwords to it) and hashcat (20h runtime; no support for rainbow tables, but it can generate the password combinations itself via command-line flags). Both are heavily parallelized, with 100% utilization of my GPU.
My GPU is relatively old and in a laptop with shitty cooling, which may contribute to the runtime.
Running on a CPU it said it would take the better part of a month.
Interesting. While waiting for a reply, I thought to myself: I wonder how much it would cost to run it on Google Compute with the best hardware. Could be worth it to those who want wifi for a week or longer without paying anything. Spooky.
In Luxembourg every Fritz!Box comes with a password written only in the manual (not on the box itself) that is 20 hex characters long (5 groups of 4). It’s a pain to type at first, but it seems like a good one.
In New Zealand, most home routers I have seen recently come with a long randomly generated (I hope) wifi password stamped on the bottom. It may have other problems, but blind dictionary-based cracking is not going to work.
I currently use nix to build a mixed Rust/Go project that fetches and builds a lot of dependencies and links them all together. It works pretty well as a build tool, and I haven’t felt any of the issues he mentions.
Is there any implementation similar to /mnt/acme? The standard acme heavily uses the file server internally, and all the custom scripts depend on the availability of /mnt/acme too.
I finally had the inspiration for how to combine public/private-key encryption with content-addressed, deduplicated data. If you are interested in encrypted deduplicating backup tools and have some expertise, I wouldn’t mind talking it over to make sure it is sane. I did a tiny bit of work on a prototype backup tool, but it’s not a priority for now; I need to do some more boring work first.
Browsing this author’s public repos turns up some neat things:
A list of neat utilities, including a minimal SSH client in Go and a regex-based sort tool: https://github.com/as/torgo
A neat idea: a DSL in Go comments that generates Go code for parsing binary data: https://github.com/as/wire9
What appears to be a Go implementation of the Plan 9 shell: https://github.com/as/rc
I’ve been very happy with pass, a command-line tool that stores passwords and notes in a git repository. Being a directory of text files, it’s easy to use standard command-line tools on it or tinker with it programmatically. There’s a thriving ecosystem of plugins, tools, and clients.
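That scriptability falls out of the layout: one GPG file per entry. A stand-in directory (not a real pass store - a real one lives at ~/.password-store and holds actual encrypted files) shows why ordinary tools compose with it:

```shell
# Mimic a password-store layout to show why plain files compose well.
mkdir -p /tmp/demo-store/web /tmp/demo-store/mail
touch /tmp/demo-store/web/example.com.gpg /tmp/demo-store/mail/work.gpg

# Listing entries is just a find; pass's own `pass ls` is this idea plus tree(1).
find /tmp/demo-store -name '*.gpg' | sed 's|.*demo-store/||; s|\.gpg$||' | sort
```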
I also use autopass for autofilling in X applications. As time goes on, I fill in more and more autotype fields to check ‘remember me’ boxes and other non-standard fields. It’s really convenient. (One annoyance is that if any password file is not valid YAML, autopass errors to stdout without opening a window, so I hit my hotkey and nothing happens.)
One more vote for pass; I’ve been a happy user for years now. I was missing a proper browser extension for it, so I built one: Browserpass. It’s no longer maintained by me due to lack of time, but the community is doing a far better job of maintaining it than I possibly could, so that’s all good!
Pass looks pretty neat, but the reason I stick with KeePass(XC) is that pass leaks metadata in the filenames - your encryption doesn’t protect you from anyone reading the name of every site you have an account with, which is an often-overlooked drawback IMO.
Your filenames don’t have to be meaningful, though. It would be relatively trivial to extend pass to use randomly generated names, and then use an encrypted key->value file to look up the file you want.
On the other hand, if someone already has that kind of access to your device, accessing ~/.mozilla/firefox/... or other analogous directories with far more information is just as trivial, and probably has more informational value.
Then you’re working around a pretty central part of pass’s design, which I don’t really like. It should be better by default.
Wrt your second point: if you give up when they can read the filesystem, why even encrypt at all? IMO the idea is that you should be able to put your password storage on an untrusted medium and know that your data are safe.
if you give up when they can read the filesystem, why even encrypt at all?
Because in my opinion, there’s a difference between an intruder knowing that I have a “mail” password and them actually knowing this password.
Huh, you made me read the man page and learn about this - it’s really cool! What’s your usage like, though? Just use any barcode reader and then copy-paste into the password box?
A barcode reader I trusted, but yeah - it’s a good hack, because I usually have my laptop, which has full-disk encryption.
Yeah, when you said that, all I could think of was the barcode scanner I used to use that would store the result of each scan in a history file… Not ideal :)
Seems like the Android version’s maintainer is giving up. (Nice - 80k lines of code in just one dependency…)
The temptation to NIH it is growing stronger, but I don’t have enough time :(
In general, I agree with the idea and setup of unveil(), though I haven’t had much time to experiment with it yet. Something that irks me a bit, though, is that there doesn’t seem to be a way to hide a previously unveiled path - either that, or I am greatly misreading the man page.
The case I have in mind is a set of namespaces (i.e. paths) that are conditionally accessible (by path and rwx- mode) depending on where the call originates (a function in a user-provided scripting interface), and these may need to route through third-party libraries, so I routinely want to mask/unmask paths - so where’s reveil()? :-)
The idea is that you structure your program so it isn’t bouncing between privilege levels. It should never be possible to climb back out of a position of limited access. Sometimes this means using something like privilege separation, where different processes work together, passing file descriptors, etc.
I understand that, and to the largest extent possible given other constraints I do privsep and juggle descriptors around, but in those cases I can typically pledge without rpath/wpath, so the value of unveil there is rather limited.
Think of trying to unveil-harden something like Wireshark, which has a pattern similar to what I’m describing. There is some ‘light’ privsep in the form of the Lua interpreter/JIT: when the process is in that execution context, few file operations should really be exposed, possibly some temp storage. The other execution context, the “engine”, needs to be able to load/save from a much wider set of paths. Sure, it could be resectioned into better process privsep and so on, but the amount of work gets substantial and would lead to the same situation as above: rpath/wpath likely won’t be needed.
Yeah, the microkernels like OKL4 used an IDL with tools like CAmkES to get components to work with each other. As for Cap’n Proto, I actually recommended the same thing to people wanting to build stuff on separation kernels after talking to its author, Kenton Varda. He was definitely well-read on capability-security research and projects, which inspires confidence. The other cool thing is that you don’t give up performance to get security with Cap’n Proto. I love it when that happens.
I wonder if a POSIX environment for WebAssembly would be a good way to retrofit C utilities like this with more security.
Nix is one of those tools where you don’t know what you aren’t getting until you get it. There are so many things wrong with this post, but I only know that because I spent weeks wrestling with lots of those issues myself.
You basically need to read all the nix pills (https://nixos.org/nixos/nix-pills/), the nix manual, the nixpkgs manual and the nixos manual in a loop gradually filling in what is going on… which takes a long time.
Nix is very confusing at first, but enables things that you would not have thought possible once you know what you are doing. The core people don’t seem to evangelize much because it is just one of those tools that solved their problems so well, they don’t have to care about the outside world anymore.
I use NixOS for my laptop, desktop and a few servers; I have all my machines’ config under version control and can roll any machine back to any version whenever I want, remote-administer them, build an install on one computer, test it in a VM, and then ship it with a single command to another machine. I won’t go back to another OS, despite there being room for improvement, because no other OS comes close in terms of what you can do (my path has been windows -> ubuntu -> arch linux -> freebsd -> openbsd -> nixos).
I use NixOS on everything and completely agree. It’s a massive investment. It was worth it for me, but it shouldn’t have to be a massive investment. Need better tooling and docs.
Yeah, there are lots of things I wish I could explain, but the explanations take a large investment. Take, for example, the complaint about making a new language instead of using something existing… It seems sensible on the surface, until you understand deeply enough to know why laziness is needed, along with features like the pervasive use of interpolation to generate build scripts. Once you understand those, you know why a new language was made.
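A small, untested sketch (adapted from the runCommand pattern shown further down the thread) of what those two features buy together: store paths get spliced directly into a generated script via interpolation, and laziness means that out of the huge nixpkgs set only the packages actually referenced get evaluated.

```nix
with import <nixpkgs> {};

# Interpolating ${bash} pins the generated script to an exact store path,
# and the dependency on bash is recorded automatically. nixpkgs is enormous,
# but lazy evaluation means only what's referenced here is ever computed.
runCommand "hello" {} ''
  mkdir -p "$out/bin"
  printf '#!%s/bin/bash\necho hello\n' '${bash}' > "$out/bin/hello"
  chmod +x "$out/bin/hello"
''
```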
The lack of tooling IS a valid complaint, and the fact that the language isn’t statically typed could also be a valid complaint, but the community is growing despite all those issues, which is a good sign.
I’m hoping https://github.com/haskell-nix/hnix will help with that point, and the tooling.
You basically need to read all the nix pills (https://nixos.org/nixos/nix-pills/), the nix manual, the nixpkgs manual and the nixos manual in a loop gradually filling in what is going on… which takes a long time.
I’ve tried reading all of this, but I found it horribly confusing and frustrating — until I read the original thesis, which I think is (perhaps surprisingly) still the best resource for learning how nix works. It’s still a pretty big investment to read, but IMHO it’s at the very least a much less frustrating experience than bouncing from doc to doc.
(I wonder if the same is true of the NixOS paper?)
How do you manage secrets in configuration files? Passwords, SSH keys, TLS certs and so on. If you put them into the nix store, they must be world-readable, right?
One could put a reference to files outside the store in configuration files, but then you lose a bit of the determinism of NixOS, and with third-party software it’s not always easily possible to load e.g. passwords from an external file anyway.
Besides the learning curve, that was the single big problem which kept me from diving deeper into the nix ecosystem so far.
You are right, no passwords should ever go in the nix store.
The encryption key for my backup script is in a private, root-owned file under /secrets/ . This file is loaded by my cron job, so the nix store simply references the secret but doesn’t contain it. The secrets dir isn’t under version control, but it is backed up with encrypted backups.
Every daemon with secret config I have seen on NixOS has a “password file” option that does the same thing.
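As a concrete illustration (option names quoted from memory - treat them as approximate), the restic backup module is one such case: the store only ever contains the path string, never the secret itself.

```nix
# Illustrative NixOS config: passwordFile points outside the store,
# so the world-readable store holds only "/secrets/restic-password".
services.restic.backups.home = {
  repository   = "/srv/backup/repo";
  paths        = [ "/home" ];
  passwordFile = "/secrets/restic-password";  # root-owned, mode 0600
};
```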
How do you manage secrets in configuration files?
For my desktop machine I use pass with a hardware key. E.g. nix (home-manager) generates an .mbsyncrc with
PassCmd "pass Mail/magnolia"
For remote machines, I use NixOps’s method for keeping keys out of the store:
Nix is one of those tools where you don’t know what you aren’t getting until you get it. There are so many things wrong with this post
I have to disagree - though not with the second sentence: I was sure as I wrote the post that it was full of misconceptions and probably outright errors. I wrote it in part to capture those, in the hope that someone can use them to improve the docs.
But to disagree with the first sentence, I was keenly aware through the learning and writing that I was missing fundamental concepts and struggling to fill the gaps with pieces from other tools that didn’t quite fit. If there is indeed a whole ‘nother level of unknown unknowns, well, that’s pretty disheartening to me.
I can’t speak for your experience, but that’s how it was for me, anyway. On the plus side, it also meant nix solved more of the problems I was having once I understood it better. I even thought nix was overcomplicated, to the point that I started writing my own simpler package manager - only to find nix had solved problems I would run into before I knew what they were.
The OOM killer is, IMNSHO, broken as designed. Track how much memory is available, return NULL, and let the application deal with it then, when it can still be dealt with, instead of killing a random (I know, not really random) process later. I disable the OOM killer whenever feasible.
In practice, though, C++ throws and Rust panics; I think only well-written C code would have a chance of behaving ‘correctly’ in this case? And that’s the kind of low-level process that’s unlikely to be selected by the OOM killer.
So effectively, letting the application deal with it equals letting the application crash. The application that runs into this situation can be whatever application happens to need an allocation at some point. That seems more random than what the OOM killer targets?
That’s not the OS’s decision to make, though. With the OOM killer enabled, C/C++ doesn’t have the option to handle it differently. If Rust or Go ever want to change how they handle allocation failure in the future, they can’t while the OOM killer is enabled. It’s too strong a policy decision for such low-level concerns as allocation and process lifetime.
(Of course, I haven’t written a kernel used by billions, so it’s easy for me to judge.)
Sounds to me like a good opportunity for an opt-in flag asserting that a particular binary handles allocation failures gracefully: return NULL to those binaries when appropriate, and deal with everything else via the OOM killer.
If capacity planning were done and limits set on processes or process groups, the ones violating their own capacity would be the ones degraded.
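A sketch of that idea at the single-process level, using Python’s resource module (Unix-only): cap the process’s own address space so an oversized allocation fails locally, in a catchable way, instead of summoning the global OOM killer.

```python
# Per-process capacity limit: cap this process's address space so an
# oversized allocation raises MemoryError *here*, where it can be handled,
# rather than letting the kernel pick a victim later. (Unix-only.)
import resource

gib = 1 << 30
resource.setrlimit(resource.RLIMIT_AS, (2 * gib, 2 * gib))

try:
    waste = bytearray(4 * gib)    # well past our own 2 GiB cap
    denied = False
except MemoryError:
    denied = True                 # the violator degrades, nothing else does

print("allocation denied by our own limit" if denied else "allocation succeeded")
```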
OpenVMS used process limits for that reason, plus accounting purposes, as the link says. They also had both virtualized kernels and clustering to mitigate that level of failure.
I really want to love NixOS: the ideas, the tools, how things are supposed to work… Everything they propose sounds like the future to me. Being able to have my config, which defines how I want my computer to behave, and just plug it into whatever machine I may need to use sounds mind-blowing.
Personally, though, I am finding the learning curve steep as hell. Not only because the documentation seems to assume the reader is already slightly familiar with the environment and how things work, but also because I need to modify certain habits to make them work with NixOS. For example, one of the must-haves for me is my Emacs configured the way I like it. I can tell Nix to clone my Emacs configuration into the home folder, and Emacs should then be able to start downloading the packages it needs; but in reality that is not trivial, because the packages are expected to come from the Nix configuration instead of the Emacs one (to ensure the system is deterministic - it makes absolute sense). I am used to having everything available from everywhere, but NixOS keeps most things isolated by default to preserve purity.
I will keep fighting with stuff until I figure things out, but I am sure that as the project grows all these corners will be polished to make it more accessible to newcomers.
For what it’s worth, I’ve been a heavy user of Nix, NixOS and Emacs for years, but still haven’t bothered configuring Emacs with Nix. The Emacs package I use is emacs25.override { withGTK2 = false; withGTK3 = false; } (this causes it to compile with the lucid toolkit, avoiding http://bugzilla.gnome.org/show_bug.cgi?id=85715 ). I do everything else with a ~/.emacs.d that’s been growing for years, across various distros, and is a mixture of Emacs Prelude (which I started with), ELPA/MELPA/Marmalade and (more recently) use-package. I just install any dependencies into my user profile or NixOS systemPackages. Actually, I define a package called all which depends on everything I want; that way I can keep track of it in git, rather than using commands like nix-env which can cause junk to accumulate. It looks like this:
with import <nixpkgs> {};
buildEnv {
name = "all";
paths = [
abiword
arandr
audacious
cmus
(emacs25.override { withGTK2 = false; withGTK3 = false; })
gensgs
mplayer
picard
vlc
w3m
# and so on
];
}
There are certainly some aspects of Nix which require “buy in” (it looks like Guix is slightly better in this regard), but there are others which allow “business as usual”.
For example, if you want to make a Nix package that just runs some bash commands, you can try runCommand, e.g.
with import <nixpkgs> {};
runCommand "my-package-name" {} ''
# put your bash commands here
# the "result" of your package should be written to "$out"
# for example
mkdir -p "$out/bin"
printf "#!/usr/bin/env bash\necho hello world\n" > "$out/bin/myFirstProgram"
''
Whether this will work obviously depends on what the commands do, but if it works then it works (you can even run stuff like wget, git clone, etc. if you want to; although I’d include a comment like TODO: use fetchurl or fetchgit). If your scripts need env vars to be set, put them between the {}. If you want some particular program available, put buildInputs = [ your programs here ]; between the {}.
Another example is programs which assume the normal FHS filesystem layout: making them work is sometimes as easy as using steam-run (e.g. https://www.reddit.com/r/NixOS/comments/8h1eu5/how_do_you_deal_with_software_that_is_not_well/ ).
Whilst there’s complicated infrastructure in Nixpkgs to support packages which use Python, Haskell, autotools, etc., sometimes we can get away without going ‘all the way’ :)
Woah, thank you, that was super useful! I think I got it, but I still have to test it and have my own gotcha moments :)
When I was starting out, I just built a few packages from source in the traditional way to make them work the way I was used to; perhaps that could work with emacs too - install into home initially. (I don’t use emacs, sorry I can’t help more.)
You’re not alone - I installed NixOS recently and like what I’ve seen, but haven’t been able to put in enough time to get over the learning curve yet. Until I do, I’m fairly sure I’m missing several chances to “do things properly” because I’m not sure what that looks like under NixOS. This post and comments have been quite reassuring at least!
I guess that’s the beauty of open source - now we all have to go and fix the documentation?
I guess that’s the beauty of open source - now we all have to go and fix the documentation?
Well… I guess. I’ll make some coffee.
Setting up what I want to be a collaborative database of C compiler and tool tests:
https://c-testsuite.github.io/ https://github.com/c-testsuite/c-testsuite
If you know anyone who is into that sort of thing, send them my way.