Never heard of it, but it seems like a super interesting approach to interactive environments. I can’t help but remember that Bret Victor talk about how we have been programming in almost-anachronistic ways, with no innovation in the interfaces.
There’s nothing obsolete about text. Visual languages don’t work. They’ve been tried hundreds of times, to no avail, because GUIs are fundamentally bad user interfaces for experienced users. Text is a better interface for power users, and programming languages are for power users.
Why can’t I re-sort the definitions in my source instead of scrolling around then? Why is it hard to see a dependency graph for all my functions? Why do I have to jump between files all the time? Text - an interface for linear presentation of information - is fundamentally a kludge for code, which is anything but linear.
Why can’t I re-sort the definitions in my source instead of scrolling around then?
Sort them by what? It wouldn’t be difficult to write a script using the compiler module of Python to reorder the declarations in your file in an order you chose, which you could then use to replace the text of a buffer in your text editor. But usually I’d suggest what you want is to see a list of definitions in a particular order, which you could then use to jump to the definitions.
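To make that concrete, here is a rough sketch of both halves of the idea using Python’s standard ast module (the sample module and the name ordering are arbitrary choices for illustration):

```python
import ast

def list_definitions(source):
    """Top-level function/class definitions as (name, line) pairs,
    sorted by name. Usable either as a jump list or as a plan for
    reordering the text of the file."""
    tree = ast.parse(source)
    defs = [node for node in tree.body
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef,
                                 ast.ClassDef))]
    return [(node.name, node.lineno)
            for node in sorted(defs, key=lambda n: n.name)]

sample = "def zebra(): pass\nclass Apple: pass\ndef mango(): pass\n"
print(list_definitions(sample))  # [('Apple', 2), ('mango', 3), ('zebra', 1)]
```

An editor plugin could feed the current buffer through this and present the result as a clickable index.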
In every case that I’ve seen of not using plain text, it inevitably becomes inscrutable. What is actually in my Smalltalk/Lisp image? What is actually there? What can people get out of it later when I deploy it?
Why is it hard to see a dependency graph for all my functions?
Because nobody has written something that will take your source files, determine their dependencies, and produce the DOT output (a very popular text-based format for graphs, far superior in my opinion to any binary graph description format) for that graph? It’s not like it’s particularly difficult.
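For Python, at least, that really is an afternoon script. A toy sketch that walks import statements and emits DOT for graphviz (the module names and sources here are invented):

```python
import ast

def deps_to_dot(sources):
    """Emit a DOT digraph of the import graph; `sources` maps a
    module name to its source text."""
    lines = ["digraph deps {"]
    for name, src in sources.items():
        for node in ast.walk(ast.parse(src)):
            if isinstance(node, ast.Import):
                for alias in node.names:
                    lines.append(f'  "{name}" -> "{alias.name}";')
            elif isinstance(node, ast.ImportFrom) and node.module:
                lines.append(f'  "{name}" -> "{node.module}";')
    lines.append("}")
    return "\n".join(lines)

print(deps_to_dot({"app": "import util\nfrom db import query\n",
                   "util": "import os\n"}))
```

Pipe the output through `dot -Tsvg` and you have your dependency graph.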
Why do I have to jump between files all the time?
Because it turns out it’s useful to organise things into parts. Because it turns out it’s useful to be able to parallelise compilation and not reparse every bit of code you’ve ever written every time you change any part of it.
I think that it’s definitely a requirement of any decent programming language to have a way to easily take the source code of that programming language and reify it into a syntax tree, for example. That’s very useful to have in a standard library. In Lisp it’s just read, Python has more complex syntax and requires more machinery which is in a standard library module, other languages have similar things.
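In Python’s case the round trip between text and tree looks like this (ast.unparse needs Python 3.9+; the snippet being parsed is arbitrary):

```python
import ast

# Reify a snippet of source into its syntax tree...
tree = ast.parse("x = 1 + 2")
print(ast.dump(tree.body[0].value))
# e.g. BinOp(left=Constant(value=1), op=Add(), right=Constant(value=2))

# ...and project the tree back to text.
print(ast.unparse(tree))  # x = 1 + 2
```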
One point might be: maybe you don’t need a dependency graph if you can just make your code simpler, maybe you don’t need to jump around files much if your code is properly modularised (and you have a big enough screen and narrow enough maximum line length to have multiple files open at once), and maybe sorting your definitions is wrong and what you want is a sortable list of declarations from which you can jump to the definitions.
Not to mention that version control is important and version controlling things that aren’t text is a problem with conventional version control tools. Might not be an issue, you have your own VCS, but then you enter the land of expecting new users of your language to not only not use their standard editor, but also to not use their standard VCS, not use their standard pastebin, etc. How do you pastebin a snippet of a visual language so someone on an IRC channel can see it and give you help? How do you ask questions on StackOverflow about a visual language?
It’s not even an issue of them being unusual and unsupported. By their very nature, not using text means that these languages aren’t compatible with generic tools for working with text. And never will be. That’s the thing about text: rather than having many, many binary formats and few tools, you have one format and many, many tools.
Hey Miles, thanks for elaborating. I think we could have more interesting discussions if you give me a bit more credit and skip the trivial objections. You’re doing the same thing you did last time with C++ compilers. Yes, I know I could write a script, it’s not the point. I’m talking about interactive tools for source code analysis and manipulation, not a one-off sort.
I don’t agree with your objections about parallel compilation and parsing. It seems to me that you’re just thinking about existing tools and arguing from the status quo.
Further down, you make a suggestion which I interpret as “better languages could mitigate these issues” which is fair, but again I have to disagree because better languages always lead to more complex software which again requires better tools, so that’s a temporary solution at best.
You also raise a few objections, and here I should clarify that what I have in mind is not some kind of visual flowchart editor. What I’m claiming is that the conflation of internal representation and visual representation for code is counterproductive, but I think that a display representation that mostly looks like text is fine (as long as it’s actually within a structured editor). What I’m interested in is being able to manipulate symbols and units of code as well as aspects of its structure rather than individual characters.
Consequently, for pastebin or StackOverflow, you could just paste some text projection of the code, no problem. When it comes to VCS, well, the current situation is quite poor, so I’d welcome better tools there. For example, if there was a VCS that showed me diffs that take into account the semantics of the language (eg like this: https://www.semanticmerge.com), that would be pretty cool.
For the rest of your objections, I offer this analogy: imagine that we only had ASCII pictures, and none of this incompatible JPG/PSD/PNG nonsense with few complicated tools. Then we could use generic tools for working with text to manipulate these files, and we wouldn’t be constrained in any way whether we wanted to create beautiful paintings or complex diagrams. That’s the thing about text!
I think the practitioners and particularly academics in our field should have more sense of possibilities and less affection for things the way they are.
When it comes to VCS, well, the current situation is quite poor, so I’d welcome better tools there.
Existing VCS could work reasonably well if the serialisation/“text projection” was deterministic and ‘stable’, i.e. minimising the amount of spurious changes like re-ordering of definitions, etc. As a first approximation I can imagine an s-expression language arranging the top-level expressions into lexicographic order, spreading them out so each sub-expression gets its own line, normalising all unquoted whitespace, etc. This would be like a very opinionated gofmt.
If users want to preserve some layout etc., then the editor can store that as metadata in the file. I agree that semantics-aware diffing would be great though ;)
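A toy version of that “very opinionated gofmt” idea, sketched in Python for a parenthesised language (the normalisation rules here are arbitrary choices, not any real tool):

```python
def normalise(source):
    """A toy deterministic formatter: tokenise the s-expressions,
    discard all original layout, sort the top-level forms
    lexicographically, and print one form per line with single
    spaces. Assumes balanced parentheses."""
    tokens = source.replace("(", " ( ").replace(")", " ) ").split()
    forms, stack = [], []
    for tok in tokens:
        if tok == "(":
            stack.append([])
        elif tok == ")":
            done = stack.pop()
            (stack[-1] if stack else forms).append(done)
        else:
            (stack[-1] if stack else forms).append(tok)

    def show(form):
        if isinstance(form, str):
            return form
        return "(" + " ".join(show(x) for x in form) + ")"

    return "\n".join(sorted(show(f) for f in forms))

print(normalise("(define b 2)\n  (define   a\n 1)"))
# (define a 1)
# (define b 2)
```

Run on every commit, something like this would make diffs independent of how any particular editor laid the code out.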
So you always end up separating the storage format from display representation in order to create better tools, which is exactly my point.
Yes, I agree with your points. Was just remarking that some of these improvements (e.g. VCS) are easier to prototype and experiment with than others (e.g. semantics-aware queries of custom file formats).
The way I see it is that there are tools for turning text into an AST and you can use them to build the fancy things you want. My point wasn’t ‘you can write that sort as a one-off’. You can edit code written in a text-based programming language with a really fancy editor that immediately parses it to an AST and works with it as an AST, and only turns it into text when written to disk. I have no problem with that. But really you’re still editing text when using something like paredit.
Something like vim but where the text objects are ‘identifier’, ‘ast node’, ‘expression’, ‘statement’, ‘logical line of code’, ‘block’, etc. rather than ‘text between word separators’, ‘text between spaces’, ‘line’, etc. would be a useful thing. In fact, you could probably do this in vim. I have an extension I use that lets you modify quotes around things taking into account escaped quotes within, etc. That’d probably work way better if it had that default structure for normal text and then could be customised to actually take into account the proper grammar of particular programming languages for which that is supported.
What I’m concerned about is the idea that it’s a good idea to store code in a proprietary binary file format that’s different for every language, where you can’t use the same tools with multiple languages. And then having to reimplement the same basic functionality for every single language in separate IDEs for each, where everything works slightly differently.
I do find it useful that I can do ci( and vim will delete everything inside the nearest set of parentheses, properly taking into account nesting. So if I have (foo (hello 1 2 3) bar) and my cursor is on the a in bar, it’ll delete everything, even though the nearest ( and ) are beside hello and not foo. That kind of thing, more structured editing? I’m all for that.
Consequently, for pastebin or StackOverflow, you could just paste some text projection of the code, no problem. When it comes to VCS, well, the current situation is quite poor, so I’d welcome better tools there. For example, if there was a VCS that showed me diffs that take into account the semantics of the language (eg like this: https://www.semanticmerge.com), that would be pretty cool.
Ultimately I think if you have a recognised standardised text projection of your code, you might as well just make that the standardised format for it, then your fancy editor or editor plugin can parse it into the structures it needs. This helps ensure you can edit code over SSH, and have a variety of editors compatible with it, rather than just the single language-designer-provided IDE.
One of the nice things about git is that it stores snapshots internally rather than diffs. So if you have a language-specific tool that can produce diffs that are better due to being informed by the grammar of the language (avoiding the problem of adding a function and the diff being ‘added a new closing brace to the previous function then writing a new function except for a closing brace’, for example), then you can do that! Change the diff algorithm.
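Git already has a hook for exactly this: per-filetype diff drivers. A sketch of the wiring (the driver name “sexp” and the “sexp-diff” command are hypothetical; git invokes the command with the old and new versions of each changed file among its arguments, and shows whatever it prints):

```
# .gitattributes — route these files through a custom diff driver
*.lisp diff=sexp

# .git/config (or run: git config diff.sexp.command sexp-diff)
[diff "sexp"]
	command = sexp-diff
```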
For the rest of your objections, I offer this analogy: imagine that we only had ASCII pictures, and none of this incompatible JPG/PSD/PNG nonsense with few complicated tools. Then we could use generic tools for working with text to manipulate these files, and we wouldn’t be constrained in any way whether we wanted to create beautiful paintings or complex diagrams. That’s the thing about text!
Well I mean I do much prefer creating a graph by writing some code to emit DOT than by writing code to emit PNG. I did so just the other day in fact. http://rout.nz/nfa.svg. Thank god for graphviz, eh?
Note that there’s also for example farbfeld, and svg, for that matter: text-based formats for images. Just because it’s text underneath doesn’t mean it has to be rendered as ASCII art.
Cool, I’m glad we can agree that better tools would be good to have.
As far as the storage format, I don’t actually have a clear preference. What’s clearly needed is a separation of storage format and visual representation. If we had that, arguments about tabs vs spaces, indent size, let/in vs where, line length, private methods first or public methods first, vertical vs horizontal space (and on and on) could be nullified because everybody could arrange things however they like. Why can’t we have even such simple conveniences? And that’s just the low hanging fruit, there are far more interesting operations and ways of looking at source that could be implemented.
The other day there was a link to someone’s experiment (https://github.com/forest-lang/forest-compiler) where they use one of the text projections as the storage format. That might work, but it seems to me that the way parsing currently happens, there’s a lot of unnecessary work as whole files are constantly being reparsed because there is no structure to determine the relevant scope. It seems that controlling operations on the AST and knowing which branches are affected could be a lot more efficient. I’m sure there’s plenty of literature on this - I’ll have to look for it (and maybe I’m wrong about this).
What I’m concerned about is the idea that it’s a good idea to store code in a proprietary binary file format that’s different for every language, where you can’t use the same tools with multiple languages. And then having to reimplement the same basic functionality for every single language in separate IDEs for each, where everything works slightly differently.
I understand your concern, but this sounds exactly like the current state of affairs (other than really basic stuff like syntax highlighting maybe). There’s a separate language plugin (or plugins) for every combination of editor/IDE and language, and people keep rewriting all that stuff every time a new editor becomes popular, don’t they?
One of the nice things about git is that it stores snapshots internally rather than diffs.
Sure, we can glean a bit more information from a pair of snapshots, but still not much. It’s still impossible to track a combination of “rename + change definition”, or to treat changes in the order of definitions as a no-op, for example. Whereas if we were tracking changes in a more structured way (node renamed, sub-nodes modified etc.), it seems like we could say a lot more meaningful things about the evolution of the tree.
Thank god for graphviz, eh?
Perhaps the analogy was unclear. Being able to write a set of instructions to generate an image with a piece of software has nothing to do with having identical storage format and visual representation. If we approached images the same way we approach code, we would only have ASCII images as the output format, because that’s what is directly editable with text tools. Since you see the merits of PNG and SVG, you’re agreeing that there’s merit in separating internal/storage representation from the output representation.
What I’m concerned about is the idea that it’s a good idea to store code in a proprietary binary file format that’s different for every language
I might have missed something, but I didn’t see anyone proposing this.
In particular, my understanding of Luna is that the graphical and textual representations are actually isomorphic (i.e. one can be derived if given the other). This means we can think of the textual representation as being both a traditional text-based programming language and a “file format” for serialising the graphical programming language.
Likewise we can switch to a text view, use grep/sed/etc. as much as we like, then switch back to a graphical view if we want (assuming that the resulting text is syntactically valid).
Tools that improve navigation within textual source have existed for a long time. I’ve been using cscope to bounce around in C and Javascript source bases for as long as I can remember. The more static structure a language has, the easier it is to build these tools without ambiguity. The text source part isn’t really an issue – indeed it enables ad hoc tooling experiments to be built with existing text management tools; e.g., grep.
Those tools aren’t text, though. They’re other things that augment the experience beyond just using text, which becomes an incidental form of storage. Tools might also use ASTs, objects, data flows, constraints, and so on. They might use anything from direct representation to templates to synthesis.
I think the parent’s point was just that text by itself is far more limited than that. Each thing I mentioned is available in some programming environment with an advantage over text-driven development.
I think it’s wrong to say that the text storage is incidental. Line-oriented text files are about the lowest common denominator way we have to store data like this.
For starters, it’s effectively human-readable – you can lift the hood up and look at what’s underneath, understanding the effect that each individual character has on the result. Any more complicated structure, as would be generally required to have a more machine-first structured approach to program storage, is not going to have that property; at least not to the same extent.
If this thread demonstrates anything, it’s that we all have (at times, starkly!) different preferences for software engineering tools. Falling back on a textual representation allows us to avoid the need to seek consensus on a standard set of tools – I can use the editor and code manipulation tools that make sense to me, and you can stick to what makes sense to you. I think a lot of the UNIX philosophy posturing ends up being revisionist bunk, but the idea that text is a pretty universal interface for data interchange isn’t completely without merit.
The under-the-hood representation is binary-structured electricity that gets turned into human-readable text by parsing and display code. If you’re already parsing it and writing display code, you might just as well use a different encoding or structure. Text certainly has advantages as one encoding of many to have available. Plugins or input modules can take care of any conversions.
Text does often have tooling advantages in systems like UNIX built with it in mind, though.
I think it’s a reductionist argument for the good-enough, hard earned status quo. I think it can be valid, but only within a very narrow perspective - operational and short term.
To my mind, your position is equivalent to this: we should only have ASCII images, and we don’t need any of that PNG/JPG/PSD stuff with complicated specialised tools. Instead, we can use generic text tools to make CAD drawings, diagrams, paintings - whatever. All of those things can be perfectly represented in ASCII, and the text tools will not limit us in any way!
I want to search my code like a database, e.g. “show my where this identifier is used as a parameter to a function” - the tooling for text doesn’t support this. Structured tooling would be super useful.
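For comparison, once you are holding an AST rather than lines of text, that query is a few lines. A Python sketch (the query and the sample source are invented for illustration):

```python
import ast

def argument_uses(source, name):
    """Return the line numbers where `name` is passed as a
    positional argument to some function call."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            for arg in node.args:
                if isinstance(arg, ast.Name) and arg.id == name:
                    hits.append(node.lineno)
    return sorted(hits)

src = "foo(x)\nbar(y)\nx = 1\nbaz(x, y)\n"
print(argument_uses(src, "x"))  # [1, 4] — lines where x is a call argument
```

Note that the assignment on line 3 is correctly excluded, which a grep for `x` could not do.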
Many things can be “queried” with grep and regular expressions, which is also great for finding “similar occurrences” that need to be checked but are related only by some operators and function calls following one another. On the other hand, I’d definitely argue that IDEs at least have a tiny representation of the current source file for navigation, and that you can click some token and find its uses, definitions, implementations… But it only works if I disable low power mode. And with my 8 GB RAM MacBook I sometimes have to kill the IDE before running the program to make sure I can still use the machine at the same time.
Maybe if it wasn’t parsing and re-parsing massive amounts of text all the time, it would be more energy efficient…
Exactly. And it could extend beyond search; code could be manipulated and organised in more powerful ways. We still have rudimentary support for refactoring in most IDEs, and so we keep going through files and manually making structurally similar changes one by one, for no reason other than the inadequate underlying representation used for code.
I could be wrong and maybe this is impossible to implement in any kind of general way beyond the few specific examples I’ve thought of, but I find it strange that most people dismiss the very possibility of anything better despite the fact that it’s obviously difficult and inconvenient to work with textual source code.
The version of cscope that I use does things of that nature. The list of queries it supports:
Find this C symbol:
Find this global definition:
Find functions called by this function:
Find functions calling this function:
Find this text string:
Change this text string:
Find this egrep pattern:
Find this file:
Find files #including this file:
Find assignments to this symbol:
I use Find functions calling this function a lot, as well as Find assignments to this symbol. You could conceivably add more query types, and I’m certain there are other tools, less suited to my admittedly terminal-heavy aesthetic preferences, that offer more flexible code search and analysis.
The base structure of the software being textual doesn’t get in the way of this at all.
Software isn’t textual. We read the text into structures. Our tools should make these structures easier to work with. We need data structures other than text as the common format.
Can I take cscope’s output and filter down to “arguments where the identifiers are of even length”?
Compilers and interpreters use structured representations because those representations are more practical for the purposes of compiling and interpreting. It’s not a given that structured data is the most practical form for authoring. It might be. But what the compiler/interpreter does is not evidence of that.
I would also be interested on your thoughts about Lisp where the code is already structured data. This is an interesting property of Lisp but it does not seem to make it clearly easier to use.
but it does not seem to make it clearly easier to use.
Sure it does: it makes macros easier to write than in a language not designed like that. Once macros are easy, you can extend the language to express yourself more easily. This is seen in the DSLs of Common Lisp, Rebol, and Racket. I also always mention sklogic’s tool, since he builds DSLs for everything, with a Lisp underneath for when they don’t work.
Sure, but all of these tools (including IDEs) are complicated to implement, error-prone, and extremely underpowered. cscope is just a glorified grep unless I’m missing something (I haven’t used it, just looked it up). The fact that you bring it up as a good example attests to the fact that we’re still stuck somewhere near mid-twentieth century in terms of programming UI.
I bring it up as a good example because I use it all the time to great effect while working on large scale software projects. It is relatively simple to understand what it does, it’s been relatively reliable in my experience, and it helps a lot in understanding the code I work on. I’ve also tried exuberant ctags on occasion, and it’s been pretty neat as well.
I don’t feel stuck at all. In fact, I feel wary of people attempting to invalidate positive real world experiences with assertions that merely because something has been around for a long time that it’s not still a useful way to work.
Have you noticed that the Luna language has a dual representation, where each visual program has an immediate and easily editable text representation, and the same is true in the other direction as well? This is intended to keep the benefits of the text interface while adding the benefits of a visual representation. That’s actually the main idea behind Luna.
What about the power users who use things like Excel or Salesforce? These are GUIs perfectly tailored to specific tasks. A DJ working with a sound board certainly wouldn’t want a textual interface.
Textual interfaces are bad, but they are generic and easy to write. It’s a lot harder to make an intuitive GUI, let alone one that works on something as complex as a programming language. Idk if Luna is worthwhile, but text isn’t the best user interface possible imho
DJs use physical interfaces, and the GUI emulations of those physical interfaces are basically all terrible.
I’ve never heard of anyone liking Salesforce, I think that must be Stockholm Syndrome. Excel’s primary problem in my opinion is that it has essentially no way of seeing how data is flowing around. If something had the kind of ‘reactive’ nature of Excel while being text-based I’d much prefer that.
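As a toy illustration of what a text-based, Excel-ish “reactive” core could look like (purely a sketch; the pull-on-read design here is the simplest possible one, with no caching or change tracking):

```python
# Cells are named formulas; reading a cell pulls fresh values through
# its dependencies, so updating an input automatically updates
# everything derived from it.
cells = {}

def get(name):
    return cells[name]()  # recompute on every read (pull-based)

cells["a"] = lambda: 2
cells["b"] = lambda: 3
cells["c"] = lambda: get("a") + get("b")  # c depends on a and b

print(get("c"))          # 5
cells["a"] = lambda: 10  # change an input...
print(get("c"))          # ...and the dependent cell follows: 13
```

A real system would track the dependency graph explicitly, which is exactly the part Excel hides from you.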
Textual interfaces are excellent. While there are tasks that benefit from a GUI - image editing for example - in most cases GUIs are a nicer way of representing things to a new user but are bad for power users. I wouldn’t expect first year computer science students to use vim, as it’s not beginner-friendly, but it’s by far the best text editor out there in the hands of an experienced user.
I wouldn’t expect first year computer science students to use vim, as it’s not beginner-friendly, but it’s by far the best text editor out there in the hands of an experienced user.
I’d call myself an “experienced user” of vim. I’ve written extensions, given workshops, and even written a language autoindent plugin, which anyone who’s done it knows is like shoving nails through your eyeballs. About once a year I get fed up with the limitations of text-only programming and try to find a good visual IDE, only to switch back when I can’t find any. Just because vim is the best we currently have doesn’t mean it’s actually any good. We deserve better.
(For the record, vim isn’t beginner-unfriendly because it’s text only. It’s beginner-unfriendly because its UI is terrible and inconsistent and the features are all undiscoverable.)
Most people don’t bother to learn vimscript properly, treating it much like people treated Javascript for years: a bunch of disparate bits they’ve picked up over time, with no unifying core. But once you actually learn it, it becomes much easier to use and more consistent. The difference between expressions and commands becomes sensible instead of seeming like an inconsistency.
I never get fed up with the limitations of text-only programming, because I don’t think they exist. Could you elaborate on what you are saying those limitations are?
And I totally, 100% disagree with any claim that vim’s UI is bad or inconsistent. On the contrary, it’s extremely consistent. It’s not a bunch of little individual inconsistent commands, it’s motions and text objects and such. It has extensive and well-written help. Compared to any other IDE I’ve used (a lot), it’s way more consistent. Every time I use a Mac program I’m surprised at how ad-hoc the random combinations of letters for shortcuts are. And everything requires modifier keys, which are written with ridiculous indecipherable symbols instead of ‘Ctrl’ ‘Shift’ ‘Alt’ etc. Given that Mac is generally considered to be very easy to use, I don’t think typical general consensus on ease of use is very instructive.
Bret Victor explains the persistence of textual languages as resistance to change, drawing an equivalence between users of textual languages now and assembly programmers who scoffed at the first higher-level programming languages. But this thread is evidence that at least some people are interested in using a language that isn’t text-based. Not everyone is fairly characterized by Bret Victor’s generalization. So then why hasn’t that alternative emerged? There are plenty of niche languages that address a minority preference with reasonable rates of adoption. With the exception of HyperCard, I can’t think of a viable graphical programming language. Even Realtalk, the language that runs Dynamicland (Bret Victor’s current focus), is text-based, being a superset of Lua. I keep hearing about how text-based languages are old-fashioned and should die out, but I never hear anything insightful about why this hasn’t happened naturally. I’m not denying that there are opportunities for big innovation, but “make a visual programming language” seems like an increasingly naive or simplistic approach.
I think it has to do with the malleability of text. There’s a basic set of symbols and one way to arrange them (sequentially.) Almost any problem can be encoded that way. Emacs’ excellent org-mode is a testament to the virtue of malleability.
Excel also has that characteristic. Many, many kinds of problems can be encoded in rectangles of text with formulas. (Though I might note that having more ways to arrange things allows new kinds of errors, as evidenced by the growing cluster of Excel features for tracing dependencies and finding errors.)
Graphical languages are way less malleable. The language creator decides what elements, relations, and constraints are allowed. None of them let me redefine what a rectangle represents, or what relations are allowed between them. I think that’s why these languages can be great at solving one class of problem, but a different class of problem seems to require a totally different graphical language.
My suspicion is that it’s because graphical languages merge functionality and aesthetics, meaning you have to think very, VERY hard about UI/UX and graphic design. You need to be doing that from the start to have a hope of it working out.
Atlassian is hiring anybody interested in functional programming in Bengaluru. I’ll be available for training in any FP topics you want to learn. Haskell and Scala experience are beneficial but not necessary.
We build Docker images from Nix then deploy them to Atlassian’s internal PaaS.
The benefits we get:
The problems we have:
I think the problems are mostly solvable and the benefits can’t be obtained from any existing tools.
It would be helpful for me to see an example of this (Nix->Docker->PaaS) with an example app, if you’re looking for things to write about on your blog.
This shows the Nix and Docker tooling: http://lethalman.blogspot.com/2016/04/cheap-docker-images-with-nix_15.html
The PaaS part is mostly a docker push to a repo.
KeePass has clients that work on the three operating systems in question, and I’ve had good luck using Syncthing to share the password file between computers; since the database is encrypted, any good sync utility can work with it.
I’ve used KeePassX together with Syncthing on multiple Ubuntu and Android devices for two years now. By now I have three duplicate conflict files, which I keep around because I have no idea what the difference between the files is. Once I had to retrieve a password from such a conflict file as it was missing from the main one.
Not perfect, but works.
Duclare, using ssh instead of SyncThing would certainly work since the database is just a file. I prefer SyncThing because of convenience.
Duclare, using ssh instead of SyncThing would certainly work since the database is just a file.
Ideally it’d be automated and integrated into the password manager though. Keepass2android does support it, but it does not support passwordless login, and I don’t recall it ever showing me the server’s fingerprint and asking if that’s OK. So it’s automatically logging in with a password to a host run by who knows whom. Terribly insecure.
I had the same situation. 3 conflict files and merging is a pain. I’ve switched to Pass instead now.
I’ve been using KeePass for a few years now too. I tried other password managers in the meantime but never got quite satisfied, not even with pass, though that one was just straight up annoying.
I’ve had a few conflicts over the years, but usually Nextcloud is rather good at avoiding conflicts here and KPXC handles it very well. I think Syncthing might cause more problems, as someone else noted, since nodes might take a while to sync up.
Nix is one of those tools where you don’t know what you aren’t getting until you get it. There are so many things wrong with this post, but I only know that because I spent weeks wrestling with lots of those issues myself.
You basically need to read all the nix pills (https://nixos.org/nixos/nix-pills/), the nix manual, the nixpkgs manual and the nixos manual in a loop gradually filling in what is going on… which takes a long time.
Nix is very confusing at first, but enables things that you would not have thought possible once you know what you are doing. The core people don’t seem to evangelize much because it is just one of those tools that solved their problems so well, they don’t have to care about the outside world anymore.
I use nixos for my laptop, desktop and a few servers, have all my machines config under version control and can roll the machines back to any version whenever I want, remote administer them, build an install on one computer, test it in a VM and then ship it with a single command to another machine. I won’t go back to another OS despite there being room for improvement, because no other OS comes close in terms of what you can do (my path has been windows -> ubuntu -> arch linux -> freebsd -> openbsd -> nixos).
I use NixOS on everything and completely agree. It’s a massive investment. It was worth it for me, but it shouldn’t have to be a massive investment. Need better tooling and docs.
Yeah, there are lots of things I wish I could explain, but the explanations take a large investment. Take, for example, the complaint about making a new language instead of using something existing… It seems sensible on the surface, until you understand deeply enough to know why laziness is needed, and why features like the pervasive use of interpolation to generate build scripts matter… Once you understand those, you know why a new language was made.
The lack of tooling IS a valid complaint, and the fact the language isn’t statically typed could also be a valid complaint, but the community is growing despite all those issues, which is a good sign.
I’m hoping https://github.com/haskell-nix/hnix will help with that point, and the tooling.
You basically need to read all the nix pills (https://nixos.org/nixos/nix-pills/), the nix manual, the nixpkgs manual and the nixos manual in a loop gradually filling in what is going on… which takes a long time.
I’ve tried reading all of this but I found it all horribly confusing and frustrating — until I read the original thesis on it, which I think is (perhaps surprisingly) still the best resource for learning how nix works. It’s still a pretty big investment to read, but imho it’s at the very least a much less frustrating experience than bouncing from docs to docs.
(I wonder if the same is true of the NixOS paper?)
How do you manage secrets in configuration files? Passwords, ssh keys, tls certs and so on. If you put them into a nix-store they must be world-readable, right?
One could put a reference to files outside the store in configuration files, but then you lose a bit of the determinism of NixOS, and with third-party software it’s not always easily possible to load e.g. passwords from an external file anyway.
Besides the learning curve, that was the single big problem which kept me from diving deeper into the nix ecosystem so far.
You are right, no passwords should ever go in the nix store.
The encryption key for my backup script is in a private, root-owned file I put under /secrets/. This file is loaded in my cron job, so the nix store simply references the secret but doesn’t contain it. The secrets dir isn’t under version control, but it is backed up with encrypted backups.
Every daemon with secret config I have seen on nixos has a “password file” option that does the same thing.
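As a sketch of the pattern (the service and option names below are illustrative placeholders, not a real NixOS module):

```nix
# Illustrative only: "mydaemon" and its options are made-up names.
# The nix store ends up containing only the *path* below, never the secret.
services.mydaemon = {
  enable = true;
  # /secrets/mydaemon.pass is a private, root-owned file (mode 0600),
  # living outside the store and outside version control.
  passwordFile = "/secrets/mydaemon.pass";
};
```

The daemon then reads the password at runtime, so nothing world-readable ever holds it.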
How do you manage secrets in configuration files?
For my desktop machine I use pass with a hardware key. E.g. nix (home-manager) generates an .mbsyncrc with
PassCmd "pass Mail/magnolia"
For remote machines, I use nixop’s method for keeping keys out of the store:
Nix is one of those tools where you don’t know what you aren’t getting until you get it. There are so many things wrong with this post
I have to disagree, but not with the second sentence - I was sure as I wrote the post that it was full of misconceptions and probably outright errors. I wrote it in part to capture those in the hopes that someone can use them to improve the docs.
But to disagree with the first sentence, I was keenly aware through the learning and writing that I was missing fundamental concepts and struggling to fill the gaps with pieces from other tools that didn’t quite fit. If there is indeed a whole ‘nother level of unknown unknowns, well, that’s pretty disheartening to me.
I can’t speak for your experience, but that’s how it was for me anyway. On the plus side, it also meant nix solved more of the problems I was having once I understood it better. I even thought nix was overcomplicated, to the point that I started writing my own simpler package manager, only to find nix had solved problems I ran into before I knew what they were.
Some flags also enable others. There’s a list here:
Though none of them would really change the analysis!
Yes, I use generative testing everywhere. In Haskell I use Hedgehog and in Scala I use ScalaCheck.
At work we use generative testing for things like JSON/MongoDB encoders/decoders, functional optics (e.g. lenses) where we want to test laws and just testing application logic, in general.
I’ve blogged about some of my generative testing work here:
https://developers.atlassian.com/blog/2016/03/programming-with-algebra/
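For a flavour of what that looks like, here’s a minimal Hedgehog property (a sketch of a simple round-trip law, not code from the linked post):

```haskell
import Hedgehog
import qualified Hedgehog.Gen as Gen
import qualified Hedgehog.Range as Range

-- Generative test of a law: reversing a list twice is the identity.
prop_reverse :: Property
prop_reverse = property $ do
  xs <- forAll (Gen.list (Range.linear 0 100) Gen.alpha)
  reverse (reverse xs) === xs

main :: IO Bool
main = check prop_reverse
```

The same shape works for encoder/decoder round-trips: generate a value, encode it, decode it, and assert you get the original back.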
Yeah, I know someone who runs a keyserver and they are getting absolutely sick of responding to the GDPR troll emails.
Love the idea of using ActivityPub (the same technology involved in Mastodon) for keyservers. That’s really smart!
Offtopic: Excuse me.
I think it depends on some conditions, so not everybody is going to see this every time. But when I click on medium links I tend to get this huge dialog box come up over the entire page saying something about registering. It’s really annoying. I wish we could host articles somewhere that doesn’t do this.
My opinion is that links should be links to some content. Not links to some kind of annoyware that I have to click past to get to the real article.
Could you give an example? That sounds like a pleasant improvement, but I don’t know exactly what you mean by a cached link.
I started running uMatrix and added rules to block all 1st party JS by default. It does take a while to white list things, yes, but it’s amazing when you start to see how many sites use Javascript for stupid shit. Imgur requires Javascript to view images! So do all Square Space sites (it’s for those fancy hover-over zoom boxes).
As a nice side effect, I rarely ever get paywall modals. If the article doesn’t show, I typically plug it into archive.is rather than enable javascript when I shouldn’t have to.
I do this as well, but with Medium it’s a choice between blocking the pop-up and getting to see the article images.
I think if you check the ‘Spoof <noscript> tags’ option in uMatrix then you’ll be able to see the images.
How timely! Someone at the office just shared this with me today: http://makemediumreadable.com
From what I can see, the popup is just a begging bowl, there’s actually no paywall or regwall involved.
I just click the little X in the top right corner of the popup.
But I do think that anyone who likes to blog more than a couple of times a year should just get a domain, a VPS and some blog software. It helps decentralization.
I use the kill sticky bookmarklet to dismiss overlays such as the one on medium.com. And yes, then I have to refresh the page to get the scroll to work again.
On other paywall sites when I can’t scroll, (perhaps because I removed some paywall overlay to get at the content below,) I’m able to restore scrolling by finding the overflow-x CSS property and altering or removing it. …Though, that didn’t work for me just now on medium.com.
Actually, it’s the overflow: hidden; CSS that I remove to get pages to scroll after removing some sticky div!
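For anyone curious, the whole approach fits in a few lines of browser JavaScript (a sketch along those lines, not the actual kill-sticky bookmarklet):

```javascript
// Remove fixed/sticky overlays, then restore scrolling on the page.
for (const el of Array.from(document.querySelectorAll('*'))) {
  const pos = getComputedStyle(el).position;
  if (pos === 'fixed' || pos === 'sticky') el.remove();
}
document.documentElement.style.overflow = 'visible';
document.body.style.overflow = 'visible';
```

Paste it into the browser console (or wrap it in `javascript:(function(){…})()` as a bookmarklet) on an offending page.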
I run an SKS keyserver, have some patches in the codebase, wrote the operations documents in the wiki, etc.
Each keyserver is run by volunteers, peering with each other to exchange keys. The design was based around “protection against government attempts to censor keys”, dating from the first crypto wars. They’re immutable append-only logs, and the design approach is probably about dead. Each keyserver operator has their own policies.
I am a US citizen, living in the USA, with a keyserver hosted in the USA. My server’s privacy statement is at https://sks.spodhuis.org/#privacy but that does not cover anyone else running keyservers. [update: I’ve taken my keyserver down, copy/paste of former privacy policy at: https://gist.github.com/philpennock/0635864d34a323aa366b0c30c7360972 ]
You don’t know who is running keyservers. It’s “highly likely” that at least one nation has some acronym agency running one, at some kind of arms-length distance: it’s an easy and cheap way to get metadata about who wants to communicate privately with whom, where you get the logs because folks choose to send traffic to you as a service operator. I went into a little more depth on this over at http://www.openwall.com/lists/oss-security/2017/12/10/1
Thanks for this info.
Fundamentally, GDPR is about giving the right to individuals to censor content related to themselves.
A system set out to thwart any censorship will fall afoul of GDPR, based on this interpretation
However, people who use a keyserver are presumably A-OK with associating their info with an append-only immutable system. Sadly, GDPR doesn’t really take this use case into account (I think; I am not a lawyer).
I think what’s important to note about GDPR is that there’s an authority in each EU country that’s responsible for handling complaints. Someone might try to troll keyserver sites by attempting to remove their info, but they will have to make their case to this authority. Hopefully this authority will read the rules of the keyserver and decide that the complainant has no real case based on the stated goals of the keyserver site… or they’ll take this as a golden opportunity to kneecap (part of) secure communications.
I still think GDPR in general is a good idea - it treats personal info as toxic waste that has to be handled carefully, not as a valuable commodity to be sold to the highest bidder. Unfortunately it will cause damage in edge cases, like this.
gerikson, you make really good points there about the GDPR.
Consenting people are not the whole focus here, though; it’s about current and potential abuse of the servers, and about people who have not consented to their information being posted and who have no way to remove it.
The Supervisory Authorities won’t ignore that; this is why the keyservers need to change, to prevent further abuse and their extinction.
They also won’t give this case special consideration, just like the recent ICANN case, where ICANN wanted storing your information publicly with your domain to be a requirement and was rejected outright. The keyservers are not necessary for the functioning of the keys you upload, and a big part of the GDPR is processing data only as long as necessary.
Someone recently made a point about the below term non-repudiation.
In digital security, non-repudiation means:
A service that provides proof of the integrity and origin of data.
An authentication that can be asserted to be genuine with high assurance.
Keyservers don’t do this! You can have the same email address as anyone else, and even the maintainers and creator of the SKS keyservers state this, recommending that you check through other means, such as by telephone or in person, to see if keys are what they appear to be.
I also don’t think this is an edge case; I think it’s a wake-up call to rethink the design of the software and catch up with the rest of the world, quickly.
Lastly, I don’t approve of trolling; if you’re doing it just for the sake of doing it, DON’T. If you genuinely feel the need to submit a “right to erasure” request because you did not consent to having your data published, please do.
Thank you for the link: http://www.openwall.com/lists/oss-security/2017/12/10/1. It’s a fantastic read and makes some really good points.
It’s easy for anyone to get hold of recent dumps from the SKS servers; just yesterday I hunted through a recent dump of 5-million-plus keys looking for interesting data. I’ll be writing an article about it soon.
I totally agree; it has been bothering me as well, and I am in the middle of considering starting up my own self-hosted blog. I also don’t like Medium’s method of charging for access to people’s stories without giving them anything.
I’m thinking of setting up a blog platform, like Medium, but totally free of bullshit for both the readers and the writers. Though the authors pay a small fee to host their blog (it’s a personal website/blog engine, as opposed to Medium which is much more public and community-like).
If that could be something that interests you, let me know and I’ll let you know :)
Correction: it turns out you can get paid if you sign up for their partner program, but I think it requires approval and such.
hey @pushcx, is there a feature where we can prune a comment branch and graft it on to another branch? asking for a friend. Certainly not a high priority feature.
No, but it’s on my list of potential features to consider when Lobsters gets several times the comments it does now. For now the ‘off-topic’ votes do OK at prompting people to start new top-level threads, but I feel like I’m seeing a slow increase in threads where promoting a branch to a top-level comment would be useful enough to justify the disruption.
[Comment removed by author]
Haskell has no syntax in the core language to sequence one expression after another.
It has quite a few alternatives actually. Depending what you mean by “syntax in the core language”, there are some things with specific grammar rules in the Haskell98 and Haskell2010 standards; there are some “userspace” functions/operators (i.e. their syntax is a special case of functions/operators) which are nevertheless mandated by those standards; there are some things which the de facto implementation GHC supports (e.g. via commandline flags); etc. Here are a few:
a : b is the expression a followed by the sequence of expressions b (all of the same type)
a ++ b is the sequence a followed by the sequence b (again, of the same type)
[a, b] is a sequence of the expression a followed by the expression b (of the same type)
(a, b) is a sequence of the expression a followed by the expression b (can be different types)
f . g is the expression g followed by the expression f (input and output types must coincide)
g >>> f is the expression g followed by the expression f (same as above but with their order flipped)
a -< b is the expression b followed by the expression a (must have compatible input/output types)
do { a; b } is the expression a followed by the expression b
f <$> x is the expression x followed by the expression f (must have compatible input/output types)
These all define a specific order on their sub-expressions. They’re not all identical, but they follow roughly similar usage:
a : b tells a Prolog-style interpreter to perform the computation/branch a before trying those in b
a ++ b generalises the above to multiple computations (the above is equivalent to [a] ++ b)
[a, b] is a specialisation of the above, equivalent to [a] ++ [b]
(a, b) generalises [a, b] to allow different types. We can use this to implement a linear sequence (it’s essentially how GHC implements IO). Somewhat surprisingly, and completely separately to anything IO related, it also represents parallel composition
f . g is a rather general form of composition
g >>> f is the same as above
a -< b is part of arrow notation and desugars to a mixture of sequential and parallel composition (using lambdas, >>>, (a, b), etc.)
do { a; b } is a generalisation of b . a, corresponding to join (const b <$> a), which is the most similar form to the ; operator of other languages you refer to: both because it has the same syntax (an infix ; operator) and a similar meaning (generalised composition). This can also be written as a >> b, and is related to a >>= b and a >=> b, which are also built-in sequencing syntax but didn’t seem worth their own entries.
f <$> x is generalised application of f to x. That generality also makes it a composition/pipeline operator
The reason I’ve listed all these isn’t so much to say “look, there are some!”; but more to point out how many different meanings the word “sequence” can have (a list of values, a composition of functions, a temporal ordering on side-effects, etc.); how many different implementations of sequencing we can build; and, most crucially, that they all seem to overlap and intermingle (e.g. the blurring of “container of values” with “context for computation”; how we can generalise a single thing like “composition” in multiple ways; how generalising seemingly-separate ideas ends up at the same result; etc.). This tells us that there’s something important lurking here. I don’t think investigating and harnessing this makes someone a wanker.
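To make a few of those forms concrete, here’s a tiny runnable sketch:

```haskell
main :: IO ()
main = do
  print ([1] ++ [2, 3])     -- sequence a followed by sequence b: [1,2,3]
  print (((+1) . (*2)) 5)   -- composition: (*2) runs "first", then (+1): 11
  print (show <$> Just 42)  -- generalised application through a context: Just "42"
  print (Just 1 >> Just 2)  -- do { a; b } style sequencing in Maybe: Just 2
```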
[Comment removed by author]
I’m an application programmer at Atlassian. A monad is a critical tool for code reuse in our applications. It’s not about PLT research or even evaluation order.
Monads only matter for representing sequential execution in extremely constrained languages, like haskell. (Some people believe monads are useful for other things, but I’m not interested in that debate, I’m just talking about where monads are certainly important.)
This is not true. Monads are critical for code reuse. I’ve used the concept of a monad in many areas, but explicitly and critically in Scala.
[Comment removed by author]
[Comment removed by author]
[Comment removed by author]
bindIO :: IO a -> (a -> IO b) -> IO b
bindIO (IO m) k = IO (\ s -> case m s of (# new_s, a #) -> unIO (k a) new_s)
[Comment removed by author]
I’m not being pedantic and your point is not clear. IO can be sequenced, this sequence can be abstracted, code reuse is what is gained from the abstraction. That is the total relationship between IO and monad.
[Comment removed by author]
Monad is about much more than IO. IO is about much more than monad.
Objects and classes have a different relationship.
[Comment removed by author]
This is not being pedantic, it is a very critical part of understanding monad and IO. I teach Haskell at work and have successfully corrected this mistake many times.
Suppose there existed a function that reversed a list. A few fruit grocers used this function to reverse a list of oranges. They also sometimes use it to reverse lists of apples. Other things happened with this function also, but we only know of these specific circumstances.
Suppose then someone came along and proclaimed, “the reverse function is all about fruit!” then they wrote an article about this new apparent fact. Would you be able to clearly see a categorical error occurring here? What would you say to the article author? Would you reverse a list of list of functions right in front of their face? Or reverse a list of wiggley-woos? What if that person then replied, “you’re just being pedantic”? Where would you take the discussion from here? Would you be the meany person who informs them that they have almost no grasp of the subject matter? It’s quite a bind to be in :)
That’s exactly the error being made here (among some others) and it is a very obvious error only to those who have a concrete understanding of what monad means. It’s not pedantic. It’s not “avoiding a debate.” It’s a significant categorical error, and it is very common among beginners. It limits any further understanding so significantly, that it is better to have no knowledge at all. This specific error is also commonly repeated among beginners, as they struggle and aspire to understand the subject, and to the point that it becomes very difficult to stamp out, even for many of those who know the subject well. The ultimate consequence is a net overall lack of progress in understanding, for absolutely everyone.
Who wants to contribute to that?
Haskell has no syntax in the core language to sequence one expression after another.
Yes it does: do-notation. You can even use semicolons if you don’t like newlines. It’s the syntax to sequence expressions which can be sequenced. You can’t use semicolons to sequence things that can’t be sequenced in other languages, either.
And why talk about Maybe but not MonadPlus, free monads, transformers…? All you know is Maybe and IO? Of course it’s boring to you. Instead of writing blog-sized posts about how blog tutorials don’t teach you everything, you could read up, but oh well, you do you.
By changing each stage to take and return a fat outer type holding the entire context, you can just as easily achieve the cool pipeline effect by defining >>= as function composition rather than bind.
With bind you don’t have to change each stage.
Understanding how to write programs which allow change without triggering catastrophic rewrites is pretty useful.
Understanding why some programs are easy to modify is pretty useful.
Having language to discuss why some programs are easy to modify and others are not; also pretty useful.
The original post is about how thinking in terms of Monads can make a program which is hard to modify into a program which is easy to modify, it’s a useful post.
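A small sketch of that point: with >>=, each stage stays a plain a -> Maybe b, and adding or removing a stage doesn’t force rewriting the others.

```haskell
-- Each stage only knows about plain values; >>= threads the Maybe context.
half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

main :: IO ()
main = do
  print (Just 20 >>= half >>= half)           -- Just 5
  print (Just 20 >>= half >>= half >>= half)  -- Nothing, since 5 is odd
```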
Some people believe monads are useful for other things, but I’m not interested in that debate, I’m just talking about where monads are certainly important.
First of all, by far the most popular monadic interface in modern software development is not Haskell’s IO type, it’s JavaScript’s Promise type, together with similar systems for writing asynchronous logic in other languages. If we’re talking about use cases where monads are “certainly important,” I think it’s worth mentioning the large number of programmers writing monadic code on a daily basis in languages which certainly do not lack native support for semicolons.
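For instance (fetchUser and fetchScore below are hypothetical stand-ins, not a real API), .then plays exactly the role of monadic bind:

```javascript
// Hypothetical stand-ins for real asynchronous calls.
const fetchUser = id => Promise.resolve({ id, name: "ada" });
const fetchScore = user => Promise.resolve(user.name.length);

// .then takes the unwrapped value and returns a new Promise: that's bind.
fetchUser(1)
  .then(fetchScore)
  .then(score => console.log(score)); // prints 3
```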
I love monads and find they’re actually among the most useful and important tools I’ve ever acquired as a programmer, but I agree that the PLT and functional programming communities could do a better job communicating exactly why monads are actually important. The use of monads as “extendable semicolons” does have some narrow but critically important use cases, such as asynchronous code, exception handling, and recursive backtracking, but I actually believe that the exotic forms of control flow you can express with monads is of only secondary importance.
In my experience, the most important consequence of modeling side effects with monads is that it allows you to reliably distinguish between pure and impure functions. The features which you claim make Haskell “extremely constrained” in fact give it an entirely new dimension of expressive power, because whereas most languages only have one form of function, which is implicitly allowed to perform side effects, Haskell has two forms of functions: functions which may perform side effects, and functions which may not. Given that a function’s interaction with an environment is an extremely important aspect of its semantics, this is information that you would be informally documenting and keeping track of anyway; Haskell just allows you document it in a precise, machine-checked format with great integration with the compiler.
This immediately allows you to separate functions which perform IO from those which do not, but that’s not actually the coolest part. The coolest part is that once you start defining your own monad types, you can express much much more precise and interesting classes of side effects, like “a function that interacts only with a random number generator” or “a function that interacts only with my database state” or “a function which interacts only with a sequential-identifier generator.” This is the real power of monads: the ability to make fine-grained guarantees about the data dependencies and side effects of a function given only its type signature.
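As a sketch of that last idea (using State from the mtl package), the type alone guarantees the only effect is drawing sequential Ints:

```haskell
import Control.Monad.State

-- The type says: no IO, no randomness, only access to an Int counter.
fresh :: State Int Int
fresh = do
  n <- get
  put (n + 1)
  pure n

label2 :: State Int (Int, Int)
label2 = (,) <$> fresh <*> fresh

main :: IO ()
main = print (runState label2 0)  -- ((0,1),2)
```

Any caller can see from the signature that label2 cannot touch the filesystem, the network, or anything else.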
In my experience, the most important consequence of modeling side effects with monads is that it allows you to reliably distinguish between pure and impure functions. The features which you claim make Haskell “extremely constrained” in fact give it an entirely new dimension of expressive power, because whereas most languages only have one form of function, which is implicitly allowed to perform side effects, Haskell has two forms of functions: functions which may perform side effects, and functions which may not.
One nit here: the idea of separating pure and effectful operations is actually pretty old. You see this in Pascal and Ada and the like, where “functions” are pure and “procedures” are effectful. This is baked into the core language semantics. The different terms fell out of favor when C/C++ got big, and now people don’t really distinguish them anymore. But there’s no reason we couldn’t start doing that again, aside from inertia and stuff.
To my understanding you also don’t need monads to separate effects in pure FP, either; Eff has first-class syntax for effect handlers and takes measures to distinguish the theory from monads.
To my understanding you also don’t need monads to separate effects in pure FP, either;
Well, you need something. Proposing to have an effect system without monads is like proposing to do security without passwords: there are some interesting possibilities there, but you have to explain how you’re going to solve the problems that monads solve.
Your Eff paper refers to papers on effect tensors to justify the claim that effects can be easier to combine than monads, but then doesn’t seem to actually model those tensors? Their example of what combining effects looks like in practice seems to end in just letting them be composed in the same order that the primary code is composed, when the whole point of a pure language is to be able to get away from that. So while the language is pure at the level of individual effects, it seems to be effectively impure in terms of how composition of effects behaves?
[Comment removed by author]
It’s not specific to Javascript. The type Task<T> is the same interface in C#, Future<V> is the same thing in Java. The concept is generally useful in all languages. It’s useful even in Haskell, where Async allows the introduction of explicit concurrency, even though the runtime automatically does the work that Task<T> is mostly for in C# (avoiding blocking on threads).
In addition async/await is a monadic syntax, which is generally useful (as evidenced by it now being in C#, F#, Scala, Javascript, Python, and soon C++).
(LINQ in C# is another generally-useful monadic syntax, which is used for just about everything except doing IO and sequencing.)
I’ve tried several Linux distributions in the last 10 years (Ubuntu, Debian, Fedora, Arch, NixOS) and honestly, NixOS is way above the others as a developer-friendly OS.
I love being able to drop in a shell having the package or the lib I want and test things. In comparison, Arch feels like a totally standard linux distribution.
I’ve been using NixOS everywhere around me for about 3.5 years and totally agree. I now work on Atlassian Marketplace, which is deployed using Docker images that are built from Nix.
The nix model definitely seems like a great way to build docker images. Reproducible, minimal, flexible, it seems like a perfect fit.
That sounds strange to me; if you’re already set up to use nix, why bother with docker? Maybe I’m overlooking some things?
Because Atlassian has an internal PaaS which requires Docker. I use NixOS for everything but deploy our systems to that.
I’m not using NixOS. And it wouldn’t be for running locally, it would be for deploying on something like Kubernetes. But nix is a flexible, useful tool even when NixOS isn’t involved.
How can it be both good as my workstation, and good as a minimal container runtime?
Not for the sake of being argumentative (I have yet to try Nix), just confused because those two seem like opposites.
Flexible is the key adjective that makes it work for both. Nix allows you to install a package tree into a target directory, using binary packages. Analogous to debootstrap / kickstart. But it also lets you ad hoc add / update / remove packages in that directory like apt / yum does on a running system. It can also do all this according to a package spec a la bundler / npm / maven.
And it can do all this live on a workstation too! So that’s why it works for both.
It also does a great job of keeping things clean by installing packages into versioned directories, and symlinking the active package into the base system. Similar to what homebrew does on MacOS. That makes cleaning old versions a breeze, and allows multiple versions to be installed, which nix lets you switch between easily.
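For a flavour of what that looks like day to day (package names here are just examples):

```shell
# Drop into a throwaway shell with extra tools, without installing anything:
$ nix-shell -p python3 ripgrep

# Install into your user profile, then undo it:
$ nix-env -iA nixpkgs.jq
$ nix-env --rollback

# List previous states of your profile:
$ nix-env --list-generations
```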
That being said, I run more conventional distros to keep familiar with the server installs used by customers. I find the utility of that expertise greater than any utility NixOS provides. But since nix is also a standalone tool, it works on less interesting distros and can be used for homedir installs or building docker images.
That being said, I run more conventional distros to keep familiar with the server installs used by customers.
Interesting. I find that I do this as well. I.e. I use a minimal vimrc, bash (not zsh/fish), etc., to not confuse my muscle memory of my day job (which is bog-standard Linux/Debian sysadmin).
Here’s a good blog post about using Nix to build Docker images: http://lethalman.blogspot.com/2016/04/cheap-docker-images-with-nix_15.html
So all the images are built from the nixos base image? This is the first big company I’ve heard of that’s using nixos+docker!
Speaking as someone who uses arch on all my developer machines: arch is a horrible developer OS, and I only use it because I know it better than other distros.
It was good 5-10 years ago (or I was just less sensitive back then), but now pacman -Syu is almost guaranteed to break or change something for the worse, so I never update, which means I can never install any new software because everything is dynamically linked against the newest library versions. And since the arch way is to be bleeding edge all the time, asking things like “is there an easy way to roll back an update because it broke a bunch of stuff and brought no improvements” gets you laughed out the door.
I’m actually finding myself using windows more now, because I can easily update individual pieces of software without risking anything else breaking.
@Nix people: does NixOS solve this? I believe it does but I haven’t had a good look at it yet.
Yes, Nix solves the “rollback” problem, and it does it for your entire OS not just packages installed (config files and all).
With Nix you can also have different versions of tools installed at the same time, without the standard python3.6/python2.7 binary-name thing most places do: just drop into a new nix-shell and install the one you want, and in that shell that’s what you have. There is so much more. I use FreeBSD now because I just like it more overall, but I really miss Nix.
EDIT: Note, FreeBSD solves the rollback problem as well, just differently. In FreeBSD if you’re using ZFS, just create a boot environment before the upgrade and if the upgrade fails, rollback to the pre-upgrade boot environment.
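On the NixOS side, the rollback is a single command:

```shell
# Switch the whole system (packages *and* config) back one generation:
$ sudo nixos-rebuild switch --rollback

# Or inspect the available system generations first:
$ sudo nix-env --list-generations --profile /nix/var/nix/profiles/system
```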
Being a biased Arch Developer, I rarely have Arch break when updating. Sometimes I have to recompile our own C++ stack due to soname bumps but for the rest it’s stable for me.
For Arch there is indeed no rollback mechanism, although we do provide an archive repository with old versions of packages. Another option would be BTRFS/ZFS snapshots. I believe the general Arch opinion is instead of rolling back fixing the actual issue at hand is more important.
I believe the general Arch opinion is instead of rolling back fixing the actual issue at hand is more important.
I can see some people might value that perspective. For me, I like the ability to plan when I will solve a problem. For example I upgraded to the latest CURRENT in FreeBSD the other day and it broke. But I was about to start my work day so I just rolled back and I’ll figure out when I have time to address it. As all things, depends on one’s personality what they prefer to do.
For me, I like the ability to plan when I will solve a problem.
But on stable distros you don’t even have that choice. Ubuntu 16.04, (and 18.04 as well I believe) ships an ncurses version that only supports up to 3 mouse buttons for ABI stability or something. So now if I want to use the scroll wheel up, I have to rebuild everything myself and maintain some makeshift local software repository.
And that’s not an isolated case, from a quick glance at my $dayjob workstation, I’ve had to build locally the following: cquery, gdb, ncurses, kakoune, ninja, git, clang and other various utilities. Just because the packaged versions are ancient and missing useful features.
On the other hand, I’ve never had to do any of this on my Arch box because the packaged software is much closer to upstream. And if an update breaks things, I can also roll back from that update until I have time to fix things.
I don’t use Ubuntu and I try to avoid Linux, in general. I’m certainly not saying one should use Ubuntu.
And if an update breaks things, I can also roll back from that update until I have time to fix things.
Several people here said that Arch doesn’t really support rollback which is what I was responding to. If it supports rollback, great. That means you can choose when to solve a problem.
I don’t use Ubuntu and I try to avoid Linux, in general. I’m certainly not saying one should use Ubuntu.
Ok, but that’s a problem inherent to stable distros, and it gets worse the more stable they are.
Several people here said that Arch doesn’t really support rollback
It does, pacman keeps local copies of previous versions for each package installed. If things break, you can look at the log and just let pacman install the local package.
It does, pacman keeps local copies of previous versions for each package installed. If things break, you can look at the log and just let pacman install the local package.
Your description makes it sound like pacman doesn’t support rollbacks, but that you can get that behaviour if you have to and are clever enough. Those seem like very different things to me.
Also, what you said about stable distros doesn’t match my experience with FreeBSD. FreeBSD is ‘stable’, yet its ports and packages tend to be fairly up to date (or at least I rarely run into outdated ones, with a few exceptions).
I’m almost certain any kind of “rollback” functionality in pacman is going to be less powerful than what’s in Nix, but it is very simple to rollback packages. An example transcript:
$ sudo pacman -Syu
... some time passes, after a reboot perhaps, and PostgreSQL doesn't start
... oops, I didn't notice that PostgreSQL got a major version bump, I don't want to deal with that right now.
$ ls /var/cache/pacman/pkg | rg postgres
... ah, postgresql-x.(y-1) is sitting right there
$ sudo pacman -U /var/cache/pacman/pkg/postgres-x.(y-1)-x86_64.pkg.tar.xz
$ sudo systemctl start postgres
... it's alive!
This is all super standard, and it’s something you learn pretty quickly, and it’s documented in the wiki: https://wiki.archlinux.org/index.php/Downgrading_packages
My guess is that this is “just downgrading packages”, whereas “rollback” probably implies something more powerful, e.g., “rollback my system to exactly how it was before I ran the last pacman -Syu.” AFAIK, pacman does not support that, and it would be pretty tedious to actually do it if one wanted to, but it seems scriptable in limited circumstances. I’ve never wanted/needed to do that though.
(Take my claims with a grain of salt. I am a mere pacman user, not an expert.)
EDIT: Hah. That wiki page describes exactly how to do rollbacks based on date. Doesn’t seem too bad to me at all, but I didn’t know about it: https://wiki.archlinux.org/index.php/Arch_Linux_Archive#How_to_restore_all_packages_to_a_specific_date
now pacman Syu is almost guaranteed to break or change something for the worse
I have the opposite experience. I’ve been an Arch user since 2006, and updates were a bit trickier back then; they broke stuff from time to time. Now nothing ever breaks (I run Arch on three different desktop machines and two servers, plus a bunch of VMs).
I like the idea of NixOS and I have used Nix for specific software, but I have never made the jump because, well, Arch works. Also with Linux, package management has never been the worst problem, hardware support is, and the Arch guys have become pretty good at it.
I have the opposite experience
I wonder if the difference in experience is some behaviour you’ve picked up that others haven’t. For example, I’ve found that friends’ children end up breaking things in ways that I never would, just because I know enough about computers to never even try it.
I think it’s a matter of performing Syu update often (every few days or even daily) instead of once per month. Rare updates indeed sometimes break things but when done often, it’s pretty much update and that’s it.
I’ve been an Arch user for 6 years, and there were maybe 3 times during those years when something broke badly (I was unable to boot). Once it was my fault; the second and third were related to an nvidia driver and Xorg incompatibility.
Rare updates indeed sometimes break things but when done often, it’s pretty much update and that’s it.
It’s sometimes also a matter of bad timing. Now, every time before doing a pacman -Syu, I check /r/archlinux and the forums to see if someone is complaining. If so, I tend to wait a day or two until the devs push out fixes for the broken packages.
I have quite the contrary experience: I have pacman run automatically in the background every 60 minutes, and all the breakage I suffer is from human-induced configuration errors (such as a misconfigured boot loader or fstab).
Would be nice, yeah, though I never understood or got Nix really. It’s a bit complicated and daunting to get started and I found the documentation to be lacking.
How often were you updating? Arch tends to work best when it’s updated often. I update daily and can’t remember the last time I had something break. If you’re using Windows, and coming back to Arch very occasionally and trying to do a huge update you may run into conflicts, but that’s just because Arch is meant to be kept rolling along.
I find Arch to be a fantastic developer system. It lets me have access to all the tools I need, and allows me to keep up the latest technology. It also has the bonus of helping me understand what my system is doing, since I have configured everything.
As for rollbacks, I use ZFS boot environments. I create one prior to every significant change such as a kernel upgrade, and that way if something did go wrong, and it wasn’t convenient to fix the problem right away, I know I can always move back into the last environment and everything will be working.
I wrote a boot environment manager zedenv. It functions similarly to beadm. You can install it from the AUR as zedenv or zedenv-git.
It integrates with a bootloader if there’s a “plugin” for it, to create boot entries and keep multiple kernels at the same time. Right now there’s a plugin for systemd-boot, and one is in the works for GRUB; it just needs some testing.
Awesome! If you do, let me know if you need any help getting started, or if you have any feedback.
It can be used as is with any bootloader, it just means you’ll have to write the boot config by hand.
Yep, this is how I figured out monads too, but when using Rust! There is more to them though - the laws are important, but it’s sometimes easier to learn them by examples first!
Can you show an example where a monad is useful in a Rust program?
(I’m not a functional programmer, and have never knowingly used a monad)
I learned about monads via Maybe in Haskell; the equivalent in Rust is called Option.
Option<T> is a type that can hold something or nothing:
enum Option<T> {
    None,
    Some(T),
}
Rust doesn’t have null; you use Option instead.
Options are a particular instance of the more general Monad concept. Monads have two important operations; Haskell calls them “return” and “bind”. Rust isn’t able to express Monads as a general abstraction, and so doesn’t have particular names for these operations. For Option<T>, return is the Some constructor, that is,
let x = Option::Some("hello");
return takes some type T, in this case, a string slice, and creates an Option<T>. So here, x has the type Option<&str>.
bind takes two arguments: something of the monad type, and a function. This function takes something of a type, and returns an instance of the monad type. That’s… not well worded. Let’s look at the code. For Option<T>, bind is called and_then. Here’s how you use it:
let x = Option::Some("Hello");
let y = x.and_then(|arg| Some(format!("{}!!!", arg)));
println!("{:?}", y);
this will print Some("Hello!!!"). The trick is this: the function it takes as an argument only gets called if the Option is Some; if it’s None, nothing happens. This lets you compose things together, and reduces boilerplate when doing so. Let’s look at how and_then is defined:
fn and_then<U, F>(self, f: F) -> Option<U>
    where F: FnOnce(T) -> Option<U>
{
    match self {
        Some(x) => f(x),
        None => None,
    }
}
So, and_then takes an instance of Option and a function, f. It then matches on the instance, and if it’s Some, calls f passing in the information inside the option. If it’s None, then it’s just propagated.
How is this actually useful? Well, these little patterns form building blocks you can use to easily compose code. With just one and_then call, it’s not that much shorter than the match, but with multiple, it’s much more clear what’s going on. But beyond that, other types are also monads, and therefore have bind and return! Rust’s Result<T, E> type, similar to Haskell’s Either, also has and_then and Ok. So once you learn the and_then pattern, you can apply it across a wide array of types.
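To make the composition point concrete, here’s a small sketch of several and_then calls chained together. The helpers parse_port and check_range are made up for illustration; the point is that each step runs only if the previous one produced Some.

```rust
// Hypothetical helpers: parse a string as a port number, then validate it.
fn parse_port(s: &str) -> Option<u32> {
    s.parse().ok()
}

fn check_range(n: u32) -> Option<u32> {
    if n <= 65535 { Some(n) } else { None }
}

fn main() {
    // Each step only runs if the previous one produced Some.
    let port = Some("8080")
        .and_then(parse_port)
        .and_then(check_range);
    println!("{:?}", port); // Some(8080)

    // A failing step makes the whole chain None, without panicking
    // and without any nested match statements.
    let bad = Some("not a number")
        .and_then(parse_port)
        .and_then(check_range);
    println!("{:?}", bad); // None
}
```

The same chain written with nested match would bury the happy path two levels deep; the and_then version keeps it flat.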
Make sense?
Make sense?
It absolutely does! I’ve used and_then extensively in my own Rust code, but never known that I was using a monad. Thanks for the explanation Steve.
But there’s one gap in my understanding now. Languages like Haskell need monads to express things with side-effects like IO (right?). What’s unique about a monad that allows the expression of side effects in these languages?
No problem!
This is also why Rust “can’t express monads”, we can have instances of individual monads, but can’t express the higher concept of monads themselves. For that, we’d need a way to talk about “the type of a type”, which is another phrasing for “higher-kinded types”.
So, originally, Haskell didn’t have monads, and IO was done another way. So it’s not required. But, I am about to board a flight, so my answer will have to wait a bit. Maybe someone else will chime in too.
A monad has the ability to express sequence, which is useful for imperative programming. It’s not unique, e.g. you can write many imperative programs using just monoid, functor, applicative or many other tools.
The useful function you get out of realising that IO forms a Monad is:
(>>=) :: IO a -> (a -> IO b) -> IO b
An example of using this function:
getLine >>= putStrLn
I should say Monad is unique in being able to express that line of code, but there’s many imperative programs which don’t need Monad. For example, just Semigroup can be used for things like this:
putStrLn "Hello" <> putStrLn "World"
Or we could read some stuff in with Applicative:
data Person = Person { firstName :: String, lastName :: String }
liftA2 Person getLine getLine
So Monad isn’t about side-effects or imperative programming, it’s just that imperative programming has a useful Monad, among other things.
You are way ahead of me here and I’m probably starting to look silly, but isn’t expressing sequence in imperative languages trivial?
For example (Python):
x = f.readline()
print(x)
x must be evaluated first because it is an argument of the second line. So sequence falls out of the hat.
Perhaps in a language like Haskell where you have laziness, you can never be sure if you have guarantees of sequence, and that’s why a monad is more useful in that context? Even then, surely data dependencies somewhat impose an ordering to evaluation?
For me, the utility of Steve’s and_then example wasn’t only about sequence, it was also about being able to (concisely) stop early if a None arose in the chain. That’s certainly useful.
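That early-exit behaviour can be sketched in Rust like this. The lookup scenario and the positive_setting helper are invented for illustration, assuming a plain HashMap of string settings:

```rust
use std::collections::HashMap;

// Hypothetical helper: fetch a key and parse it as a positive u32.
// Every later step is skipped as soon as any earlier step yields None.
fn positive_setting(config: &HashMap<&str, &str>, key: &str) -> Option<u32> {
    config
        .get(key)
        .and_then(|s| s.parse::<u32>().ok())
        .and_then(|n| if n > 0 { Some(n) } else { None })
}

fn main() {
    let mut config: HashMap<&str, &str> = HashMap::new();
    config.insert("timeout", "30");
    config.insert("retries", "0");

    assert_eq!(positive_setting(&config, "timeout"), Some(30));
    // Zero fails the positivity check in the last step.
    assert_eq!(positive_setting(&config, "retries"), None);
    // A missing key stops the chain immediately; parse is never attempted.
    assert_eq!(positive_setting(&config, "workers"), None);
}
```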
but isn’t expressing sequence in imperative languages trivial?
Yes.
In Haskell it is too:
(>>=) :: IO a -> (a -> IO b) -> IO b
But we generalise that function signature to Monad:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
We don’t have a built in idea of sequence. We just have functions like these. A generalisation which comes out is Monad. It just gives code reuse.
Maybe is an instance of a monad, and there are many different kinds of monads. If you think of Maybe as “a monad that uses and_then for sequencing”, then “vanilla” sequencing can be seen as “a monad that uses id for sequencing” (and Promises in JavaScript can be seen as “a monad that uses Promise#flatMap for sequencing”).
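The “uses id for sequencing” idea can be sketched in Rust with an identity wrapper. The name Ident is made up for this example; its and_then does nothing but pass the value along, which is exactly the plain-statement-sequencing monad:

```rust
// A made-up wrapper whose `and_then` does no extra work: plain sequencing.
#[derive(Debug, PartialEq)]
struct Ident<T>(T);

impl<T> Ident<T> {
    // `return`: wrap a plain value.
    fn new(value: T) -> Ident<T> {
        Ident(value)
    }

    // `bind`: just apply the function. No short-circuiting, no flattening,
    // only the argument plumbing that ordinary statements give you for free.
    fn and_then<U, F: FnOnce(T) -> Ident<U>>(self, f: F) -> Ident<U> {
        f(self.0)
    }
}

fn main() {
    // Reads like `x = 2; y = x + 1; z = y * 10;` written as a chain.
    let z = Ident::new(2)
        .and_then(|x| Ident::new(x + 1))
        .and_then(|y| Ident::new(y * 10));
    assert_eq!(z, Ident(30));
}
```

Swapping Ident for Option changes what happens between the steps (None short-circuits) without changing the shape of the code.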
Yes, expressing sequence in eager imperative languages is trivial because you can write statements one after the other. Now imagine a language where you have no statements, and instead everything is expressions. In this expression-only language, you can still express sequence by using data dependencies (you hit this nail right on the head). What would that look like? Probably something like this (in pseudo-JavaScript):
function (next2) {
    (function (next) {
        next(f.readline())
    })(function (readline_result) {
        next2(print(readline_result))
    })
}
with additional plumbing so that each following step has access to the variables bound in all steps before it (e.g. by passing a dictionary of in-scope variables). A monad captures the spirit of this, so instead of doing all the plumbing yourself, you choose a specific implementation of >>= that does your plumbing for you. The “vanilla” monad’s (this is not a real thing, I’m just making up this name to mean “plain old imperative sequences”) implementation of >>= just does argument plumbing for you, whereas the Maybe monad’s implementation of >>= also checks whether things are None, and the Promise monad’s implementation of >>= also calls Promise#then and flattens any nested promises for you.
What’s useful here is the idea that there is this set of data structures (i.e. monads) that capture different meanings of “sequencing”, and that they all have a similar interface (e.g. they have all an implementation of >>= and return with the same signature) so you can write functions that are generic over all of them.
Does that make sense?
There is a comment below that puts it pretty succinctly:
A monad is basically defined around the idea that we can’t always undo whatever we just did (…)
To make that concrete, readStuffFromDisk |> IO.andThen (\stuff -> printStuff stuff) - in the function after andThen, the “stuff” is made available to you - the function runs after the side effect happened. You can say it needed specific API and the concept of monads satisfies that API.
Modelling IO with monads allows you to run functions a -> IO b (take a pure value and do an effectful function on it). Compare that to functions like a -> b (Functor). These wouldn’t cut it - let’s say you’d read a String from the disk - you could then only convert it to another string but not do an additional effect.
EDIT: I might not have got the wording entirely right. I omitted the part of the type annotation that says the a comes from an effect already. With Functor you could not reuse values that came from an effect; with Monad you can.
Please don’t. When learning you make a lot of mistakes, and writing a tutorial with those mistakes in it doesn’t help the other learners who read it.
And even if you don’t make mistakes, most people will misunderstand their own process and come up with unhelpful things like monad tutorials: https://byorgey.wordpress.com/2009/01/12/abstraction-intuition-and-the-monad-tutorial-fallacy/
But now Joe goes and writes a monad tutorial called “Monads are Burritos,” under the well-intentioned but mistaken assumption that if other people read his magical insight, learning about monads will be a snap for them.
Came here to say something similar to this.
Learn new technology through writing a tutorial about it, but don’t publish it.
There’s so much misinformation by well-intentioned learners.
I’m not trying to diminish the importance of journaling either! Journaling != Tutorials.
Publishing your tutorial gives it an audience, which means someone may (hopefully!) come along and correct you on your errors. This is invaluable.
I also disagree with this negativity. Make it clear at the top of your tutorial that you’re a beginner and you may not have it all right. But with that caveat, publish away.
I think we’re discussing the same thing but disagreeing on the semantics of it.
Publishing your tutorial gives it an audience, which means someone may (hopefully!) come along and correct you on your errors. This is invaluable.
Absolutely invaluable, but at the very same time, the exposure spreads the misinformation to more readers, potentially doing more harm than good. I don’t think a disclaimer is enough. I think the word “tutorial” implies some authority, unfortunately.
I think a better way is to humbly share a report of your findings so far, with questions and an (as appropriate) admission that you don’t understand everything. Julia Evans is masterful at this style.
As a reader new to the topic, you get the benefit of an explanation of what she currently understands (which is often from a beginner’s mind), and usually some questions to seek answers to on your own. As an expert of the topic, you are invited to share more, or clarify, or correct (and this happens a lot on twitter, and/or HN, etc). But you’re doing so from a place of empathy (you want to be helpful) instead of from a place of disgust (ugh! why is this tutorial so bad!).
I got it a few bytes smaller by using Nix instead of Docker to build the image:
with import <nixpkgs> { };
dockerTools.buildImage {
  name = "sleep";
  contents = [ (runCommand "sleep" { } ''
    mkdir -p $out
    cp ${./t} $out/
  '') ];
}
932 bytes, instead of 976.
The (original) title is misleading.
I can’t help but imagine how different Android would be if various concepts from guix (or nix I guess) had predated Android 1.0.
I can’t help but imagine how different Android would be if various concepts from guix (or nix I guess) had predated Android 1.0.
They did! The original Nix paper came out in 2006:
https://nixos.org/~eelco/pubs/phd-thesis.pdf
I’m always impressed that these ideas were explored 12 years ago. There’s been lots of thinking, and it’s paid off :)
The project actually started at least three years before that: https://github.com/NixOS/nix/commit/75d788b0f24e8de033a22c0869032549d602d4f6
Benjamin C. Pierce, Types and Programming Languages, MIT Press.
That’s the one you want. I’m biased towards ML (vs Haskell), and I think the book is, too (it’s not a Haskell book). You can get all six, sure, but if you had to get one that’s the one.
+1 for this. I’m using TaPL in my PL class this quarter and it’s awesome. Super well written and ML is great for this class - the work involves writing successively more complex interpreters for successively more complex toy languages. We’re not sticking strictly to the book, but the sections our prof has pointed us to have been great.
I also highly recommend TAPL.
I’ve been recommended TAPL before, but seeing as this isn’t a class that’s strictly about type systems, I’d like to get a more general one and read TAPL later.
TAPL isn’t just about types either - it’s types AND programming languages.
Practical Foundations for Programming Languages is also very good.
@hwayne you can call people bulldogs but I’m still super confused why you used leftpad as a “challenge” for functional programmers after seeing my functional version. Obviously it can be done. What’s the challenge?
But the first paragraph of this blog post shows your main problem:
Functional programming and immutability are hot right now. On one hand, this is pretty great as there’s lots of nice things about functional programming. On the other hand, people get a little overzealous and start claiming that imperative code is unnatural or that purity is always preferable to mutation.
Functional programming allows:
These aren’t antonyms, they’re complementary. That’s why I am always so confident that things can be done using FP with no downsides. In the worst case we can just take your code and embed it as pure functions. This fact holds whether or not I complete your challenge.
I’m still reimplementing Sonic 2 btw.
@hwayne you can call people bulldogs but I’m still super confused why you used leftpad as a “challenge” for functional programmers after seeing my functional version. Obviously it can be done. What’s the challenge?
As I said in both the Twitter thread and the full blog post, the problem with your submission is that you assume all your stdlib functions conform to your spec. That means the prover is taking it as true without verifying it. It’s the equivalent of mocking everything out in a unit test.
I told you this, and your response was basically “it would be easy to fix, but I’m not going to do it.” The people who actually tried to fix it found it much harder than you thought.
In the worst case we can just take your code and embed it as pure functions. This fact holds whether or not I complete your challenge.
As everybody who completed Fulcrum admitted, “embedding it as pure functions” was incredibly difficult and took several days to prove. It took one of the core developers of Liquid Haskell almost four days to complete functionally. I hammered out my imperative proof in an afternoon.
your submission
I never submitted. I wrote some code to learn Liquid Haskell, you saw it and turned it into a challenge to functional programmers. But it obviously can be done, you know the assumptions aren’t hard to fix.
It took one of the core developers of Liquid Haskell almost four days to complete functionally.
I am unsurprised, since a lot of Haskell’s standard library functions aren’t specified yet. Was the difficulty that part, and not the actual verification?
But it obviously can be done, you know the assumptions aren’t hard to fix.
But it was hard to fix for people. Your claim that it’s easy is not one I can accept without good evidence.
For the record, as I also make clear in the post, these were supposed to be in ascending order of difficulty. It was supposed to be the easiest. I even admitted, in the post, that I overestimated how hard unique was!
I am unsurprised, since a lot of Haskell’s standard library functions aren’t specified yet. Was the difficulty that part, and not the actual verification?
No, the difficulty was the actual verification. Rhanjit said that, artnz said that, Dave - who was using Isabelle, which has full specifications - said that. It was, in their experiences, a fundamentally hard problem.
No, the difficulty was the actual verification. Rhanjit said that, artnz said that, Dave - who was using Isabelle, which has full specifications - said that. It was, in their experiences, a fundamentally hard problem.
It’s good to know that Dafny makes this easier. It doesn’t demonstrate that this can’t be done as easily in FP, just that our current FP tools don’t yet make it easy.
Reimplementing Sonic the Hedgehog 2 in Haskell. Picking it back up after playing around with it a while ago:
https://twitter.com/puffnfresh/status/916066859597758465
I have bits of decompression, colour palettes, collision data, block mappings, tile mappings, etc. This week I’m trying to connect those together so I can fully render a level.
Oh sorry before people ask for source, this is what I’ve pushed so far:
Pointfree style in ML-family languages (e.g. Haskell) lets you avoid naming things, though the style was designed more for general conciseness than that specifically.
Similarly, pipelines in shell scripts let you avoid naming intermediates.
(Pipelines are a combination of pointfree style and array language style, in that well-designed pipelines are side-effect-free and the programs in the pipeline mostly implicitly iterate over their input.)
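Rust’s iterator chains make a similar trade (Rust is only an analogy here, not something the comment mentions): intermediates flow through the pipeline without ever being named.

```rust
fn main() {
    // Pipeline style: the filtered values and the squared values are never
    // bound to names; they just flow from one stage to the next.
    let sum: i32 = (1..=10)
        .filter(|n| n % 2 == 0)
        .map(|n| n * n)
        .sum();
    println!("{}", sum); // 4 + 16 + 36 + 64 + 100 = 220
}
```

Like a well-designed shell pipeline, each stage is side-effect-free and implicitly iterates over its input.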
I think Haskell strikes a very fine balance here. Both point-free and pointful styles are very ergonomic, so you tend to name precisely those things that would be better off named.
Just used this to start up Sonic Mania. Looks like it runs really well!! Super slick.