Where YAML gets most of its bad reputation is actually not YAML itself but the projects (to name a few: Ansible, Salt, Helm, …) that shoehorn a programming language into YAML by adding a template language on top, and then pretend the result is declarative because it’s YAML. YAML + templating is exactly as declarative as any other language with branches and loops, except that YAML was never designed to be a programming language and it’s rather poor at being one.
In the early days, Ant (the Java build tool) made this mistake, and it keeps getting made. For simple configuration, YAML might be fine (though I don’t enjoy using it), but there comes a point where a real programming language is needed. Support both: YAML (or TOML, or even JSON) plus a programming language (statically typed, please; don’t make the mistake Gradle made in using Groovy, where discovery is awful).
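As a minimal sketch of the “support both” idea (my own illustration, not from the comment; it assumes Haskell with the aeson library, but any statically typed language works): the loops and branching live in real code, and the output is plain, declarative JSON.

```haskell
{-# LANGUAGE DeriveGeneric #-}

import Data.Aeson (ToJSON, encode)
import qualified Data.ByteString.Lazy.Char8 as BL
import GHC.Generics (Generic)

-- The abstraction happens in a typed language; the emitted config
-- stays plain data that any tool can consume.
data Service = Service { name :: String, replicas :: Int }
  deriving (Generic, Show)

instance ToJSON Service

main :: IO ()
main = BL.putStrLn . encode $
  [ Service ("web-" ++ show i) (if i == 1 then 3 else 1)
  | i <- [1 .. 3 :: Int] ]
```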
I’m very intrigued by Dhall, though I’ve not actually used it. But it is, from the GitHub repo,
It sounds neat.
There is also UCL (Universal Config Language?), which is like nginx config + JSON/YAML emitters + macros + imports. It does some things that bother me, so I stick to TOML, but it seems to be gaining some traction in the FreeBSD world. One thing I do like about it: there is a CLI for getting/setting elements in a UCL file.
Yes! This is one of the reasons I’m somewhat scared of people who like Ansible.
Yep! People haven’t learned from mistakes. Here’s a list of XML-based programming languages.
I was curious as to how this bug could have occurred and found on the CompCert C page that
So it seems CompCert C’s guarantee is a bit weaker than
as claimed on http://compcert.inria.fr/
And it’s more like, “the generated executable behaves exactly as prescribed by the semantics of the AST we parse your C source program into”
Right: the bug was actually in the parser, not the compiler proper. Maybe I should edit the title?
Seemed like a big deal anyway.
Edit: Noted parser in title.
http://compcert.inria.fr/man/manual001.html#sec10 shows the project structure, and claims the parser is in fact verified.
Edit: https://xavierleroy.org/publi/validated-parser.pdf has the full story on the verified parser. In brief, it’s verified with respect to an input grammar. Which, in this case, contained an error.
If that’s so, why doesn’t the commit that fixed this bug contain a change to the grammar or the proof?
A look through the patch shows I was wrong about it being a grammar bug. Those do happen, but the actual cause of failure was a little more complex here. I should be more careful about speculating out loud!
All the changes were in the “typed elaborator” phase of the compiler, which turns the raw parsed AST into the type-annotated AST that is the top of the verified stack. It looks like a basic error in handling variable scopes, triggered by the shadowed variable names.
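Purely to illustrate what “handling variable scopes” means here, a toy sketch of my own (not CompCert’s actual elaborator code): name lookup has to walk the scope chain innermost-first, which is exactly the invariant that shadowed names stress.

```haskell
-- One scope level maps variable names to their types.
type Scope = [(String, String)]

-- Scopes are kept innermost-first; the first hit wins, which is what
-- makes shadowing work. Searching outer scopes first (or merging
-- levels) silently resolves a shadowed name to the wrong binding.
lookupVar :: String -> [Scope] -> Maybe String
lookupVar _ [] = Nothing
lookupVar x (inner : outer) =
  case lookup x inner of
    Just ty -> Just ty
    Nothing -> lookupVar x outer
```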
Formally reasoning about scope extrusion is difficult and tedious… but I bet verifying the typed elaborator just rose a little higher in the CompCert development priority queue.
Deleted my other comment since I didn’t know the parser had been verified. My bad. Related to that, I will say the tally of bugs found in specs vs. implementations will remain interesting, as the weakest version of the proof says the implementation will be at least as accurate, or as flawed, as the specification. The Csmith bugs were spec errors IIRC that the implementation faithfully followed. ;)
Anyway, such results continue to remind folks that formal methods need to be supplemented with other verification techniques.
I’m being pedantic, but the “runtime error” occurred in a program that was compiled by CompCert (and its parser), so the title still isn’t really accurate. (Just saying “error” would be enough…)
Hey, no better venue for pedantry than a story like this. :-)
I’d edit the title again, but I can’t. I’d speculate that there’s some kind of time limit or score threshold on title edits… not going to go read the Lobsters source to find out exactly, though.
You can’t claim to have created a verified compiler until you have a formal semantics of the source, a formal semantics of the target, a spec of your transformations, and proof that they do what they claim. It’s so hard that they started with mini-Pascals before working up to C subsets (e.g. C0->TALC) and then to the subset in CompCert. There’s also a semantics by another team to catch undefined behavior.
So, you just made that crap up with no evidence. Further, what empirical evidence we have from looking at data in the field shows verification works at reducing defects in various parts of the lifecycle. As far as CompCert goes, Csmith empirically confirmed the CompCert team’s claims of strong defect reduction when they threw tons of CPU time at trying to find defects. They found just a few spec errors in CompCert but piles of errors in other compilers. Regehr et al. were amazed by its quality.
Comments about the worth of formal methods should have to use the scientific method or something when making claims. In this case:
1. Define correctness requirements for compilers. There may be several.
2. Assess how well various compilers met them, how often, and with what development methods.
3. Determine whether formally-verified compilers achieved higher quality than those without verification. Also, compare to other QA methods such as code review or unit testing.
CompCert passes across the board with such criteria. Note that I still want, as in EAL7 requirements or DO-178C, to throw more exhaustive testing and review at it to check for any errors in the specs. I won’t rely on only one method, as empirical data says to use a diverse array of them for highest confidence. It’s already orders of magnitude better in defect rate than other compilers, though.
Once I have a translator for language-X, I can claim to have created a formally verified compiler for that language, using the same arguments made by the authors of the CompCert papers.
I think you mean the CompCert people.
I think you can only claim that, starting from the C code, what your frontend produces will be converted to machine code as faithfully as is technically possible using the best technology available.
Edit:
To your article’s point about undefined behaviors: I’m pretty sure it is possible to prove the absence of those problems in C code using Frama-C and ACSL. The problems will always be pushed to the boundaries until you have done this for the entire system.
No, I mean formal methods people in general. The compcert people have done some interesting work, but they are not advertising what they have actually done.
… undefined behaviors, I’m pretty sure it is possible to prove absence…
I’m not sure about proving, but it should be possible to get a high level of confidence.
Yes, there are tools to do this, but CompCert didn’t (still doesn’t?) use them (or do it itself these days?).
Try using Frama-C with the -wp-rte option to see what I mean about runtime errors. CompCert compiles valid C; other tools help you write valid C.
“The compcert people have done some interesting work, but they are not advertising what they have actually done.”
The papers I read note the specific goals, limitations, and techniques used in their methods. They did publish a lot of stuff. Maybe I missed something. What do you think they’re lying about, and in which papers about the CompCert tool?
“No, I mean formal methods people in general”
Much of the formal methods literature done by pros lists their specific methods plus their dependencies and what parts can fail. I rarely see people outside the formal methods community who talk about reliability or security do that. Looking for “TCB” is actually one of the techniques I use in Google searches to find the papers I submit. It’s that rare for it to pop up in papers outside either formal methods or high-assurance systems, despite it being the foundation of their claims.
There’s certainly zealotry to call out. There are certainly problems with peer review and reproducible results. These are true in most if not all sub-fields of CompSci, and maybe for experimental science in general, since a lot of articles in recent years complain about non-reproducible or fake research. For some reason, though, you only bring it up on formal methods threads. I wonder why you find the other threads acceptable with their imprecise and/or non-reproducible claims, but drop into a formal methods thread to throw under the bus researchers who note strengths, weaknesses, and dependencies, plus publish quite a bit of source code. Quite a double standard you have, Sir!
While you’re in a Scheming frame of mind, I really like The Little Schemer and The Seasoned Schemer. They have an informal presentation style but before you know it you’ve internalized quite a lot.
In a totally different vein, I’ve been reading Type Theory and Formal Proof and I think this is a remarkable text because it makes the subject so accessible and sprinkles in interesting asides about the development of the theory and implementation of type theory along the way.
Thank you, I was scheming to get into those!
The author did a good job of demonstrating Haskell’s facility for working with a closed set of types, but EDN is actually extensible: users can define their own tags that aren’t in the base set. Besides EDN’s richer set of base types, the ability to add your own is the big advantage over JSON, and the author doesn’t address that in this article.
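For illustration, one common way to leave room for user-defined tags in a closed Haskell ADT is a catch-all constructor pairing the tag symbol with its value. A minimal sketch (the type and constructor names are mine, not the article’s):

```haskell
import Data.Text (Text)

-- An EDN-like type: the base cases are closed, but Tagged keeps any
-- user-defined tagged element representable, e.g. #myapp/Person {...}
-- becomes Tagged "myapp/Person" <the map value>.
data EDN
  = Nil
  | Boolean Bool
  | Symbol Text
  | List [EDN]
  | Tagged Text EDN
  deriving (Show)
```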
Is there some form of ‘extensible types’ in Haskell, potentially as part of a language extension?
That was even rougher than the last time. That function fails in way more cases than it succeeds. So maybe, just maybe, Maybe isn’t that bad.
The unstated assumption seems to be that it’s bad for a program to error out when it receives invalid input, which is hardly uncontroversial. IMO the thing that’s rough here is that the type system mandates we write out twelve degenerate error cases just to compile.
I think the point was to demonstrate that there are a lot of implicit failure cases. Haskell doesn’t require you to provide a match clause for every data constructor; the author could have written the following with a wild-card pattern that errors for all unsupported types:
```haskell
clmap :: (EDN -> EDN) -> EDN -> EDN
clmap f edn =
  case edn of
    Nil          -> Nil
    List xs      -> List $ map f xs
    EdnVector xs -> List . map f $ toList xs
    EdnSet xs    -> List . map f $ toList xs
    _            -> error "unsupported operation"
```
But a reason to avoid wild-card patterns is that if you’re explicit about how you handle each constructor, the compiler can automatically check that you’ve covered all possible cases and let you know if you’ve forgotten one (like if you were to update the definition of EDN with a new constructor).
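As a small sketch of that trade-off (using a toy stand-in type of my own, since the article’s full EDN type has many more constructors): with every case spelled out and warnings on, adding a constructor later turns a silent wild-card fall-through into a compile-time warning.

```haskell
{-# OPTIONS_GHC -Wincomplete-patterns #-}

-- Toy stand-in for EDN (the real type has many more constructors).
data Toy = TNil | TList [Toy] | TVec [Toy]

-- Every constructor handled explicitly, no wild card. If Toy later
-- grows a TSet constructor, GHC warns that this match is incomplete.
describe :: Toy -> String
describe TNil      = "nil"
describe (TList _) = "list"
describe (TVec _)  = "vector"
```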
I really wish that, in defense of spec and dynamic types, Rich Hickey et al. would use the same defense that Aria Haghighi used when he introduced Schema: sure, you can verify these same kinds of things in a dependently typed language, but those are currently much harder to use, and these dynamic checks get you most of the way there with less effort.
There have been so many attempts to make the dream of the free software phone a reality that have failed or come up short. I really want this project to succeed.
I wish the Fairphone were a good free software phone. But it runs Android, even the “open source” version ships binary blobs, and the development process is sealed off from the public :(
And while Purism has a much better approach to open source, its hardware supply chain is much less transparent than Fairphone’s.
I support both efforts, but ideally I would like to buy a product that meets both of these goals.
Basically the Purism software should run on a Fairphone?
Not sure; what about the binary blobs that are required on the FP? (see https://code.fairphone.com/projects/fp-osos/dev/fp2-blobs-download-page.html)
Would this help with learning OCaml?
There’s one for OCaml: https://learnxinyminutes.com/docs/ocaml/
I found it relatively easy to pick up OCaml after learning Standard ML first (I learned SML at university), and the overall feel of the languages is pretty similar. But if your goal is really to program in OCaml in the near future, I think starting via an SML tutorial might end up frustrating in the short term, because there are just enough miscellaneous differences to get bogged down in things that don’t quite work the same way in OCaml.
More OCaml: Algorithms, Methods & Diversions by John Whitington and Handbook of Practical Logic and Automated Reasoning by John Harrison. Both are excellent. The former has been a good introduction to OCaml for people familiar with functional programming, and what I’ve gotten through of the latter has been a pleasant refresher on logic. I’m excited to reach the juicy parts of the Handbook, where you get to implement an actual theorem prover.
I’m confused, because this doesn’t look lock-free to me. The swap update blocks, which means that threads need not make progress unless you arrange things right.
Is the claim the weaker one that it’s possible to use these tools to write lock-free algorithms rather than that everything using them is lock-free, or have I just completely misunderstood something (it’s been years since I’ve done serious multi-threaded programming so this is quite likely)?
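For what it’s worth, here’s the usual definitional test, sketched in Haskell since that’s what the other snippets in this thread use (my own illustration, not the library under discussion): a lock-free update is a compare-and-swap retry loop, so a thread that stalls mid-update can’t block the others, whereas a stalled lock holder blocks everyone waiting on it.

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (forM_)
import Data.IORef

-- atomicModifyIORef' retries its pure function in a compare-and-swap
-- loop; no thread ever holds a lock, so one stalled thread cannot
-- prevent the others from completing their increments.
main :: IO ()
main = do
  counter <- newIORef (0 :: Int)
  forM_ [1 .. 4 :: Int] $ \_ ->
    forkIO $ forM_ [1 .. 1000 :: Int] $ \_ ->
      atomicModifyIORef' counter (\n -> (n + 1, ()))
  threadDelay 100000  -- crude wait for the workers (sketch only)
  readIORef counter >>= print
```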
Every reason he lists is a reason to make packages with commonly used code. They are not reasons to have a package per function. The need for a package per function is unique to JavaScript and the browser: its lack of dead-code optimization intersects with the need to ship less stuff to the browser.
As it turns out, Joe Armstrong of Erlang once wrote a post to the Erlang mailing list in which he said:
I dunno. Maybe it’s not so crazy? I haven’t decided.
I’m getting close to a module per function in my Scala projects. I want to avoid having circular dependencies between classes (with a few specific exceptions). Having them in separate modules with explicit dependencies between them enforces that.
If you’re running it through the Closure compiler, you can indeed get dead-code removal (see docs). Of course, that’s not common practice.
Rollup.js also does this.
As does Webpack 2 (currently in beta). However, both Rollup and Webpack 2 require the use of ES6 modules in order to take advantage of their tree-shaking features.
Yes, I was a little abbreviated there, but Closure does give you this, provided all the rest of the JavaScript is also in Closure. For most of the JavaScript ecosystem this is not true, so it’s not really a solution either.
This article is misleading in that there are many more illegal numbers than it implies: there are millions (likely billions) of numbers that are illegal to possess without paying for them. ;)
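The underlying point, as a quick sketch of my own in Haskell: any file is literally one enormous natural number, read off base-256 from its bytes, so every file you have to pay for is a number that costs money to possess.

```haskell
import qualified Data.ByteString as BS

-- Interpret a file's bytes as base-256 digits of one big Integer.
fileAsNumber :: FilePath -> IO Integer
fileAsNumber path =
  BS.foldl' (\acc b -> acc * 256 + fromIntegral b) 0 <$> BS.readFile path
```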
Makes me wonder if there’s a file sharing community that exchanges numbers representing files rather than the files themselves.
I understand Unikernels in the context of VMs, but what are they in the context of a container? Isn’t a container just running a process in some more secure context? What do Unikernels offer containers?
Full circle. Containers are like VMs without kernels. Now we are putting kernels into containers.
VM - kernel + kernel = bold new vision for the future.
A unikernel isn’t like the kernel in a VM: it’s not separate from the process.
Much smaller interface (and therefore attack surface) between the inside and the outside, but without the overhead of that interface being the x86 interface. I think the distinction between VM and container largely goes away (e.g. if you’re running Xen then you’re still running a full OS in dom0, so it’s largely equivalent to containerization), but that’s fine.
If I understand this post correctly, http://unikernel.org/blog/2015/unikernels-meet-docker/, it sounds like you can now use Docker to build applications composed of containers running a conventional OS, Rumprun unikernels, or language-specific unikernels like Mirage.
I would be interested to see a comparison of dynamically typed languages with statically typed languages like Haskell and OCaml rather than C#/Java. As someone who programs in Ruby for work and recently completed the OCaml MOOC, I suspect the productivity differences would be much smaller.
How were your experiences with the OCaml MOOC? Do you have any insights that you’d share with others?
I enjoyed the course a great deal but I recommend it with reservation. I’d written some elementary Haskell before implementing H99 exercises and read John Whitington’s OCaml From The Very Beginning before taking this course so I was already familiar with writing tiny, beginner-level, typed functional programs. For me it was a fun collection of exercises that familiarized me with writing small programs in OCaml. I particularly enjoyed writing the arithmetic interpreter and the toy database.
But sometimes I struggled to understand the instructions for the exercises and without the benefit of the community forum I would have stayed stuck. From reading the frustrated posts of some students it seemed like those without prior exposure to functional programming were not having a good time. The pacing also felt uneven, some weeks I finished the exercises in hardly any time at all while others took a great deal of time.
So I think it was a good course for motivated students with some prior experience with functional programming basics like map and fold. But someone with no previous functional programming experience might want to look at other resources first. My favorite is The Little Schemer, which isn’t about typed functional programming but does a tremendous job getting you comfortable with recursion and higher order functions. I also enjoyed OCaml From The Very Beginning and think that would also be a good introduction.
I saw that the organizers of the course are planning a second edition. Hopefully they can even out some of the rough patches and make it a more enjoyable experience for programmers brand new to functional programming.
Why is there a Hacker News bookmarklet at the end? Were they expecting to end up there when they posted?
I think so. That’s where I first found this author’s writing, and when I read HN I would regularly see it there.
Can’t we all just use nix?
I don’t think those firmly in the source based workflow camp will be persuaded to switch to Nix.
What’s a “source based workflow”? (Asking because Nix itself is source-based and can optionally use binary caches)
I just finished the first book of the Southern Reach trilogy. It was completely enthralling from the first page. Reminded me at times of The Thing and also of Andrei Tarkovsky’s Stalker.
As soon as it arrives, I’ll be reading the latest in the Little books, The Little Prover. https://mitpress.mit.edu/books/little-prover.
Keep going with the Southern Reach trilogy, it gets even better.
I’d also recommend checking out Roadside Picnic (the book that Stalker was based on) and Solaris by Stanisław Lem.
I’ve hardly been able to put the Southern Reach books down. Just started the final book, Acceptance. And thanks for the recommendations, I never knew those films were based on novels!
I’ve been regularly attending the talks in San Francisco and they’ve been consistently excellent. I highly recommend checking out your local chapter if you haven’t already.
Yah, definitely looking forward to Ben’s discussion of Google’s “Spanner”.
What a surprise! This talk is filled with humor, history, and insight; not at all what I was expecting from the title. Among the things I found most fascinating was that the paper most frequently cited to perpetuate the waterfall model of software engineering actually argued that that model was, in its own words, doomed.
Well worth the time to watch.