Nice. This paper by Philip Wadler does something similar by introducing monads through examples of their applications: http://homepages.inf.ed.ac.uk/wadler/papers/marktoberdorf/baastad.pdf A bit handwavy at times, but useful because it touches on the monad laws and there’s an extended example with parsers.
I’ve seen a similar thing with tests written in rspec where lets are exclusively used in place of local variables. Any thoughts on how to strike the balance there?
YES! This advice completely applies to let variables in RSpec tests too. A let is a memoized method definition in my book. I am happy to recommend using more local variables over let variables in tests.
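To make "a let is a memoized method definition" concrete, here is a plain-Ruby sketch of roughly what `let` desugars to (illustrative only, not RSpec's actual implementation; `build_user` is a hypothetical helper):

```ruby
class ExampleGroup
  def build_user
    # Count how many times the expensive setup actually runs.
    @builds = (@builds || 0) + 1
    "user-#{@builds}"
  end

  # Roughly what `let(:user) { build_user }` gives you:
  # a lazily evaluated, memoized method.
  def user
    @user ||= build_user
  end
end

group = ExampleGroup.new
group.user  # builds once
group.user  # memoized: the setup does not run a second time
```

Seen this way, the "prefer local variables" advice transfers directly: a `let` is just another method definition competing with a plain assignment inside the example.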
Is there an OpenBSD port for Oz anywhere? I can attempt to compile from source, but I’m hoping there’s something easier.
Not that I’m aware of. Hopefully it’s buildable. The main issue is that the code base is older C++, so some warnings/errors in current compilers need to be disabled. It’s also 32-bit only. Mozart 2 is 64-bit but lacks the constraints and distribution libraries.
I tried OCaml for a bit but the weirdness got to me after a while. There was a ton of magic around project setup and compilation that I didn’t understand and couldn’t find properly explained, and the fact that there is more than one “standard” library bugged the heck out of me. I’m hoping that once the Linux story solidifies a bit more around .NET I’ll be able to reasonably give F# a shot.
I’ve been using F# on Linux for a few years now using Mono. It’s a bit more manual than .NET Core, but it’s stable.
If you’re interested in trying again, I created a build system (yes, yet another one) specifically designed for getting going fast in most cases. I have a blog post here:
http://blog.appliedcompscilab.com/2016-Q4/index.html
Short version: all you need is a pds.conf, which is in TOML so fairly straightforward, a specific directory structure (src/&lt;project&gt;), and GNU Make. Then you run pds && make -f pds.mk and you’re done. Supports tests as well as debug builds.
I’m not sure it is worth pushing yet another build system that seemingly nobody uses (at least I haven’t yet run across a package which uses it) when jbuilder seems to be gaining so much momentum in the OCaml world lately.
Maybe, but pds is pretty easy to port away from for most builds and it’s so trivial to get started and much less confusing than jbuilder’s config, IMO. My personal view is that jbuilder is a mistake but I’ll wait to switch over to it once it’s gained enough momentum. At that point, I can just switch pds over to producing jbuilder configs instead. But I’m a symptom of the problem rather than the solution unfortunately. I also use @c-cube’s containers, so yet another stdlib replacement/extension :)
My personal view is that jbuilder is a mistake
Could you elaborate on why? IMO jbuilder is not perfect either but if we get a modern, documented build system which is hopefully easy to setup, it would be a massive win over all the other solutions we currently use.
I agree, the different choices in tooling are sort of disorienting and can lead to analysis paralysis. For a toy compiler project I started working on, I tried to find the most basic tooling that would work: whatever OCaml compiler came with my distro, ocamlbuild, make, and then eventually extlib, ocp-indent, and then after some more time, opam, ocamlfind, utop. It may make sense to use the tooling outlined in this article if future maintainability is a big concern, but to get started and to learn OCaml, I don’t find it necessary (and definitely not appealing). Having done this, I don’t pine so much for standardization (;
It bothers me because it makes the language more difficult to learn. It also wasn’t always clear to me that an alternative was in use because, IIRC, they’re not (always) clearly namespaced. I have run into this in Haskell as well, FWIW.
Typically it’s visible when you use an alternative stdlib because you start your files with open Batteries or open Core or open Containers. I agree it’s annoying that the stdlib is not richer, and it’s a bit slow to accept contributions, but in a way the existence of alternative stdlibs/extensions shows how easy it is to roll your own :-)
Haskell, C, and D come to mind. You could also argue that Python has multiple standard libraries because it has different implementations that effectively can’t use some aspects of the normal stdlib (PyPy). Then there’s Java: SE, EE, and ME are the same language with different sets of functionality in the standard libraries.
Out of curiosity, have you tried OP’s project setup?
Also, there is only one OCaml standard library–the one that comes bundled with OCaml. The other ‘standard libraries’, Batteries and Jane Street’s Core, are optional add-ons made for specific purposes.
I haven’t tried OP’s setup, but honestly it seems even worse than what I had. I pretty much followed this: https://ocaml.org/learn/tutorials/get_up_and_running.html. I ended up using Oasis, which was just awful, every time I added a file or dependency I had to fiddle with the config until everything would build again, but at least there wasn’t an entirely separate language.
From OP:
(jbuild_version 1)
(executable
((name main) ; The name of your entry file, minus the .ml
(public_name OcamlTestProj) ; Whatever you like, as far as I can tell
(libraries (lib)))) ; Express a dependency on the "lib" module
Note the comment, “as far as I can tell”. To me, that’s a terrible sign. A person who has gone to a reasonable amount of effort to explain how to set up a project can’t even figure out the tooling completely.
Jbuilder is quite nicely documented (see http://jbuilder.readthedocs.io/en/latest/). The public_name defines the name of the produced executable in the install context. It does not take much effort to read it from there.
Of course you still have to find out that Jbuilder exists, which the official site doesn’t seem to mention… I am lazy, I don’t like choices, I just want one, blessed tool that works more or less out-of-the-box if you follow a set of relatively simple rules (I’m even OK with wrapping the tool in a simple, handwritten Makefile, which is what I do in Go). I’m not arrogant enough to think that the way I prefer is the “right” way, in fact in some cases it would be dead wrong (like for extremely complex, multi-language software projects), but that explains why I dropped OCaml for hobby stuff.
OK, but your criticism is that you have to find out that JBuilder exists, commenting on a post that tells you about JBuilder.
To be fair, jbuilder is very young (not even 1.0 yet actually) but it might become the “standard” build tool the OCaml community has been waiting for for years (decades?). Then clearly there will be more doc and pointers towards it.
Well obviously I know about it now, but it still isn’t terribly discoverable for someone new to the language. My actual point, and I probably didn’t make this as clear as I should have, sorry, is that in my experience OCaml isn’t very friendly to beginners, in part because its tooling story is kind of weak and fragmented.
Yeah. This is true. Especially on Windows. People are working on it but it’s slow and it’s taking time to consolidate all the disparate efforts. I myself am not getting terribly excited about OCaml native but funnily enough I am about BuckleScript (OCaml->JS compiler) because of its easy setup (npm i -g bs-platform) and incredible interop story.
Others are getting equally into ReasonML (https://reasonml.github.io/) because it’s coming from a single source (Facebook) and is starting to build a compelling tooling/documentation story.
OP here: I didn’t really make any effort to pursue documentation re: the public_name field, and I have really almost no production experience with OCaml whatsoever. I certainly have complaints about OCaml’s tooling, but I can assure you that any argument against it appealing to my authority is certainly flawed.
I wasn’t really appealing to your authority, in fact kind of the opposite. I don’t like using systems that converge to copy-paste magic, and that seems to be what you did, and is likely what I would do. I don’t want to use a weird programming language to configure my project, I want something simple, with happy defaults, that can be understood easily.
I guess I generally prefer convention over configuration in this case, and that doesn’t seem to be what the OCaml community values, which is why I gave up on it. I’m not saying anyone is right or wrong, it’s just not a good fit for me, particularly for hobby projects.
Trying to put this more directly, it sounds like he or she is trying to say: “wake up and get out of your ivory tower”; this post is more so a reaction to the kinds of academic training that new graduates try to apply to their work. I think they usually give up that kind of thinking anyway; they eventually get scoffed at.
I’m all for getting results, but I find this attitude to be too heavy. I know the author is trying to drive home a point, but it’s the case that developers write code, day-to-day, for reasons that only in a circuitous way contribute to their salary. Is that so secondary to getting results that it’s not worth mentioning? Or is it better to keep that a secret?
The post has this weird, business-y subordinate tone to it, as if programming is mostly what you can get away with shipping to production. It is incredibly reductionistic and dull. If our lot in life as programmers is to write glue code to whatever the Internet deems as The Best Library, then I’m done with programming. There is room for a healthy appreciation for making things that work well when stressed and shipping regularly.
I take pride in my work, and that means doing things as best I can. At the end of every day, I at least have the satisfaction of knowing that.
These touch typing tests (keyhero) are a little annoying because at speed and length, they start to test focused fast reading as much as they test typing. On a long piece, I estimate about half of my mistakes to be of the kind where I insert or skip a word (“that they” vs just “they”), change a word where it makes sense in the given context (“my” vs “the”), or even entirely confuse a word (“confused” vs “focused”). Maybe insert a comma where I would’ve placed one if I had written the text.
The faster I go, the more expensive such mistakes become; I might’ve typed the entire word or two thirds of it by the time I realize the mistake, and then I have to backspace over some arbitrary number of characters to fix it… it does not help that at speed, I have to focus my eyes almost fully on the text I’m reading and not on what I’m typing, so I don’t see my mistakes as they happen. And that makes them so much slower to correct. Annoyingly enough, sometimes I’m so far ahead when the mistake registers that the application gives me a penalty even if I go back and correct the word.
Give me a short piece and I can reach 130 wpm with ~100% accuracy. On longer pieces, these expensive mistakes can easily take my speed to 90 wpm, sometimes even less.
It would be interesting to see how I’d do on a similar test in my native language. Quite a few English words slow me down dramatically, and indeed I might not even know the correct spelling.
There are other touch typing trainers / tests where the program only displays one word (or a few) at a time. On these it’s definitely easier to reach higher speeds.
Thank you for a detailed comment. All these tests are synthetic, of course; it is very different from how we type in real life. Most of us type our thoughts in prose and code, thinking and editing as we type. I use those tests just to track my progress from time to time.
Your 130 wpm is very impressive. Could you share your learning story, please? When did you learn to type that fast? Do you practice deliberately on a regular basis? Thanks again.
I don’t practice deliberately, I just type a lot, in contexts where it is often useful to type fast: realtime chats, especially IRC. So I don’t really know what brought me here, apart from typing a lot.
I got started on touch typing when I read about the Dvorak keyboard layout and decided to give it a try. I found a file for xmodmap that remaps my keyboard to a custom variant of Dvorak, and I put up a little cheatsheet on the screen. I didn’t know how to restore the original keyboard layout without restarting X (and especially back then, I always had tons of stuff running that I didn’t want to stop and restart), so I just had to force myself to learn it, which I did rather quickly. I started with a TUI touch typing trainer and practiced long enough to have a decent idea about where all the letters are, then continued on organically by chatting on IRC, posting on forums, etc.
I still use Dvorak, and I type on a Kinesis Advantage Pro. I’d like to try another layout though (and another keyboard; I’m not super happy about the Kinesis' bugs).
EDIT: after a couple dozen snippets, I seem to average about 105 wpm and 97% accuracy on keyhero.
I’m also very annoyed with the Kinesis Advantage Pro stuck-modifier issues. Seems like they recently released the Advantage2, so maybe those issues have been addressed.
That’s what I thought as well. But I am not compelled to give them any money; they’ve been aware of the problem since forever, and IMHO the right thing to do would’ve been to acknowledge it, fix it for free, or at least give us the option to buy a fixed chip if it’s really a hard-wired bug. And they really shouldn’t have kept selling the broken keyboard. In fact, when I purchased my Kinesis, I kinda figured that it would’ve been fixed by then; the reviews I read before making the purchase were old, from days when PS/2 was still popular, and I was getting the USB version. Instead they kept selling a product that was known to be broken. I’ll look for another maker or (preferably) make my own, when I get around to it.
I’ve explored the research into layouts and I think there are ones that are better optimized than Dvorak. So they could hold a promise of more comfort or faster speed or both. Maybe fewer typos?
As for the Kinesis… the problem is that modifier keys occasionally get stuck. Not very often, but often enough to be annoying, and especially for a device at this price point it is inexcusable.
Admittedly I’m also unhappy with the mushy rubbery function keys. And I’d prefer to have a real numpad in the middle, as on a maltron 3d keyboard. Maybe, maybe I’d like the trackball option too. I’m not going to buy a maltron though; I’ve heard it’s got similar bugs.
The kinesis is a little slow as well. You’ll notice if you try to play a rhythm game (like stepmania) with it. There’s slight delay between each key press, and it can get unbearable on fast sections.
Noted with thanks. What do you think about steno?
Check out a quick coding with steno demo by Ted Morin.
It’s cool for live transcribing speech, but I’m not convinced it has much use beyond that, unless your day to day work involves typing tons of text in a given natural language. It works because it can be optimized for the syllables and vocabulary of that language. Once you need to step out of that confine, you run into trouble.
The coding demo is awkward. Frankly it looks like he had to pre-program his steno for the program he’s going to type, and then he’s struggling to remember his new chords. It’s not fast, and it’s especially not fast in the general case where identifiers can be anything and you can’t optimize it for the specific constructs of the language or framework you’re using. If saving strokes or typing things fast is a concern, code editors already do a great job with macros, templates and completion (the last of which requires no programming to be useful).
I have to say that character input speed isn’t really a concern for me when I’m coding. I feel like I type fast enough as is, so I don’t really even need the fancy editor features (I use vi). Much more time is spent on the thought process anyway.
My favorite tactic for “killing” these is (to use the example from the post):
# e.g. "hello everyone" => "Hello Everyone"
def upcase_words(sentence)
sentence.split(' ').map!{|x| x = x[0..0].upcase << x[1..-1]}.join(' ')
end
In an ideal world the name is clear enough that someone reading the code at the call site understands what’s happening, and if they don’t the example alongside the definition hopefully gets them there.
You mean
# e.g. "col1\tcol2\n ^ woah" => "Col1 Col2 ^ Woah"
Naming it hurts in this case, because the function does not do what you named it (e.g. in a string of tab-separated values, or a string where multiple spaces are used for formatting). If you had to name it, it would be better named as split_on_whitespace_then_upcase_first_letter_and_join or leave it unnamed and hope that everyone on your team knows that split in Ruby doesn’t work as expected.
The best solution is one that embodies exactly what you intend for it to do, i.e. substitute the first letter of each word with the upper case version of itself. In Ruby, that would be:
sentence.gsub(/(\b.)/) { |x| x.upcase }
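To illustrate the difference being pointed out here (a sketch; the variable names are mine): Ruby special-cases split(' ') to awk-style semantics, splitting on any run of whitespace, so rejoining with ' ' silently normalizes tabs and repeated spaces, while the gsub version leaves the whitespace intact.

```ruby
sentence = "col1\tcol2  done"

# split(' ') splits on any whitespace run (awk-style), so joining
# back with ' ' collapses the tab and the doubled space.
split_version = sentence.split(' ').map { |x| x[0].upcase + x[1..-1] }.join(' ')
# => "Col1 Col2 Done"

# gsub only touches the first letter of each word; whitespace survives.
gsub_version = sentence.gsub(/\b\w/) { |x| x.upcase }
# => "Col1\tCol2  Done"
```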
If you had to name it, it would be better named as split_on_whitespace_then_upcase_first_letter_and_join or leave it unnamed and hope that everyone on your team knows that split in Ruby doesn’t work as expected.
I disagree. You should name functions and methods based on what they’re supposed to do. If it does something else, then everyone can see it is a bug.
I don’t agree with your naming system. I think the name of your function should describe what it does instead of how it does it. If your function name describes how it’s implemented, you have a leaky abstraction.
Among other benefits, giving it a name means we can explode the code without worrying about a few extra lines in the middle of the caller.
words = sentence.split ' '
words.each { |w| w[0] = w[0].upcase }
sentence = words.join ' '
Introducing a variable called ‘words’ is a solid hint about the unit we’re working with. We may not want to pollute the caller with a new variable, but in a subroutine that’s not a problem.
Naming it does help in this case, but mostly because the reader no longer has to scrutinize what it’s actually doing. Isn’t this sort of like polishing a turd?
That only masks the issue.
Any maintenance on that line will still have the same problems, whereas refactoring it to split it up into smaller segments AND giving it a name avoids that issue.
It gives the reader a good frame of reference to what the function’s doing. Context helps a lot when trying to read code, and although this isn’t as readable as it could be yet, it’s definitely a lot more readable than minus the function signature.
I’m pretty sure the programmer wrote that “darling” with some bad intentions in mind. Why doesn’t the author call it for what it is? The mixture of side-effecting and non-side-effecting operations, all willy-nilly and for no apparent reason, is telling.
Having professionally developed in Ruby for some time now, I find this (maybe a strong word) inspirational. It shows just how more diligence can be done when choosing a dependency. It’s easy (and I’ve gotten used to it now) to choose what’s popular and it’s easy to assume those Github stars have done the hard thinking.
I’d like to hear about the other end of the spectrum: when developers choose functional programming out of almost complete ignorance or because it’s the thing right now. Maybe this isn’t something to be discouraged, but I sometimes wonder how many choose Scala because, at first glance, it sorta looks like their favorite scripting language. To be fair, it’s possible they begin earnestly adopting the language and its style… but you can’t help the feeling that this is driven by fashion.
I found my way to functional and Scala by way of the checkers framework, initially triggered by http://www.joelonsoftware.com/articles/Wrong.html
I think there’s a convergence of languages. Scala looks like Python because both are trying to look like pseudocode. Even modern Java is moving towards that ideal. Moving towards a readable language is a good thing, and Scala is (or rather, well-written Scala can be) possibly the most readable mainstream-ish language with decent performance[1]. (Theoretically Haskell should be better, but I always find it unreadable - some combination of the terse variable (and type) name culture, the lack of syntax that looks like syntax, the overpowered $ operator/bracket, the lack of OO…)
I think if you take ordinary good programming practice - separation of concerns, encapsulation of state, composability - far enough, you end up with functional programming.
[1] I’m somewhat disturbed that Python and Ruby perform so much worse than Javascript - there’s no real reason this should be the case, and I see a lot of people leaving Python for performance reasons when it’s really the best language for what they need to do. I think it’s an artifact of there being much more moneyed competition in web browsers than in other language runtimes?
It doesn’t feel so funny if you consider the ‘in’ operator to be awk-inspired. It could blend in with the rest of the awk that’s in ruby.
One particular program where this effect is notable is the implementation of a simple stack-based virtual machine (or abstract machine), such as the SECD or the ZINC machine. Variables can be represented as just integers (De Bruijn indices), representing indices in a stack-like environment, with 0 denoting a reference to the last pushed/defined variable.
A naive implementation just represents environments as lists, and uses linear-time List.nth to retrieve the value of a variable. This feels horribly inefficient, so it is natural to try to use a dynamic array instead (you don’t even need a hash-table). It is at first surprising to discover that, for most programs one may want to write, the list-based implementation is noticeably faster than all others. This comes from the fact that most programs access the more recently-defined variables (the local variables and function parameters) very often, and the rest much less often, so the accessed indices are very small in practice.
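A minimal sketch of the list-based approach, translated from the OCaml List.nth idea into Ruby (names are mine):

```ruby
# Environment as a cons list: pushing a binding is O(1), and looking up
# De Bruijn index i walks i links. Because most lookups hit small
# indices (recently bound locals), the list is fast in practice.
Cons = Struct.new(:head, :tail)

def push_binding(env, value)
  Cons.new(value, env)
end

def lookup(env, index)
  index.zero? ? env.head : lookup(env.tail, index - 1)
end

env = push_binding(push_binding(push_binding(nil, "x"), "y"), "z")
lookup(env, 0)  # => "z" (index 0 = most recently bound)
lookup(env, 2)  # => "x"
```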
One of the MIT Lisp Machines used pairs (frame, index) when compiling variables. The binding stack was a list, with each entry being a De Bruijn-indexed list of variables, so for any static context the (frame, index) pair uniquely identified a single variable.
Neat, never heard of De Bruijn indices, but it sounds pretty close to lexical addressing. Anyone know if there’s a relation between the two?
Hadn’t heard of lexical addressing, but it looks similar. The intent of De Bruijn indices is that they simplify compiler implementation because you don’t have to worry about names conflicting as you compose subtrees, nor do you have to do any substitution. It looks as though lexical addressing is pretty much equivalent in that respect.
Did the article say they have no code review? It looks possible to run into this problem even with that in place.
The footnote says:
Code reviews would have solved this issue, as someone else would have stumbled on that symbol and asked for more clarity and understanding. That would have helped introduce the library to the team more easily than a surprise. Unfortunately, at Gravity we didn’t have required code reviews in place. We do have code reviews in place at CrowdStrike.
Question about HTTP status codes in general: should they be represented as int or as string? It’s prolly inconsequential, though I’ve seen both choices floating around.
IIS will pass fractional sub-codes, so I suppose it might be to support handling those?
Oh wow, hadn’t encountered that. Yes, strings would be helpful there. Thanks for passing that along.
I’d answer “no”. They should be represented as strongly typed instances, though probably with the option of giving a custom one - possibly enums with the correct ints would be adequate. Look at spray.io (or its spiritual successor http4s) for a library that I think handles status codes really well.
It’s appropriate that an HTTP library would define some response code types for the programmer. Yet it’s interesting to note that both of the libraries listed choose to represent the “raw” response code as int. Again, inconsequential, but curious…
I would use int only, because the range of the number also matters and ints are easier to compare. Check these lines. For example, any code between 200 and 299, inclusive, is considered a success.
Equivalently, for a string type, you could check the first character to determine which class the response falls in.
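Either representation supports the class check; a quick sketch of both (the helper names are mine):

```ruby
# Int form: a range check over the 2xx class.
def success_int?(status)
  (200..299).cover?(status)
end

# String form: the class is just the first character.
def success_str?(status)
  status.start_with?("2")
end

success_int?(204)    # => true
success_int?(404)    # => false
success_str?("204")  # => true
success_str?("404")  # => false
```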
none really, thought it would be a fun exercise, & easier to reason about what i’m doing than an awk script :)
Has anyone had much luck installing OpenBSD on an x220 with Intel HD graphics? Last time I tried (probably with OpenBSD 5.4) I couldn’t seem to get video acceleration working.
5.5 snapshots ran fine on my x230 w/Intel HD gfx… though, I’m not sure if video acceleration was working. I was able to run Gnome3, so I think that counts?
Whoa, I installed 5.6 and it worked without a hitch. I also took the opportunity to switch over to cwm after using ratpoison for a while… a reminder of how nice the defaults are.
I have run accelerated graphics on my x220 for a while now. Check the perms on your /dev/drm*. As it turns out, I have “Intel HD Graphics 3000”.
Lots of luck, daily.
Only thing I had issues with is my Realtek wireless, but I use a USB dongle to connect wirelessly until someone takes care of it.
from my understanding, the author has shat out his “verbal diarrhea” (to use his words) all over the page and then, at the end of it all, in the tl;dr, has the courtesy to say it’s all obviously shit anyway. was that the intended message?
This part was never hard to get for me: I understand how monads are defined, how they are used, and that they let me write pretty sequences of Maybe (or, in this case, attempt) operations. But what I don’t get is: 1. what all the individual monads practically have in common, in the case of Haskell for example: IO, State, Maybe, etc.; 2. why monads are necessary (if they even are) for pure functional programming languages; 3. in what sense this is related to monads from philosophy (the simplest substance without parts).
I haven’t yet ever had the experience of seeing a problem and saying “Why, I could express this as a monad”. And if people will go on writing monad articles, which I am sure they will, I would very much appreciate it if someone could touch on these issues.
Apparently monads were considered a kind of breakthrough for dealing with some ugly things (IO, exceptions, etc.) in a purely functional programming language. I would highly recommend this paper for understanding why they were introduced into Haskell: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/mark.pdf I found the historical perspective especially interesting.
That’s a great paper! He makes it a lot more intuitive than most explanations.
They have a common signature that follows the same laws. This means you can write code that works generically for any monad, and a lot of standard helper functions are already written for you, e.g. traverse or cataM. For a concrete example, in a previous codebase I had a bunch of per-client reports that had different requirements: one needed to accumulate some additional statistics over the report, and another needed a callback to their web API. So I wrote the general report logic generically in terms of any monad, and then for the client that needed the extra statistics I used Writer, and for the client that needed the callbacks I used Future so that I could use a nonblocking HTTP library.
They’re never absolutely necessary, but it turns out that a lot of the time where you’d be tempted to use impure code to solve a problem, a monad lets you use the same code style but have your functions remain pure. E.g. rather than a global variable you can use Reader or State. Rather than a global logging facility you can use Writer. Rather than exceptions you can use Either. Rather than a thread-local transaction handle you can use a transaction monad. Etc.
Not at all, it’s an accident of terminology.
Once you’re used to them you start seeing them everywhere.
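To make the “common signature” concrete without any Haskell or Scala, here is a minimal Maybe-style monad sketched in Ruby (illustrative only, not any library’s API): unit wraps a plain value, and bind feeds the wrapped value to the next step, short-circuiting on failure.

```ruby
class Maybe
  attr_reader :value

  def initialize(value, present)
    @value = value
    @present = present
  end

  # "return"/unit: wrap a plain value.
  def self.just(value)
    new(value, true)
  end

  def self.nothing
    new(nil, false)
  end

  # The shared signature: pass the value to a Maybe-returning block,
  # or propagate the failure untouched.
  def bind
    @present ? yield(@value) : self
  end
end

# Chaining short-circuits at the first nothing, with no nil checks in sight:
result = Maybe.just(10)
              .bind { |x| Maybe.just(x * 2) }
              .bind { |x| x > 15 ? Maybe.just(x) : Maybe.nothing }
result.value  # => 20
```

Every monad (State, Writer, Either, …) offers this same unit/bind pair, which is exactly what lets generic helpers be written once for all of them.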
Ostensibly they follow the same laws. But sometimes people let them break the laws at the edge cases. For example, State is not a monad.
State is a monad under the usual definition of equivalence used when reasoning about Haskell (in which bottom is considered equivalent to everything). Under a more natural definition of equivalence, seq is not a function and State is still a monad (defined only on the legitimate/categorical fragment of Haskell). To pretend that the weirdnesses of seq and lazy evaluation have anything to do with State specifically is grossly misleading.