Always cool to see somebody using my guide to learn Haskell.
I hope they don’t drop out, author seems thoughtful.
Just wanted to say thanks for that guide – it’s very well thought-out, and packed with really good resources!
I’m using the guide as well, slowly though, and mostly for personal use. At work I’ve switched all of my development from Ruby to… wait for it… ye olde-school C11 (well, not that old really, but you get the point).
To be honest it’s not all that bad with clang+LLVM and the scan-build tools. The static analysis in LLVM/Clang has made me a clang fanboy.
But in any case, thanks for the Haskell guide. I’m working through some Project Euler questions with Haskell, and I also bought about every Haskell epub out there and am working through them all.
Learn You a Haskell, however, is not quite my cup of tea. Anyone have other books? I bought Beginning Haskell, which in my personal view is better, but other options are welcome.
LYAH wasn’t my cup of tea either; that’s why I point people at cis194 - that’s the main way I teach Haskell. I only use LYAH as a direct guide with people who haven’t programmed before. Otherwise, LYAH and RWH are references to supplement cis194.
If you haven’t done cis194 yet, that’s what I recommend you do, followed up by the NICTA course. This is outlined in the guide.
I’ve heard Beginning Haskell is good if you want practical project walkthroughs, but I haven’t validated it and am trying to avoid non-free resources in my guide or things I haven’t tried.
Generally speaking, once you’ve done cis194 and possibly also the NICTA course, I aim people towards working on their own projects if none of the supplementary sections in my guide intrigue them.
Edit: I’ve edited the guide to include this guidance.
Yep yep, I’m working through cis194 actually, and NICTA is next up on the list. Think I found your GitHub from Hacker News, actually. Have to say, Haskell has helped my C code as well. I find myself mutating state way less than in the old C code I have lying around. It is surprisingly refreshing, and honestly it’s making me much more intrigued about how I’ll view the same code once I get to walking speed in Haskell.
My biggest win so far, however, has been dedicating ~30 minutes a day to just Haskell time. Harder now that it’s summer, but quite doable in general. Not that I never skip days.
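The mutation habit mentioned above can be sketched in a toy example (my own illustration, not from the guide): where a C loop would update an accumulator in place, idiomatic Haskell threads the running value through a fold, so no variable is ever mutated.

```haskell
module Main where

import Data.List (foldl')

-- In C one would typically write:
--   int acc = 0;
--   for (int i = 0; i < n; i++) acc += xs[i] * xs[i];
-- Here the "loop variable" is never mutated; each step of the
-- strict left fold produces a fresh accumulator value instead.
sumOfSquares :: [Int] -> Int
sumOfSquares = foldl' (\acc x -> acc + x * x) 0

main :: IO ()
main = print (sumOfSquares [1, 2, 3, 4])  -- prints 30
```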
I’m glad you’re getting similar benefits out of it as I did.
Slow & steady wins.
BTW, what resources would you recommend to move to an intermediate level? Something past monads, transformers, combinators, etc.?
I guess at this point I have to go and write a shit-ton of code (which in Haskell means ~300 lines)
My guide https://github.com/bitemyapp/learnhaskell mentions plenty of intermediate topics. Have you mastered everything listed?
And yes, go make things.
My main issue with these posts is that they are largely a complaint about how bad actual projects are, which I hear from all sides. The faults here are mostly structural and planning problems, from which a type system doesn’t save you. Haskell has huge pitfalls here as well, e.g. the record syntax making it impossible to have two fields of the same name within the same module. Also, a lot of the anecdotal evidence is negative (I mean, positivity doesn’t sell, right?).
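The record pitfall referred to here can be demonstrated with a minimal sketch (names are made up for illustration). Each record field generates a top-level accessor function, so two records in one module cannot share a field name unless you enable the later `DuplicateRecordFields` extension:

```haskell
module Main where

-- This pair would NOT compile together in one module without
-- {-# LANGUAGE DuplicateRecordFields #-} (available in newer GHCs),
-- because each field becomes a top-level accessor and the two
-- `name` functions clash:
--
--   data User    = User    { name :: String }
--   data Project = Project { name :: String }
--
-- The common workaround is prefixing fields by hand:
data User    = User    { userName    :: String }
data Project = Project { projectName :: String }

main :: IO ()
main = putStrLn (userName (User "alice")
                 ++ " / "
                 ++ projectName (Project "learnhaskell"))
```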
To add anecdotes: I’ve seen and worked on huge, clean, and well-implemented Ruby projects, doing all kinds of things you wouldn’t expect Ruby to do - up to high-traffic, low-latency servers for ad tracking and video conferencing. All of those share one property: an undogmatic team with a good sense for constructing huge systems (which primarily means: interaction between many components that cannot be typed, e.g. the network). Many of them deviated from the “Rails way” at well-picked places.
I think we are focusing on program properties too much and not enough on systems building as a craft.
I agree that bad projects exist on both sides. A project’s success mostly depends on how hard one can get the developers to work rather than on the quality of their tools. So, IMO, the argument for or against static typing is about making developers’ lives easier, not about the success of a project (not that I’m saying anyone is arguing that; that is just my point of view).
That said, IME, the one benefit of a language with a decent static type system is that it gives developers what a dynamic one does not: a small sliver of guarantees when they refactor. It may not be much, but if you are refactoring code that is a mess and has no tests, it’s something to latch onto, knowing that at least your refactoring produces something of the same type. And in that way, I think it is a benefit to developers.
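That sliver of refactoring guarantee can be sketched concretely (a made-up example, not from any real codebase): wrap a bare `String` in a `newtype`, and every stale call site turns into a compile error instead of a latent runtime bug.

```haskell
module Main where

-- Suppose a legacy function originally identified users by a bare
-- String. Refactoring it to take a dedicated type means the
-- compiler flags every call site still passing a raw String;
-- nothing slips through to runtime.
newtype UserId = UserId String deriving (Eq, Show)

greet :: UserId -> String
greet (UserId uid) = "hello, " ++ uid

main :: IO ()
main = putStrLn (greet (UserId "u42"))
-- putStrLn (greet "u42")  -- rejected by the compiler after the refactor
```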
I think software development is going through an unfortunate anti-intellectualism fad right now. There is a lot of good research on how to design programs and how to develop teams to implement said programs. I believe many people fall back on saying that development is more of an art than a science, and thus we don’t need much rigorous thinking. Maybe this is good because it means a lot of people develop things even if the underlying design is poor. I think quantity is a poor substitute for quality, though.
I don’t think refactoring without tests is a good case for static typing. In the end, tests are very important to ensure the formal correctness of a program taking the requirements into account. If I test +, refactor it, and suddenly 1+1 is 3, I am in the same hellhole, types or not - my project manager will call me stupid. A good test suite makes sure the program still works to specification after refactoring, and types don’t cover your specification. The whole “we don’t need tests” stance is just as much anti-intellectualism as the whole type debate, because it assumes all important parts of a spec can be covered by the compiler (read: the compiler can read your mind).
I don’t think we are going through anti-intellectualism - we (as unscientific programmers) are just horribly bad at seeing the advantages of both approaches, and there are not a lot of people who can competently argue both sides. We are also bad at judging the impact of non-technological components of the software development process (which every scientific paper about how a certain language feature makes development sooo much easier happily brushes aside).
I am not arguing that static typing is bad - it’s a great tool at the right spots - but it’s also not flawless and dynamically typed languages are here for very practical reasons.
I don’t believe there is such a thing as mass idiocy (which ‘anti-intellectualism’ implies).
Static types and tests can be used complementarily (at least until dependent types become cheap and easy; actually, that expense is the whole debate here). In my experience, probably 30-70% of the test logic in programs in dynamic languages is just implementing a bad type system. This gets especially bad when mocks are introduced.
Among the tests that aren’t type-related, most cover only part of the intended behavior, usually because they run up against the difficulty of traversing the entire possible state space.
Purity and types improve both of those situations nearly automatically. Nobody argues that types completely replace testing, instead that they make it easier to test and that fewer tests are required.
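One way to see a type absorbing a whole class of tests (my own toy illustration): in a dynamic language, `head` on a possibly-empty list is typically guarded by tests like “raises on empty input”. Encoding non-emptiness in the type makes that test unnecessary, because the empty case cannot be constructed at all.

```haskell
module Main where

import Data.List.NonEmpty (NonEmpty ((:|)))
import qualified Data.List.NonEmpty as NE

-- NonEmpty (in base) has no empty constructor, so `firstItem`
-- is total: the "what about []?" test case is unwritable.
firstItem :: NonEmpty a -> a
firstItem = NE.head

main :: IO ()
main = print (firstItem (1 :| [2, 3]))  -- prints 1
```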
No one is making the argument that strong static typing will eliminate most bugs, especially logical ones. Static typing only eliminates a subset of bugs. There is a set of bugs caught by unit tests and a set caught by static typing. The two sets intersect, but neither is a subset of the other.
@apy was making the argument that static typing simply catches a set of bugs that is not caught in dynamic languages and provides a small sliver of guarantees.
Also, I’m not arguing that one should not test code. What I am saying is that if you inherit a codebase that is a mess and has no tests, a statically typed language gives you at least some correctness coverage, and as you refactor the code, including adding tests, it gives you something to work with to know what’s going on.
Sure, but I think that effect is overrated.
You can also build yourself into a corner with a type system quite nicely, and not being able to cut corners can also be a problem.
In the end, the tests - the things that actually validate the business value of your code - are the more important stuff.
I also don’t think languages should be designed around how salvageable they are.
I think tests, especially high-level business-value ones, are like symptoms of disease: when they indicate trouble, your project is sick.
Types are like a good appetite and a healthy diet. They won’t keep you from ever being sick, but they are relatively cheap and lower your susceptibility across the board.
Turing completeness bites every time language comparisons are made: of course you can achieve things in any language, and of course outside factors influence the outcome.
The benefit of <things like Haskell> is the same as the benefit of using better tools in any endeavor: a constant nudge toward quality and ease. In the hands of a beginner it’s possibly a waste of resources, and a pro will still win when coding in Perl, but a pro using a good tool will be able to perform optimally.
And finally, you can definitely type networked stuff. That’s what schemas are all about. A rich network interface type declaration is exactly what API documentation is.
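A toy sketch of “schema as type” at the network boundary, using only base (the message format and names here are invented for illustration; a real service would use something like a JSON library): raw input is validated once into a typed value, and everything downstream handles only the type.

```haskell
module Main where

import Text.Read (readMaybe)

-- A tiny "schema" for a wire message like "alice,30".
data Person = Person { pname :: String, age :: Int }
  deriving (Show, Eq)

-- Validation happens once, at the boundary. Past this point the
-- rest of the program relies on the Person type; malformed input
-- never escapes as a half-parsed value.
decodePerson :: String -> Maybe Person
decodePerson raw =
  case break (== ',') raw of
    (n, ',' : a) | not (null n) -> Person n <$> readMaybe a
    _                           -> Nothing

main :: IO ()
main = do
  print (decodePerson "alice,30")  -- Just (Person {pname = "alice", age = 30})
  print (decodePerson "garbage")   -- Nothing
```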
All to say, of course Ruby can work and of course it’ll take expert judgement and divergence from “best practices”, but I don’t think that’s the argument being had. It’s more like everyone is complaining because they play at a tennis club which requires they use baseball bats.
A schema is not a guarantee in the sense that most type systems provide one, and I can certainly apply schemas in dynamic languages. What I meant is the following: at runtime, such a system cannot guarantee that the data read from the network will actually fit. In a sane runtime, a program can at least check that expectation.
I fundamentally criticize the notion of “better”. Haskell gives me a lot of guarantees, but asks for quite some ceremony in return. I also don’t want to say that Haskell is bad - far from it, I love it.
What I am missing are comparisons on the level of “Static Typing Where Possible, Dynamic Typing When Needed” that weigh the two approaches without broad strokes.
A schema isn’t a guarantee, but it is a type. You just operate in a system where types aren’t respected, unless you control both ends and can stop that. To be more clear: “types” are only distinguishable in systems with both statics and dynamics. Data is purely static, so whether a schema acts as a type depends on the larger system it’s embedded in. It’s up to the system designers to make those guarantees and profit from them.
I think that Haskell is better in the sense of “whenever you can afford it, static types are better”. In my experience, dynamic types aren’t so much cheaper either: you end up with about half the “checking” expense, but it’s all performed ad hoc in comments, informal contracts, and programmer sweat. In this mode, half the checking costs three times as much. It just feels cheaper to begin with.
Activation energy is probably the biggest detriment to static types. That is definitely the case. I think my tennis racket analogy still applies though.
(Also, while I buy Meijer’s premise, I’m pretty sure I don’t follow his conclusion. He’s not arguing powerfully, just rallying C# allies. The tradeoff between static and dynamic can already be made easily within a static system: you just use larger types. It’s only without a static system at all (or with a broken one) that you can’t add smaller types as available or needed.)
This seems like more of a complaint about monkey-patching than about Ruby or dynamic types. Not that I love Ruby or dynamic typing, but almost all of the problems he lists come from monkey-patching (or just plain bad code).