This is an amazing piece of engineering work. The language-integrated ability to discharge proofs of the integrity of cryptographic protocols to SMT solvers is way ahead of its time.
Here’s an example of how such a tool is integrated in an overall flow for high-assurance security:
http://www.ccs.neu.edu/home/pete/acl206/slides/hardin.pdf
I chose this one since it takes verification all the way down to CPU microcode using AAMP7G: a CPU mathematically verified to be correct and to enforce separation a la seL4. At the other end, Rockwell does model-driven development with Simulink or Stateflow, converted to provers for design correctness, then SPARK Ada for implementation correctness, and then AAMP7G for correct execution. Correctness flows from requirements in a human-inspected model down to microcode.
There’s really only one sign that you’re in a feature factory: the pursuit of feature count outweighs the design of the software itself.
If you have an environment where business people come to you with the set-theoretic union of every possible end-user feature and then turn that into a timeline, you’ll end up with an incoherent wreck whose only ‘feature’ is being rewritten every few years, cycling through engineers who eventually break out of the cycle of suffering and quit.
There’s a much more interesting story about having a global network of machines with the ability to deploy executable code whose inputs and outputs are kept in sync via a consensus protocol. That’s really the interesting idea here, detached from the cryptocurrency world.
Unfortunately there are deep foundational problems with this ecosystem, mostly due to the fact that the code being deployed on the network is brittle and not amenable to analysis. Nobody can really answer simple questions about state transitions in these contracts, or whether they end up in a broken state with funds misrouted or drained. Until there’s a stronger claim than “trust me, this contract works as I claim it works,” it’s really kind of the wild west.
This transaction model based on Javascript string comparison is completely and utterly insane. If you use this, you deserve whatever happens to your data.
Looking for a “Haskell job” is the wrong mindset. Look for problems that are amenable to Haskell’s strengths, and then find work on those problems with people who are open to using solutions that demonstrably exceed the project’s needs.
The problem I’ve personally had hiring Haskellers is that there are too many hobbyists who try to turn work projects into experiments in the latest type-level programming technique, and end up burning through massive amounts of time and capital trying to solve problems that don’t really add value to the business.
No, you really shouldn’t target Rust unless you happen to be writing a language that is nearly semantically identical to Rust. Consider, for instance, how you’d bolt a non-standard calling convention (register pinning), like LLVM supports, onto Rust. You’re basically locked into the high-level semantics that Rust chose, which are essentially call/ret with its specific stack setup. LLVM is a much more malleable medium to target for languages of many different sorts because it makes far fewer assumptions; that’s why it’s such a great target.
This is extremely true, but also a specific case of a more general problem that people often seem to overlook: to compile language X to intermediate representation Y, Y must be able to efficiently implement all the concepts in X. LLVM IR keeps getting more features over time, in part to represent concepts that exist in certain source languages but cannot be efficiently (or at all) represented in LLVM IR without changes.
Languages don’t generally make good general-purpose IRs unless they’re designed for that (and get beaten on for years until they have the features people need). Sometimes people make new IRs for their own purposes because LLVM’s really isn’t a good mapping to what they need – but that new IR is likely only good for that purpose, because that’s what it was made to do. It’s not “better” than LLVM – just (usually) more specialized and different.
(The inverse is true too: LLVM IR has a lot of implicit assumptions, e.g. in its memory model, arithmetic system, and so on, that roughly match a C-like language model. And if you don’t want those… it can be hard to work around them!)
Highly-skilled people that I respect just got really angry and irrational and basically refused to even try to learn Haskell, which resulted in a culture divide in the company.
This is kind of a depressing story, by every account this team did everything right. They rationally evaluated their options, ended up building solid reliable software that added value to the company, and did internal training for the new tools to ensure it could be maintained. I really question the judgement of the management that divided the company over what was seemingly a solid solution, and I don’t blame the OP for leaving.
Why bother calling our profession “software engineering” if weird superstitions and “feelings” about languages trump empiricism and measurement?
Highly-skilled people that I respect just got really angry and irrational and basically refused to even try to learn Haskell, which resulted in a culture divide in the company.
I’ve generally found most tech choices are driven by pure emotion.
I’ve found a vanishingly small number of technical decisions derived from some rational set of principles in the ~25 years that I’ve been doing this work for money.
One thing that I’ve found, unfortunately, with high-productivity languages is that there’s a sort of political Amdahl’s Law. Instead of it being the serial component of a program that holds up parallelism, the political component of your workflow can dominate your time once you make the coding aspect efficient. The negative side effect of making the work quicker, in other words, is that a higher proportion of your time is spent on political bullshit, such as justifying work. If the people you work for aren’t capable of valuing the technical excellence that such languages facilitate, it can just make your life a lot worse.
The truth is that improving efficiency isn’t always a good thing. You have to be able to trust your employer. A good employer will give you more autonomy and let you take on more ambitious technical projects. An evil employer will say, “This is great, now I need three fewer of you.”
Now that I’m more of a manager/architect who also codes, I’m less zealous about languages. If the other programmers want to use Haskell (which I also prefer) then it’s my job to make that possible. If they want to use Python, that’s more than fine with me. I’ll write Python, then. Even Java is OK if they have a damn good reason.
People build their professional identities upon their tools and platforms, which is in all honesty an absolutely stupid way to approach things (but understandable, given the investment involved). So when you propose switching out your stack or your programming language, it’s perceived as a personal attack on the engineer’s competence or professional self-worth. (I am aware of such an actual situation at a certain large corporation involving certain developers and certain front-end stacks.)
People build their professional identities upon their tools and platforms, which is in all honesty an absolutely stupid way to approach things (but understandable, given the investment involved).
This is absolutely true, and you’re utterly right to call it stupid.
I think that the short-termist and ageist culture of corporate programming is largely to blame. People no longer identify as computer scientists or problem solvers, but as X programmers. It has become this high-stakes game of choosing tools (a) where one can contribute to a corporate codebase right away, make a quick impression, and get pulled on to a leadership track in the first 3 months as opposed to never, and (b) that seem, at the time, to have a long-term future in the corporate world.
When companies invested in people, and when you didn’t need to make a strong, “10x”, impression in your first 3 months to have a career at a place, there was less of a need to brand oneself based on tooling choices or based on silly silos like “data science” (which is mostly watered-down machine learning).
I’m within a few days of turning 33, which is ancient by corporate programming standards, and I’ve worked with enough different tool sets to get a sense of the recurring themes. I find it a lot more useful and rewarding to think of myself as a computer scientist and problem-solver who can pick up any tool than as an XYZ developer. That, however, seems to be a luxury that comes with professional standing, insofar as I’d no longer take (and, because of my “advanced” age, probably not even be able to get, even if I needed a throwaway job to fill an income gap) the kind of job where I have to justify work in 2-week increments. If I was forced to play that horrid game, I’d do the same thing as everyone else and invest more energy into the tool-selection process than the work itself.
I’m curious about the halcyon days of corporate culture you are referring to. When was that? For example, “The Soul of a New Machine” - a book about a team developing a minicomputer more than 35 years ago - shows a picture of the industry that’s remarkably similar to what we see today. My impression is that corporate culture hasn’t really changed in at least the last 40 years.
The worst employers probably were the same now as then. The distributions are different, with a lot more bad examples and very few good ones.
People always complained about short-term outlooks and meddling management, but it used to be that they complained about 5-year focus as opposed to the next-quarter mentality, and that management could meddle but it was limited compared to now. You didn’t have to stick with a bad employer, in tech, back then. You still don’t, now, but the odds are that if you roll the dice, it’s not going to be better… and you’re going to have one more job hop to explain in the future.
I worked at IMVU a little before this. It was a good team and definitely some strong personalities. Large swaths of the PHP code at that point were… well “the horror” is a good way to describe it. I too am curious what would have happened with Java.
Not many people argue that society shouldn’t allocate a certain amount of capital to invest in research on the 10-100 year horizon. Nation states are the only entities that can really take that risk, and history has shown that it pays off. The benefits of long-term fundamental research are not really something most rational people debate; the only thing we do debate is how much capital nation states should allocate and to whom they should allocate it.
Over the years, I’ve seen people evaluate research by how closely the paper translates into a startup idea. I’ve seen people evaluate research by how easily the work can get media attention.
This is where the argument loses people like me who, in industry, are constantly forced to justify our work to investors and shareholders, because that’s how our system works. If you can’t justify your work to the funding agencies, then you have to adjust your ideas to be more relevant to the people who will allocate capital (like industry). That’s not ideal, but that’s life in a society based on capitalism.
In an ideal post-scarcity world, we’d be able to give every person enough resources to go off and build projects that benefit society on their own terms. But we don’t live in that world and that means tactical allocations of resources to people who can justify returns on investment, be they societal or economic.
I don’t know if it’s fair to claim this is due to capitalism. I’m not sure decision makers in a communist system would be any more likely to allocate funding for crazy, out-in-left-field ideas with no clear real-world application.
There’s a never ending supply of ideas that will never result in any practical real world benefit to anybody, or will never go anywhere no matter how much time and money get thrown at them.
People are free to do all the research they want, but if they’re depending on other people’s resources they’ll always have to justify why the resources should be spent on that project instead of another one.
The article makes a big leap between “shippable products” and the much simpler goal of just having code that is available and compilable. A shippable product, by industry standards, means at the very minimum that the code is on some public VCS, permissively licensed, documented, has a public bug tracker, and is tested on major platforms. That’s not what’s expected of researchers.
For code distributed for research, it is very common for it not even to be a) public or b) accompanied by directions on how to build it. What’s needed is not a well-supported product, just the minimum viable effort to reproduce it. And in this day and age of free code hosting, free continuous integration, and an abundance of open source, if academics aren’t providing minimal reproducible artifacts, I have little sympathy when their work gets judged as impractical and tossed out.
As I’ve moved away from academic work, I’m quite bitter about how many academics are readily willing to complain about the lack of industry uptake but are not willing to spend 30 minutes to learn Git or write a short list of install directions for a README file. There’s a vast asymmetry in the time it takes to reproduce builds: 10 minutes on your end can literally save 10,000 community hours.
Polyvariadic functions are pretty ugly without language support. The next “big” functional programming language should have a better story for named arguments, because while curried positional arguments are suitable for a large class of lawful functions, they’re really painful for dealing with things like unix or filesystem functions, which are mostly large collections of variadic optional named arguments. Having first-class records in the language would solve most of this.
I stumbled upon this: https://nikita-volkov.github.io/record/
I assume by first class you mean records as a type level construct.
Do you mean stuff like OCaml’s polymorphic variants? Polymorphic records would be nice.
EDIT: Actually, I suppose OCaml’s objects and row polymorphism are a kind of heavyweight first class records.
I don’t think he means first class records. Or maybe he does. A record is something that supports named fields with no namespace clashing.
What I think he is specifically talking about, though, is SML style, where you pass a record into a function to get names associated to arguments.
makePerson { name = "Bill", age = 21 }
Would be an example usage. By passing a data structure with named fields you get names attached to arguments, gratis.
This is a really great essay on the whole, but I have to call the OP out for this:
Haskell makes abstraction so cheap that people get out of hand. I know that I’m guilty of this.
A typical progression of a Haskell programmer goes something like this:
Week 1: “How r monad formed?”
Week 2: “What is the difference between Applicative+Category and Arrow?”
Week 3: “I’m beginning a PhD in polymorphic type family recursion constraints”
Week 4: “I created my own language because Haskell’s not powerful enough”
That’s ridiculous. It’s statements like this that create the impression that Haskell is only for super geniuses. They’re also painful to read for the “silent majority” of smart programmers who have impostor syndrome. (Unfortunately, our industry tends, both in engineering and management, to have the Russellian problem of the best people being full of doubt and the worst being full of certainty.) It is not normal, nor should it be expected, to understand Applicative, Category, and Arrow in any meaningful way after two weeks. One should certainly not give up if it takes a while to understand these concepts and why they are important. Frequently, learning them requires approaching them from multiple different angles, and even the best tutorials and exercises seem to be single-approach-based, so it often requires finding your own resources and it’s very individual what works and what doesn’t.
At risk of sounding like a douchebag, Haskell was hard to learn even for me. (It’s probably easier now, because there are better resources out there.) Was it utterly worth it? Yes, absolutely, hundreds of times over. Did I “get” monads immediately? No. It took a lot of reading and work and playing with the language. I will give Haskell this, without reservation, though: the effort is worth it. That’s not true of a lot of the other stuff that people have to absorb to keep afloat in this industry.
In defense of Gabriel, I think the timeline he mentioned was just rhetorical, and the gist of what he was saying was orthogonal: that it’s easy to go overboard on abstraction and that Haskell encourages this, not that everyone should literally understand the Category typeclass in “Week 3”.
Basically it’s impossible to get anything into the browsers that doesn’t serve the economic interests of the browser vendors. It’s always going to be more cost effective to just route around the problems using the existing technologies.
At some point just before the turn of this century, the culture in Silicon Valley shifted from being what it was (a quirky, pay-it-forward culture) toward a resource-extraction culture. Most resource extraction cultures emerged around physical commodities (e.g. oil in the Gulf States) but this one taps a different resource: the earnestness of the American middle class. People like the OP go into the tech industry (and often move to California) because they’ve been told that “it’s different” over there– that hard work is rewarded, that smart people are venerated rather than marginalized, and that these tech companies have something different from the manage-or-be-managed culture of the typical corporate grind. They read Paul Graham, failing to realize that the man hasn’t accomplished much in the past 20 years other than to monetize the reputation that comes from a mid-90s success and a couple good Lisp books, and drink enough Kool-Aid to last through three or four jobs before they realize that they’ve been sold a lie.
Then they hit middle age, and realize that the managers have all become executives, making high 6- and 7-figure salaries for jobs that involve no real work and minimal accountability, and they’ve all become… aging ticket jockeys, thanks to “Agile” and open-plan offices and the low-status, macho-subordinate culture that has infested programming. Sure, they know a lot about how to design software after 10, 20, or 30 years, but that doesn’t matter in a world where, if you didn’t get the title, you didn’t do it… and management titles are valued most of all because the people who make those decisions… (wait for it…) are managers.
It’s hard to say what “the solution” is. The indicated solution is to push a bunch of highly intelligent people to compete for management jobs that they don’t really want. (I don’t think that the OP wanted to be a manager. He just wanted the credibility that nobody ever told him that only managers get.) That might be better for the individual, but it’s bad for society. We lose engineering talent, we increase political competition for the good jobs, and we move further toward being a second-rate society governed by bullshitting rainmakers rather than real people doing real work. In which event, China will actually kick the shit out of us and Donald Trump, Jr. will win the 2024 presidential election in a landslide.
In that light, I think it’s incumbent on us as a generation to decide what we want software engineering to be. If we want it to be a stupid young man’s game [1] that people play for 5 years in exchange for lottery tickets called startup options, and to have a world in which skill is devalued and mediocrity in products is the norm, we can continue with the current path. It’s clearly making a lot of money for some people. On the other hand, if we want to be taken seriously as a profession, we have to get organized, we have to fight this nonsense (“Agile” and open-plan offices and the halfway-house culture for kids who are emotionally still attached to college life) and get serious about being treated like adults. I think the best model might be to institute something like the actuarial exams, because while I absolutely hate the idea of making software require formal education in the form of expensive institutional degrees, I do think we need to differentiate ourselves from the unqualified 19-year-olds who “write code” from their mothers' basements and the long-ago-checked-out perma-juniors (i.e. the people for whom Agile/Scrum is intended).
[1] Note that both parses work. {Stupid young man}'s game and Stupid {young man's game}.
I once had the opportunity to review a manager’s resume. Full of “built this, built that.” I thought, “cool, guy did some stuff but he’s grown tired of life in the trenches.” But then it turns out he was just the manager at the time. Didn’t actually build anything. Only watched people build things. I thought this was pretty deceptive, but then my manager explained that’s how all manager resumes look. Never really explain what they did, only focused on what the team did. It was disappointing.
What you’re saying is pretty much the observation most people make after working in certain parts of the industry for a few years. There’s a lot of systemic corruption.
You go out to the west coast and you see these companies packing engineers like battery chickens into open office plans, offering them something like 0.01% equity in some risky venture that will toss them out like waste a week before their option cliff and replace them with another naive 20-something who will do the same, and the cycle continues. It’s no wonder all the valley companies prefer younger people; they’re much easier to exploit. The Silicon Valley scene is really corrupt, and most people only realize this after they’ve been in the machine for a few years.
If we’re going to move forward as a profession, software engineering really needs to become a viable career path that you can feel safe devoting 10-30 years of your life to without being marginalized by this kind of corruption.
I’m writing a blog post (yeah, I decided to get back into that game, although with reduced time expenditure) on it right now. ETA: link to said post here.
I agree with most of what you write, but you should leave out the nation-state stuff. China is no less, and in many cases quite a bit more, governed in both large and small by bullshitting rainmakers. Their engineering practices are no more evolved. Their technical people are, in fact, on the same side of the same boat as developers in the US.
What you say is true of Asian corporate/management culture. That illness is worldwide. That said, it wouldn’t surprise me if China (or, perhaps, Hong Kong or Taiwan or Singapore) managed to shuck this off faster than we do, if only because they’re hungrier and more willing to change.
It is correct that, as of 2016, the U.S. is a better place to be a software engineer than China. I don’t see that as a static fact, necessarily.
The article is a bit hyperbolic, but it’s hard to work in tech and not notice a really deep form of systemic corruption in the startup space. A lot of people have written about this topic, but it’s worth stating that a lot of current trends in building companies really exploit young programmers and leave them with very little to show for many lost years. It’s sad because it doesn’t have to be this way.
This is basically like the tabloid version of tech blogging. There are plenty of actual criticisms one could make of Haskell and GHC, but this is just hyperbole and sensationalism.
Disagree with the conclusion; the answer is much simpler. As a rule, if a project isn’t documented, just assume it probably isn’t suitable for public use and certainly isn’t suitable for production use.
GitHub exists for sharing, so by all means put code up in any form for others; just don’t put it, undocumented, into a package manager like Hackage or PyPI where it can be subtly pulled in. That is the real harm to communities.
My question, though, is: what do you hope to gain by sharing? Many people actually post these projects with the intent that people will use them, but then offer little or no documentation. Even many widely used projects have scant documentation (yes, few have none at all). In fact, the basis for the article came out of my own frustrations trying to use a number of open source projects recently in preparation for a conference presentation, many of which were widely used (and many users of which shared my frustrations).
I keep virtually all of my code on GitHub; that way, I have a backup. What few non-open-source things I do are in private repositories.
Or if I have multiple computers, it lets me share code between them without having to do some kind of obnoxious syncing.
A ton of people have dotfiles repos. No, my dotfiles are not production ready. No, I’m not going to mark my dotfiles as not ready for production.
Right. I think a lot of people do. What I suggested though is that you indicate with some sort of disclaimer that this is personal or experimental and you do not intend to support, maintain or document it. Makes it quick and easy for you and quick and easy for a consumer to make an informed choice about using it.
This is always true with open source, no matter what. We like to pretend that it’s not true, but at the end of the day:
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
There are projects that I’ve been the maintainer of with tens of thousands of users, and then life happened.
There are all kinds of other things that point toward this, though: only one contributor, very few commits, no documentation. That says it way more effectively than any kind of disclaimer I could make.
Yeah. We’ll know software engineering has made progress if we ever get to a point where we can safely omit that wording. It’s hard to imagine, right now, even with formal verification.
It’s hard to know or predict what someone will do with a piece of software. There are a finite number of requirements for a screw you buy at the hardware store. If your shed collapses because you didn’t use enough screws, that’s obviously your fault. If your website collapses because you used the wrong template engine, well how could you have known that? Obviously time to blame the author.
There are plenty of examples where this simply isn’t true (in fact, the author clearly intended it to be shared, but didn’t bother doing any real documentation). Thus the impetus for the article.
(in fact, the author clearly intended it to be shared, but didn’t bother doing any real documentation)
This is saying that it’s intended to be shared. But in the end, it’s still not offered with any kind of warranty; I can release a library with no documentation if I don’t want to write documentation.
Documentation decreases the barrier to entry to using a particular piece of software. Zero documentation does not necessarily imply an infinitely high barrier to entry. If someone is motivated enough, the lack of documentation may not prevent them from reaping the benefits of the code that was shared.
I get that the world would be a much nicer place if we didn’t share things that were significantly lacking in quality. We should definitely continue to advocate in favor of documenting your code, even if you don’t share it. But it’s the wrong thing to should-people-to-death for. The better thing to optimize for, IMO, is to teach others how to identify whether a project is worth using or not.
Because other people can reference it if their time is not valuable and they’re willing to put the time in to read the source. Code gets open sourced for a lot of reasons, and many of them are not about making something that’s immediately usable in a commercial setting. How people donate their time is not something you can expect to control.
Thus the above advice: if it’s not documented and you’re on the clock, honestly, just assume the library doesn’t exist.
There are plenty of employers who look at your github activity as a quick gauge on your code/activity level outside of work or classes. Not necessarily a fan of the practice, but it is a real reason people put up code that’s not for widespread consumption.
Many users shared your frustrations, but they’re still users. Their frustrations with the project must be less than whatever frustrations drove them to use someone’s undocumented code. If enough people find this code useful, maybe they’ll consider contributing documentation back to the code base as they figure it out. If the author sees this, they may realize that if people find the undocumented draft project useful, that maybe it’s worth some effort to improve it. Or the author doesn’t care, someone forks the project, and it takes off.
There are many paths to useful open-source projects, and they don’t all require the author to spend an inordinate amount of time documenting every feature for a project nobody may ever see. The onus on deciding how to put together your project is still on you, not the authors of every piece of open-source code that might be relevant to your project.
You think people publishing code are doing it for you, and want to chastise them for not bringing the quality to your standard?
I think it’s much more sensible to simply assume that they’re not doing it for you, but for some other reason. That is to say that I don’t think people post these projects with the intent that other people will use them, but for some other reason. Some do it because it is a convenient backup, and others because it’s the minimum needed to get a contribution, but I don’t know anyone who does it for other people.
And yet: I do want projects to have better documentation. I think a documentation standard is more valuable than a coding standard: We are writing for humans; the computer does what we say, but we write to express what we mean so that human beings can fix our mistakes in stating it correctly. To this end, I recommend that programmers write documentation first, and then implement the documentation, and I consider a programmer who cannot document his software to not be a very good programmer. I just don’t think this blog posting is how we get there.
No, more code reuse is what we should strive for. Having every programmer constantly reimplementing the same boilerplate logic again and again is unsustainable. If you constantly repeat yourself, then you’ve deprived yourself of the thing that got us all into programming in the first place: automating away repetitive tasks.
Jonathan Blow claims that code reuse is not necessarily good, because we don’t know how to build things on top of each other.
We do if we build software around strong types and algebraic laws. JavaScript is just terribly uncompositional because you have to use tests to verify that things compose as you’d expect.
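One concrete example of an algebraic law you can lean on in Haskell is the functor composition law, `fmap f . fmap g == fmap (f . g)`: it holds for any lawful `Functor`, so composition behaves predictably without a test suite having to rediscover it. A minimal spot-check on lists:

```haskell
-- Composing two fmaps over a list...
lhs :: [Int]
lhs = (fmap (+1) . fmap (*2)) [1, 2, 3]

-- ...is the same as one fmap of the composed function (the functor law).
rhs :: [Int]
rhs = fmap ((+1) . (*2)) [1, 2, 3]

main :: IO ()
main = print (lhs == rhs)  -- prints True
```

The point is not this one check, but that the law is guaranteed for every lawful instance, which is what makes refactoring between the two forms safe.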
The point here is that there’s a tradeoff. More code reuse is good, but using dependency management to get it is bad. There’s a line of simplicity below which the benefit of code reuse doesn’t outweigh the cost of the dependency management to achieve it.
The most recent tongue-in-cheek example of this I’ve seen is https://github.com/jezen/is-thirteen, which is obviously not an appropriate dependency to pull in. (It appears to have had some feature-creep in the past day, making it slightly less deadpan.)
@sdiehl: The most general definition revolves around first class functions, so I let it be that general. ‘Intuitiveness of syntax’ is very subjective, but I’m keeping that as a caveat. This is more to see what personal reasons people have for picking a language. So Erlang can seem to have a more intuitive syntax to one person, and Agda might to another person, but I’m trying to gauge how much weight this (vague) sense of intuitiveness has in people choosing between languages.
It’s the most general, but it’s also the most useless, as nearly every language has first-class functions. By this chosen definition, Fortran is a functional programming language.
I would phrase that more along the lines of Fortran 95, for instance, technically supporting pure functions and the FP paradigm. Going from that to calling the language a functional language seems like it requires a bit more - and I’m uncertain there. It’s possible to use a functional programming style in languages that are not considered functional languages conventionally, so I’ve left it a little open-ended. I’d appreciate suggestions for additions or modifications to the survey questions/options for a follow-up.
Polling is hard, but a definition of “functional programming” should be a prerequisite before trying to do a study on the term. For example, Erlang and Agda are both often referred to as functional languages but share almost nothing in common semantically. Trying to compare the two based on a vague criterion like ‘intuitiveness of syntax’ is just going to produce noise.
I would love to see this book completed. I think it’d be a great service to the Haskell community.
Having spent the last 2 years starting a (Haskell) company, my time for open source is not abundant. If things go well I’ll certainly pick it up again though.
I’m with him on this. I’ve just read the first 4 or 5 chapters, but I’m planning to eventually sit down and follow it through, implementing everything. If there is anything we can do (as a community, I mean) to help with it, say so! At the very least I can help by reporting typos :]
That’s completely understandable. I hope the company’s doing well.
No rush, we can wait :)
I actually binge-read/reproduced Write You a Haskell over the last three days.
I really enjoyed it, thanks for writing it!
Wonderful, makes me happy to hear people get some value out of the first manuscript.
Yep, it’s quite cool; I learnt a lot. Just being selfish in saying it would be great to have it completed. But I also understand you’re starting a business and have commitments. So no worries, thanks for all the fish so far!