“If you don’t realize what this is for, by now, then it is probably not meant for you.”
This is true, but I'm still curious about it. Could someone please summarize what this is for? Thanks!
Hello, and thanks for the interest. One application of such a feature is template metaprogramming that generates code for parsers; see projects like PEGTL. Other applications include things like type-safe printf and DSL embedding; see Sinkovics's academic work on this kind of string use. It is true that I need to write a better readme, but I wasn't expecting the interest it has drawn in certain circles; I mostly authored this because I needed the standalone functionality, in a very tight package, for my own work. A technical analysis is available inside the header.
In essence, it takes a string literal (or an array with constant-expression semantics representing a string literal) and transforms it into a single type by instantiating a template whose char-typed non-type template parameters are the characters of the string. Now imagine being able to parse that at compile time and generate code according to a "language" you yourself designed through TMP.
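The comment above describes a C++ technique, but a loosely analogous idea (this is my analogy, not the library's own code) can be sketched in Haskell, where DataKinds promotes a string literal to a type of kind Symbol, so different literals become different types:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE ScopedTypeVariables #-}
module Main where

import Data.Proxy (Proxy (..))
import GHC.TypeLits (KnownSymbol, Symbol, symbolVal)

-- A phantom-typed wrapper: the string exists only at the type level,
-- so Tagged "meters" Int and Tagged "seconds" Int are distinct types.
newtype Tagged (s :: Symbol) a = Tagged a

-- Reflect the type-level string back to a runtime value,
-- demonstrating that the literal really is part of the type.
tagOf :: forall s a. KnownSymbol s => Tagged s a -> String
tagOf _ = symbolVal (Proxy :: Proxy s)

main :: IO ()
main = do
  let distance = Tagged 42 :: Tagged "meters" Int
  putStrLn (tagOf distance)  -- prints "meters"
```

This only shows the "string literal as a type" half; the C++ approach additionally decomposes the string into individual char parameters so TMP can pattern-match on them.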
As somebody who loses hours per workday to fixing problems caused by not having types, I am going to disagree: language is important.
The author of the post tried to dive into learning Haskell, without asking for help or doing any exercises, by writing a time-series database. I asked him to at least try asking a question on the mailing list because I believed he'd get good help there; he refused to even try. His attitude was that either he could do it without help or he couldn't do it at all.
I have to wonder if modern civilization (bridges, video games, clean water) would exist if everybody had that attitude.
This is a pretty typical flame-out that occurs when you attempt a complicated project without having learnt the language first. This article is me exactly 3 years ago, after attempting something similar without having learnt Haskell first.
You have to humble yourself, do the work, learn the language. Then tackle a project.
I didn’t want to post anything like that into the blog post, but since you’ve started going personal, I have to defend myself.
a) When I started, I did search for help. But once the most basic concepts were clear to me, I was left on my own the majority of the time. I can bring up all the occasions when I asked questions about things like thunk leaks or monadic order swapping. How many of them were answered? Correct: zero.
Once, a very kind person helped me understand monad transformers. I will never forget his kindness.
b) I refuse to try posting things to the mailing lists because there were multiple times when people from the Haskell community responded to my questions with a simple "this is functorial, everyone should know that", or "you should be smart enough to figure that out", and other humiliating things.
c) There is a false assumption that I am having problems with Haskell. I do not have any problems with Haskell. Assuming that people who do not like something automatically do not understand it is false and misleading. I'm saying that Haskell takes a long time to learn. I did learn it (I read several books and caught up on the theory), and I have no problem programming in it.
I have to wonder if modern civilization (bridges, video games, clean water) would exist if everybody had that attitude.
This is exactly the point I had. You say that Haskell is “next level”. I say no. There’s absolutely nothing special about Haskell. And we all should live with it.
It's very easy to say that what I did was a recipe for failure: https://twitter.com/bitemyapp/status/541962675488452608 But I can't call it a failure, for two reasons:
As for me, even if I had to drop the DB itself, it was a success.
I think his tone is positive, in the sense that, for example, one has to know how an algorithm works in order to implement it in any language.
I do not think that he disagrees that learning the language itself is important since there are languages that allow certain algorithms to be implemented nicely and others that do not. However the true beauty of functional languages is how closely they are related to the abstract concepts behind commonplace constructs we use in different (even imperative) languages.
In that case, languages like Haskell have an advantage over languages that are clumsier in their functional characteristics, so yes, the language is important in that respect. But the notions behind what you do matter more if you want to use the language effectively.
From time to time I come across a lot of positive articles about open source, contributions, and all these nice things. I wonder if anybody has written a list of the things that poison open-source communities, citing multiple different incidents where things went from bad to worse. I would be extremely interested in reading such an article.
I don't know of a specific article, but I've seen a couple of posts, especially from around the Linux kernel, about the bad atmosphere and environment. Personally, I keep a little "treasure chest" of comments from Linus that are extremely poisonous and dismissive.
Personally, I have rarely encountered such incidents, as the communities I'm involved in are mostly extremely nice and helpful. New members mimic the behavior they see. I think it has a lot to do with leadership: leaders get the community they create through their own behavior and the behavior they choose to tolerate.
On the aspect of leadership, I agree. I cannot help feeling that as long as incoming contributions do not change the status quo of the power balance, nobody gets hurt. Otherwise, it is not a game to play.
For what concerns me, I begin to think that focusing on F# for a while has been a good investment in view of these developments.
Every big company needs control of the programming language that, for the most part, makes its things tick. Facebook is no exception to this rule, which is why they are trying to marry their investment in PHP to something they can control better and tailor to their liking. A smart move.
What I'd like to note is that the news Debian has been making lately has more to do with people leaving, or proposing a fork of Debian, than with anything really exciting coming about.
I don’t know why this person is leaving, but as a Debian user, I like that nothing “exciting” is happening. That’s why I use it.
If I wanted bleeding edge, breaking changes, and incompatibility, I wouldn’t be using Debian. There are plenty of distros for that, but I appreciate that Debian is more stable.
Exciting does not mean doing bleeding-edge, broken stuff within Debian stable; it means creating ideas for making Debian better. People leaving or wanting to fork the project means they are not as excited about it as they once were. I think the term is clear now in its context.
I’d think talking directly to the person in question would be the first step. Even before posting it on the internet asking for advice ;)
Maybe the person doesn't know, or you aren't getting the full story. In some cases it might seem like that person is taking full credit when they are actually singing your praises, and that message is just getting squashed by someone else.
Talk to the person, that’s what you should do.
I do not think that asking in our small niche of the internet is a bad idea. Programmers are, by the nature of their trade, a creative workforce, and most of us have been ripped off at least once in our careers. What matters is how you deal with it and what you do afterwards.
You are correct that he should know the full story beforehand. Most of the time it only makes you angrier, though, regardless of who is actually responsible and why. I would move away from an environment where 'credit' issues pop up due to 'misunderstandings'. There are no misunderstandings, and there are no coincidences, in matters of credit.
The advice presented so far is interesting, but here is another take before you decide: is the person doing this better connected than you at your workplace? If so, forget trying to prove anything to that crowd, because nobody will care. People who do this kind of thing regularly have plenty of experience getting away with it. Unfortunately for you, and for any original creative thinker or worker.
Discussing it with the person won't change anything; it will only make you angrier. You have to understand that you cannot make the offender accept his offense through reason and proper argumentation, because he already knows what you are after: he will try to make you angrier and attempt to ridicule you in front of your other colleagues.
The only way to win such a situation is to have undeniable proof that the work in question was completed by you. If that exists and you wish to go nuclear, ask yourself whether a legal avenue would provide a satisfactory resolution; my opinion is that in questions of 'credit', it is not a solution.
My advice would be to change environments as soon as possible. Not that you won't run into the same problem elsewhere, but having been through it once should give you the mental tools to be more cautious next time. You cannot be creative if your work is going to be ripped off, whether because the offender is a notable figure (who can commit any transgression and still be considered a hero) or because you were the "new guy in the room". These things happen.
Be smart; don’t fall prey to emotion because you are never going to win otherwise.
TL;DR: Haskellers aren't condescending, because they don't look down on you. Rather, they treat you like an educated adult fully capable of taking initiative.
Basically she's saying Haskellers are academics. Everything about that describes my experience in grad school. Friendly bunch.
Since the comments mention the Socratic method, that's something that resonates with me: the only time I went to #haskell on IRC with a question, the attempt to use the method didn't feel like being treated as an educated adult capable of taking initiative; it felt condescending, and really, it was a waste of everybody's time. Of course, I can understand if the person made wrong assumptions about my question, but what's wrong with a straight answer? As an adult, I'm perfectly capable of looking up, say, "blargher co-types conjecture", and I don't need hand-holding. My main takeaway was not to ask more questions in #haskell :)
I've had a very similar experience with getting advice off IRC in general: sometimes you get the jerk who just wants to put you down for no reason; sometimes you get a very patient, kind soul who will take half an hour to explain something to you. You just need to grow a thick skin and learn not to be reactive about the jerks.
There’s a high degree of variance with the quality of help you get in #haskell. Sometimes you get Cale…sometimes you don’t get Cale. I’m sure you don’t mean to paint all 1200 residents of #haskell with the same pedagogic brush, but it’s worth making it clear that not every experience is going to be the same. I’d also take the opportunity to highlight that #haskell isn’t singularly devoted to teaching as such, it’s the big tent. People who haven’t otherwise given effective teaching a moment’s thought could end up “giving it a try” and scaring/burning some new person in the process.
The variance in quality and noise problems are why #haskell-beginners exists.
Related post about teaching in #haskell:
I think it is good that there is a big-tent channel for a programming language, and you are right to point out that it is very generic ground. I'd add that even just reading the logs of the meaningful exchanges teaches you a lot, without asking anything. A bit of that, plus lots of papers, books, and practice, will make you worthy of Thor's mjolnir one day. Lastly, not all people learn at the same rate; true fast learners may be as "dysfunctional" as slow learners, so whoever is involved in teaching has to have a lot of patience, good will, and a cheerful mood.
People who don't like you, regardless of whether you are right or wrong in what you say, will find a way to make you "pay" for it, whatever the medium involved. Twitter makes tagline attacks easier, because you have to be brief in countering them, and in the world of technical achievements the real explanation is almost never brief or easy. This is one of the very nasty things I dislike about Twitter, but I think we can manage.
It’s the combination of being unable to convey the tone of your message with the brevity that removes all possibility of explaining the nuances or the context.
I fully agree with you on this, and it is why Twitter can be very dangerous. In his case, I think they are being very unfair to him.
Still, I would not say that it is totally unhelpful or incorrect, so I disagree with the downvotes. Also see http://www.reddit.com/r/haskell/comments/2cv6l4/clojures_transducers_are_perverse_lenses/ for more information. I am beginning to think that whenever somebody posts something that does not appeal to the deepest fans of some other language, it gets a downvote, and meanwhile an upvote from those who do appreciate it. We are not being objective here, are we? Anyway, moving on.
For what it’s worth, I did find this interesting. I upvoted the list of links from the Haskell wiki so I can dig into the meaning behind the post.
The succinctness of the explanation is certainly helpful for people already familiar with Haskell and with Control.Lens.Fold. That should not discredit, of course, any effort to explain such difficult concepts to a wider audience. This is just a general observation; I know that was not your intent.
I did not downvote your post. I also did not upvote it. If you had posted a comment on another thread discussing transducers and mentioned the similarity to Control.Lens.Fold in Haskell and provided a link, I would have certainly upvoted it. You are right, it is helpful to realize there is a relationship to existing concepts when you are trying to understand something.
Thank you for bringing this to my attention.
Regardless of the actual content of the post, I think that we could be seeing more init systems in the near future and I wonder how they will look.
Nice article; it's just that all those blood-transfusion units hanging around in these pictures are kind of creepy.
I seriously do not understand the downvote here, since this is neither trolling nor off-topic. I think the "off-topic" and "troll" downvotes are simply from people ignoring the computer science theory of the last two decades (to say the least). And you can do things like this in Rust, by the way. So this is just rude for the sake of being rude. If you do not understand error and exception handling using monads, that is your problem.
I didn’t downvote you but I can’t say I’m a fan of the way you contributed. Seriously, let’s try to keep this meme shit off lobsters. (Also, it should be “I can has”).
I highly doubt anyone downvoted you because they don't understand error monads, which are obviously a relevant and interesting thing to bring up on this article.
I still find it uncalled for, because I do not perceive it as trolling, nor is there any netiquette on lobste.rs about what one may use humorously, given that no abusive language was used and no demeaning content was implied. Even my explanation got a downvote. The "z" was deliberate.
Regardless, I'll move on and simply not say much anymore. This was supposed not to be Reddit, but I guess disillusionment is important.
Every time somebody tells me how excellent his code is because it has been maintained and updated for years, I look at when it was designed, whether it has been redesigned, and what we gained from that. Unfortunately, you can only do this for projects you are directly involved in as a developer or that you use in your own code. It is scary that we have no idea how many design and implementation errors from decades ago are still lurking around the corner, waiting for us to stumble over them as end users. It is terrifying. The transitive trust model is fundamentally broken: "well, they must know what they are doing if they are credited with being X". Sheer terror.
I’ve tried Haskell a few times now and I really want to like it, but I just can’t stand it.
It seems like creating anything real in Haskell requires doing it twice: once in the pure, lazy, Haskell way, and then a second time using unsafePerformIO, subverting laziness, and replacing all the pure, lazy stdlib data structures with mutable third-party versions, just to get any kind of decent performance.
In theory, I think Haskell is awesome. In practice it just doesn’t live up to my expectations.
requires doing it twice
More the case when you're first learning. With experience you learn to anticipate what you'll want, and the laziness is a non-issue for most Haskell code out there. Much like how, in the languages you already know, you learn patterns and anticipate which approaches apply.
Wholeheartedly worth it for me: the complexity management (it scales to larger projects), runtime efficiency, compiled binaries, the best concurrency kit in our industry, the best type system that isn't costly time-wise, etc.
I came to Haskell from Clojure. I spent more time futzing around with perf and laziness in Clojure than I do Haskell :)
I guess the problem I have is that in most languages, applying known patterns and optimizations doesn’t mean avoiding the features the language is known for.
In Common Lisp, when I need to optimize a function, I don’t rewrite it to be more like C, or stop using some Common Lisp features - I add type declarations, turn on optimizations, and retry. Then maybe I try a different algorithm, or look for ways to cache more stuff, etc. I can have code with good Common Lisp style that is also fast.
This is even more true with C and C++ and some other languages.
In Haskell, the go-to optimization techniques amount to getting rid of most of the stuff that makes Haskell unique. It's too slow, so let's turn off laziness here. Or let's use unsafePerformIO. Or let's replace the built-in strings with a library that uses mutable arrays and unsafePerformIO behind the scenes. The one technique that isn't a cop-out is making things run in parallel, but that's not Haskell-specific. It seems that, in large part, there's good Haskell code, and then there's optimized Haskell that breaks all the regular coding rules. And to avoid getting chastised by code reviewers and people on IRC and the mailing lists, you're only allowed to write the second kind after you've written the first kind and found out it wasn't fast enough.
My point is, it would be nice to see some articles like this where the solution stays within canonical Haskell, because that’s what’s advertised by the Haskell community.
My point is, it would be nice to see some articles like this where the solution stays within canonical Haskell
Using the right data structure for the job is one example of this. (List -> Vector, using real graph library rather than homespun adjacency list, matrix libs)
I don’t think any Haskellers see selective application of strictness or unpacking as not being Haskell. Lazy is the right default, but sometimes strictness is wanted. There’s no uniquely Haskell-y way to write Haskell. Haskell is an ensemble language of multiple paradigms and tools for solving problems. It’s the combination thereof that is Haskell. Consider our concurrency/synchronization primitives. Nobody has anything that rich yet also performant and easy to use.
Speaking personally, I can’t write reliable concurrent software unless I’m using Haskell!
Example. You are trying to write a streaming parser. Lets say you did it with Strings and lazy functions. Probably not great perf-wise. Why?
Well, the laziness might not be appropriate - you want to immediately force the thunks as you process the data. So you want a streaming library. Like conduit or pipes.
You used String. String is a linked list…don’t do that :) - so you use bytestring.
You homespun your own parser lib. Don’t do that. So you use Attoparsec.
Then you realize libraries like this or this do what you wanted to begin with.
Writing fast code in Haskell, for most beginners, is about knowing which libraries and abstractions to use. I don't know anybody who has to write "dons"-style code like you see in the shootout. The numerics stack somebody is writing for Haskell is infinitely more Haskell-y than that.
80% of the perf advice I end up giving people boils down to “stop using List/String (Which is List of Char)”.
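To make that advice concrete, here is a minimal sketch (illustrative function names, not code from the thread): the same digit count over a String, which is a linked list of Char, and over a strict ByteString, which is a packed byte array. bytestring ships with GHC, so this is self-contained.

```haskell
{-# LANGUAGE BangPatterns #-}
module Main where

import qualified Data.ByteString.Char8 as B
import Data.Char (isDigit)
import Data.List (foldl')

-- String version: one heap-allocated cons cell per character.
countDigitsString :: String -> Int
countDigitsString = foldl' step 0
  where step !acc c = if isDigit c then acc + 1 else acc

-- Strict ByteString version: the same logic over a packed byte array,
-- with a strict left fold so no thunks accumulate.
countDigitsBS :: B.ByteString -> Int
countDigitsBS = B.foldl' step 0
  where step !acc c = if isDigit c then acc + 1 else acc

main :: IO ()
main = do
  let input = concat (replicate 1000 "abc123")
  print (countDigitsString input)       -- 3000
  print (countDigitsBS (B.pack input))  -- 3000, with far less allocation
```

The bang patterns force the accumulator at each step, which is the "force the thunks as you process the data" point from the streaming discussion above.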
You know what, I keep realizing that the more things one knows, the more likely one is to drift into "language esotericisms" that newcomers find difficult. There is a benefit to lazy as well as eager evaluation; that's why we have more than one language. Haskell is a very practical starting point for the world of functional programming, even with its pitfalls :)
I've sunk my teeth into Haskell and, coming from Common Lisp, my biggest hurdle so far is: syntax :-)
But I guess it’ll grow on me. It does make picking up the language quite a bit harder though, having to look up every other symbol, so I hope it is worth it in the end.
It has been very much worth it for me (I’m an ex-lisper).
I wrote a guide for learning Haskell, I’d be pleased if it can help at all: https://github.com/bitemyapp/learnhaskell
I saw you mention it before and I’ve been going through it already, thanks!
(I can’t see myself become an ex-Lisper though.)
Based on the obscene design limitations, this is probably more useful http://en.cppreference.com/w/cpp/algorithm/accumulate
If they ever think of expanding it beyond those design limitations, perhaps a woefully designed language like C++ will become slightly better at such things. I am amused that they have started doing these things at the language-feature level. Until then, in my eyes, they are only introducing a shortcut for std::accumulate.
Ah, not general catamorphisms, just a convenient syntax for left and right folds over a few chosen operators.
I think they will eventually have to deal with allowing function identifiers and lambdas in place of the operators, given that C++14 introduced polymorphic lambdas. There also seems to be no syntactic collision in using either a function identifier or a lambda for the fold. Unless they get "scared" of complicating the language by moving into more practical functional-programming territory with such an obvious addition.
The challenge is that this syntax provides no place for the "base" of the fold, except by convention over the set of approved operators.
And still, I wouldn’t call this a catamorphism until we’re talking about general catamorphisms over recursive data types. Right and left folds are ultimately nothing more than particular catamorphisms: ones specialized to the structure of a linked list!
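A small sketch of that point (my illustration): foldr is literally the list catamorphism, replacing (:) with a function and [] with a base value, and the same recipe transfers to any other recursive type.

```haskell
module Main where

-- The list catamorphism: replace (:) with f and [] with z.
-- This is exactly foldr, written out by hand.
listCata :: (a -> b -> b) -> b -> [a] -> b
listCata f z = go
  where
    go []       = z
    go (x : xs) = f x (go xs)

-- The same recipe for a different recursive type: binary trees.
-- Each constructor gets replaced by a corresponding function/value.
data Tree a = Leaf | Node (Tree a) a (Tree a)

treeCata :: b -> (b -> a -> b -> b) -> Tree a -> b
treeCata leaf node = go
  where
    go Leaf         = leaf
    go (Node l x r) = node (go l) x (go r)

main :: IO ()
main = do
  print (listCata (+) 0 [1, 2, 3, 4 :: Int])            -- 10
  let t = Node (Node Leaf 1 Leaf) 2 (Node Leaf 3 Leaf) :: Tree Int
  print (treeCata 0 (\l x r -> l + x + r) t)            -- 6
```

Seen this way, the proposed C++ syntax covers only the listCata shape, and only for a fixed set of operators in place of f.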
I perfectly agree! Whatever they come up with, they will inevitably have to deal with exactly what you are describing. At least they have started focusing on the meaning of the word, and hopefully they will not end up abusing it the way C++ abuses the term "functor"!
I found the subjective and anecdotal evidence in this a little unconvincing. The bravado (“I worked at…”, “I had written…”) also doesn’t inspire much confidence in the author.
Suffers from much the same problem as every rewrite story. They failed, I succeeded, therefore some random decision I made is responsible. I will tell you which decision you should think it was, but not provide sufficient information to verify my assessment. In particular, I will tell you about the stupid decisions made by those other morons, but not explain their reasoning.
100 times this!
Experience inspires more confidence in my heart than theorems do, though of course the author could be making everything up. It’d be nice to have more detail, though.
For example, by “purely functional” Prolog, do they just mean that they didn’t use assert and retract (making it stateless), or did they additionally restrict their use of Prolog predicates to a functional pattern, where backtracking was banned and one of the arguments of each predicate was used as a “return value”, and the others were always bound? If backtracking wasn’t banned, was cut banned? How about negation?
In its current form, this article simply says, “Wakalixes actually do matter. They actually do allow you to program better, faster and more cleanly.” Reading it will probably not help you to be a better programmer, unless you happen to be committing one of the beginner’s blunders the author calls out in the article, and even in that case it doesn’t tell you what you should be doing instead.
A problem with the credibility of the author’s claims for the wakalixes is that it’s hard to separate claims for the effectiveness of a language, or even a programming paradigm, from the effectiveness of the programmers who were programming in it, and especially the effectiveness of the social environment they’re embedded in.
Presumably if the 250 Java programmers working for the big-six firm couldn’t figure out how to reimplement a neural-network image classifier that one person implemented “on spec” after ten years, it’s not because they weren’t programming functionally — as the author pointed out, you can program functionally in Java; it’s because they weren’t making progress, probably because of mismanagement. (Or it might be because the author was just extremely lucky and chanced on such a great set of system parameters that 2500 person-years wasn’t sufficient to chance on it again, but I doubt it.) In ten years you can learn a lot about neural networks and image processing, and you can try a lot of different things. With a reasonable neural-network toolkit, which should take less than a year to build, you should be able to try about five or six million carefully-thought-out programming experiments.
A more likely culprit there is that they tried to plan out the solution of a problem that they didn't know how to solve, which is a mistake people often make even when they know better. As our own michaelochurch said, "Industrial management has a 200-year history of successfully adding value by reducing variance, because in a concave world, low variance and high output go together. In a convex world (as in software) it's the opposite: the least variance is in the territory where you don't want to be anyway. Convexity is a massive game-changer. It renders the old, control-based, management regime worse than useless; in fact, it becomes counterproductive."
What would that kind of mismanagement look like in this case? Mismanagement by variance reduction here would probably involve optimizing the process to improve the chance that any given attempt would succeed, by putting lots of programmers on it and giving them lots of time, with the consequence that maybe in ten years they investigated three or four things that didn’t work, instead of five million.
Trying five million experiments isn’t enough, of course. You have to focus your efforts on things that might work, and learn as much as possible from each experiment. But speeding up the process of trial and error is a huge advantage, not just because you get more trials in, but because, due to hyperbolic discounting, the lessons of a quick experiment are much more memorable than the lessons of a slow one, in more or less direct proportion to their speed.
Also, if most of your experiments are going to fail — as they should, to maximize variance and the chance of netting a unicorn — experiments that fail quickly are much less demoralizing than experiments that take a long time to fail. (And of course you want to minimize people’s incentive to whitewash the results. Especially people with high prestige. For example, failed experiments often drag on for years because managers are afraid of losing headcount.)
As this process continues, how do you preferentially focus resources on the more promising directions of exploration while continuing to devote substantial effort to the dark horses? In a sense, the traditional CYA management approach errs in the direction of overfocusing on the most promising candidates. It turns out there is a bunch of applicable research: some of it, like multi-armed bandit algorithms, is actually being applied at some companies to the problem of managing R&D and has a robust management-research literature, while other parts, like A* search, are overlooked, as far as I can tell.
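As a toy illustration of the bandit idea (entirely my sketch; the fixed reward streams stand in for real project outcomes), UCB1 concentrates most trials on the arm with the best observed payoff while still revisiting the long shots:

```haskell
module Main where

import Data.List (maximumBy)
import Data.Ord (comparing)

-- One "arm" (project direction): pull count, total reward so far,
-- and a fixed stream of future rewards (deterministic for the demo).
data Arm = Arm { pulls :: Int, total :: Double, stream :: [Double] }

-- UCB1 score: observed mean plus an exploration bonus that shrinks
-- as an arm accumulates pulls. Unpulled arms score infinity, so
-- every direction gets tried at least once.
ucbScore :: Int -> Arm -> Double
ucbScore t (Arm n s _)
  | n == 0    = 1 / 0
  | otherwise = s / fromIntegral n
              + sqrt (2 * log (fromIntegral t) / fromIntegral n)

pull :: Arm -> Arm
pull (Arm n s (r : rs)) = Arm (n + 1) (s + r) rs
pull a                  = a  -- reward stream exhausted: no-op

-- One round: pull the arm with the highest UCB1 score.
step :: (Int, [Arm]) -> (Int, [Arm])
step (t, arms) =
  (t + 1, [ if j == best then pull a else a | (j, a) <- zip [0 ..] arms ])
  where
    best = fst (maximumBy (comparing (ucbScore t . snd)) (zip [0 ..] arms))

run :: Int -> [Arm] -> [Arm]
run n arms = snd (iterate step (1, arms) !! n)

main :: IO ()
main = do
  let final = run 100 [ Arm 0 0 (cycle [0.2])   -- weak direction
                      , Arm 0 0 (cycle [0.8]) ] -- strong direction
  mapM_ (print . pulls) final  -- most of the 100 pulls go to the strong arm
```

The point of the sketch is the shape of the policy, not the numbers: the exploration bonus plays the role of "continuing to devote substantial effort to the dark horses".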
And all of this has only a limited amount to do with your programming paradigm. You can iterate quickly in Java, you can iterate quickly in assembly, and you can iterate quickly in Haskell. The obstacles are different in each case, but you can do it.
I agree with kragen: I’m fine with this sort of argument being based on experience; what else could it possibly be based on?
I have another problem with it. The author claims to have been working in computing for… well, I don’t feel like trying to track down their LinkedIn as they suggest, especially since their name doesn’t appear to be anywhere on the site itself, but presumably “longer than you’ve been alive” is supposed to mean several decades at least. I resent both the assumption and the ageism there, but I suppose that’s irrelevant.
But anyone with that much experience is going to be able to solve much harder problems than people with dramatically less. Even the author themself would have to do serious introspection to have any confidence that their efficacy is due to the choice of paradigm rather than to the experience. And, frankly, I don’t believe it - I’m confident that, all else being equal and apart from any difficulty caused by being annoyed about it, the author could do these dramatic rewrites in any paradigm.
Also, of course, they don’t say anything about how maintainable people found their rewrites, after they’d moved on to the next one. The reduction in number of lines is suggestive, but it seems like we’re supposed to take it for granted that these were substantial improvements, when all that’s really being claimed is that they were successful replacements.
I’m a big fan of functional programming, and actually for a lot of the reasons the author alludes to. They just haven’t demonstrated a connection.
Perhaps more interesting is that someone with several decades of experience never worked on a project that failed…
We learn a lot from failure. Perhaps most importantly, we learn what to learn from our successes. I worry that someone who has never failed doesn’t know why they succeed.
It’s unfortunate that mostly only successes get written up. There’s a lot of selection bias in the stories we read.
In some engineering fields failures do get written up extensively, more than even successes, but in others I agree with your assessment. Failures in aerospace and civil engineering especially get a lot of study, partly because regulations require a detailed investigation, and partly because they’re spectacular enough to captivate public attention. Things like the Challenger explosion, the Tacoma Narrows bridge collapse, Apollo 13, the Titanic, etc. are probably as famous as any successes in those fields, and far more pored over by both scientists and popular documentaries. (Engineering curriculum design includes a lot of this kind of history postmortem study, too.)
Is there a list of canonical interesting failures in computing? The Intel division bug is probably the one I’ve seen mentioned most often in that regard.
There are a few canonical examples of failed software projects I remember, probably from a software engineering course. The definition of failure varies, from just eventually being killed before releasing/being deployed to being obscenely late and over budget, to being deployed but having costly and/or dangerous bugs.
The ones I remember off the top of my head were the Ariane 5 rocket, the THERAC-25 radiation therapy machine, and the Denver airport baggage handling system. Those were all old enough to be in a textbook 15 years ago, though. I wonder if there’s a good collection of newer ones, in particular of the kind that cause failures in large distributed systems.
The big failure I recall is the Chrysler payroll system:
http://c2.com/cgi/wiki?ChryslerComprehensiveCompensation
It was heralded as a king of XP/agile, whatever, and then just dropped into nothingness without a good failure writeup; only that wiki page, which sort of acted as a living document of folks asking "what happened?"
I don't think the CCC failure was particularly unusual; the old figure was that about ⅔ of big software projects like that fail. One unusual thing about it is that, due to the team's focus on incremental delivery, it had already been deployed and was handling a substantial fraction of Chrysler's payroll before being canceled.
I’d feel more comfortable with the article if:
You could read the exact same article from your average OO enterprise veteran technical architect.
I’d also be super interested to see how the author’s co-workers viewed his work. In my experience this type of humblebrag comes from your run-of-the-mill hero developer.
That’s fair. I agree with all of these points.
I agree.
The email address on the contact page suggests the author is Douglas Michael Auclair. In a former life (?) he maintained Marlais, the Dylan interpreter.
He’s here on LinkedIn.
Thanks. sigh To be clear, I do not doubt his anecdotes, as far as they go. It was definitely jarring to realize there was no “about the author” anywhere on the blog, and yet he was making that appeal. But it’s not the kind of thing someone would bother to make up.
Agreed!
I agree with your statement. Is this way of promoting functional programming really needed nowadays? Let's all accept that no matter how many valid and provable arguments you present to an established status quo (the imperative OO C++ community, toxicity and all), the only way you are really going to sway them is by producing code that outperforms their solutions, both in developer scalability and in the actual execution of the resulting binaries. It has been proven that this can happen, so why are we going over this again through personal viewpoints?
The problem we are having with modern functional programming languages is that legacy companies base their success on legacy code written in legacy languages, so they need to keep the counterproductive rhetoric around. This is a very twisted side effect of inertia; we should not feel compelled to reply every single time anymore.
edit: typos, more clarity :)
Unfortunately, I find that language/approach evangelism is hard to do in a principled, scientific way, because in most cases it fundamentally isn't science. It's business. And if you stick to conservative arguments supported by evidence (evidence from experiments you'll almost never be allowed the time to perform, so you'll have to use what's already on the ground), then you're often going to lose against an opposition inclined to dishonest arguments (e.g. "if we use Haskell instead of Java, we won't be able to hire anyone!!!!111") and phony existential risk.
The OP, at least, can convincingly tell a personal story of success that he owes to functional programming. Is it scientific proof of the “superiority” of FP? No, of course not. It’s still much more useful than a lot of what pops up in the discourse around PL choice.