In some sense, of course if you put more people on a project, you’ll get a solution with more code. I’m tempted to wax eloquent about why, but I think you-all will either get that or not, and I can’t really add much to it.
Still, this is a much more direct comparison than is usually undertaken. I’m not convinced that one needs the extraordinary circumstance of the same project being done twice; I think it would be reasonable, if one were looking at enough projects, to figure out a metric of functionality overlap between them, and aggregate the brevity metrics on that basis. I’m wondering why no smart programmer-data-scientists have tried this; it seems as though it shouldn’t be difficult to find libraries doing corresponding tasks in, say, CPAN, Hackage, and npm.
Also there’s the Shootout, of course, but even the very clever benchmarks there are still toy problems and subject to the complaint that they illustrate small-scale pros and cons of each language, not large-scale ones.
“I see a lot of logging statements in code-bases when the developers don’t understand what their code is doing”
Or, the code handles real data and sometimes you gotta know what got borked where because the real data can be hairy.
Yes and no. In Java, for example, it is very hard to tell, just from reading the code, what code you are actually calling.
There is no doubt that F# is less verbose than C#, but other factors affect software projects to a great extent. For example: how good the engineering practices are (possibly, in the example shown, the C# codebase was not written in the best possible way), and how prepared the developers are to solve the problem (if I understood correctly, the F# codebase was written with the previous attempt as a reference, which makes it easier to avoid certain issues discovered in the first attempt). Also, the bulk of the cost of software goes into maintenance, so how long it takes to build the first version is relevant but not the whole story (knowing that the C# project had eight concurrent developers reminds me of situations where the original developers lose control of the codebase as the project grows), etc.
In short, yes the language is important, but there are many other factors that can turn a project written in a good language into a glorious wreck, or a project written in C into a huge success.
What’s with the C tag on this piece? It deals with the difference between C# and F# — I don’t think the C tag is relevant.
Pretty good piece, though I can’t help thinking that the F# team had some advantages in that they had access to the C# version: the two versions were not developed in parallel. Also, we know nothing about the experience level of the people developing the C# version.
Because it used C# and my brain always thinks of the C tag before the dotnet tag. I goof this regularly, sorry.
Fixed the tag.
Using kloc to measure language productivity feels like cheating.
Might kloc / total-features be a better metric?
The article says that “all of the contracts were fully implemented”, implying that the F# version had more features than the C# solution, in which “not all of the contracts had been implemented fully”. So the metric kloc / total-features would just make F# look even better.
kloc / total-features
I missed that bit! That means the language choice alone can give you at least a 10x power-up, with more type safety to boot.
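As a rough sketch of how the kloc / total-features metric behaves (the numbers below are invented for illustration, not taken from the article): if one codebase is both smaller and implements more of the contracts, normalizing lines of code by feature count widens the gap rather than closing it.

```python
def kloc_per_feature(kloc: float, features: int) -> float:
    """Thousands of lines of code spent per implemented feature."""
    return kloc / features

# Hypothetical figures only, chosen to mirror the shape of the comparison:
# the larger codebase also implemented fewer of the contracts.
csharp = kloc_per_feature(kloc=30.0, features=10)  # not all contracts implemented
fsharp = kloc_per_feature(kloc=3.0, features=12)   # all contracts implemented

print(csharp)  # 3.0 kloc per feature
print(fsharp)  # 0.25 kloc per feature
```

So under this metric the smaller, more complete codebase looks even better than under raw kloc, which is the point made above.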