The Haskell section should’ve been: I have no idea, so I don’t have an opinion. Imagine how much better the world would be if people said this more often.
I found the author’s word emphasis stylisation quite interesting, and if you inspect any of those elements, you’ll see that the chosen classname for CSS purposes is .bullshit.
Yeah, the section feels like “here are some languages that I have no idea about, but I’m embarrassed to accept that, so I’ll just quickly google and pick the first few hits that bash them and use it as justification for not knowing them any further”. A textbook example of the anti-intellectual epidemic of our times.
You definitely can. One way in Haskell is by using the IO type. It’s just functions. I’m totally interested in seeing results of my Haskell programs. They’re running right now in production.
And I am happy for you. But your statement was that the world can be described 100% in pure functions, not only your code and it clearly cannot be. Punting “unpureness” somewhere else (compiler, libs, wherever) does not fundamentally remove it.
As I said elsewhere, if your code describes the world, and the code is pure, then the description of the world is also pure. Anything else that is not code does not have to be pure for the code to remain pure.
Similarly, it doesn’t make sense to say that it’s “punting” impurity when you compile pure code, because the output of a compiler is in a different domain. It would be just as valid to say that Haskell code isn’t 100% typed because it loses its type signatures when compiled to machine code.
This may sound like unimportant theory, but 100% pure code allows you to generate arbitrary inputs for use in property-based testing.
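(To make that concrete, a minimal sketch with QuickCheck; the pure function and property names are hypothetical examples, not anything from the article or this thread.)

    import Test.QuickCheck

    -- A pure function under test: same inputs, same output, no effects.
    applyDiscount :: Int -> Int -> Int
    applyDiscount percent price = price - (price * percent) `div` 100

    -- Because the function is pure, QuickCheck can hammer it with arbitrary
    -- generated inputs, and every failure it finds is trivially reproducible.
    prop_neverIncreasesPrice :: Int -> Int -> Property
    prop_neverIncreasesPrice percent price =
      percent >= 0 && price >= 0 ==> applyDiscount percent price <= price

    main :: IO ()
    main = quickCheck prop_neverIncreasesPrice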
No, your code takes in the world as an argument and returns a new world, the problem is you can’t ever rerun it with exactly the same world. It may be pure in a theoretical sense, but you certainly have no way to rerun the function with the same world to verify that.
Your code definitely doesn’t ‘describe the world’ either.
I think there is some clear and obvious hyperbole in what @puffnfresh wrote, and that he is very well aware of what purity means and how much of a system can typically be described in such a fashion.
IO is 100% pure. The compiler transforms it into a sequence of instructions to be interpreted by the computer, impurely, but that’s just an implementation detail.
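(A minimal sketch of what that claim means in practice, with hypothetical names: an IO value is just a description of an effect that pure code can build and combine; nothing runs until the runtime interprets main.)

    -- Defining this value performs no I/O; it only describes an effect.
    greet :: String -> IO ()
    greet name = putStrLn ("hello, " ++ name)

    -- Pure code can manipulate these descriptions like ordinary data.
    greetings :: [IO ()]
    greetings = map greet ["world", "lobsters"]

    -- Only when the runtime interprets main do the effects actually happen.
    main :: IO ()
    main = sequence_ greetings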
This discussion chain reminded me of the excellent James Mickens’ article The Night Watch:
You might ask, “Why would someone write code in a grotesque language that exposes raw memory addresses? Why not use a modern language with garbage collection and functional programming and free massages after lunch?” Here’s the answer: Pointers are real. They’re what the hardware understands. Somebody has to deal with them. You can’t just place a LISP book on top of an x86 chip and hope that the hardware learns about lambda calculus by osmosis.
Describing the world with pure functions is all you can do in Haskell programs (ignoring unsafePerformIO, obviously), and Haskell seems to work just fine. Doesn’t matter what happens underneath, because that’s not part of the program you write.
I’m sorry, I don’t find this argument practical, and apparently neither do the authors of RWH.
However, this is a book about real world programming, and in the real world, code runs on stock hardware with limited resources. Our programs will have time and space requirements that we may need to enforce. As such, we need a good knowledge of how our program data is represented, the precise consequences of using lazy or strict evaluation strategies, and techniques for analyzing and controlling space and time behavior.
To qualify everything I’ve said so far, I write Haskell full-time, run businesses on it, and employ others to write it too. I’m not ignorant on the topic.
I appreciate your position and actually agree, but I believe purity and practicality are orthogonal concerns. I’ve still got that link bookmarked from when I wrote Haskell; note that the final optimised code is still completely pure.
(FWIW “that’s just an implementation detail” was a tongue-in-cheek aside that evidently fell flat.)
I’m getting more and more convinced that in general programmers should not try satire or rants.
This is as bland as it gets; every argument is one of the cookie-cutter arguments made for and against all these languages since their inception. There’s not a bit of creativity and not a bit of insight to be had.
You could delete it and nothing of value is lost.
I’m dead sure this article would not be spread if it weren’t published under a big name.
I’m sorry you didn’t like it. I didn’t write it to “spread” it, though, I just wrote it because I wanted to. Am I not allowed such things because my blog posts frontpage on Lobsters too often? I didn’t submit this here, nor anywhere else.
And for the record, I do think that there is value in all programmers being able to briefly summarize the advantages and drawbacks of a variety of technologies like this. This kind of thinking can help steer your judgement when starting new projects, and extends past languages and into operating systems, architectures, frameworks, etc, and is an essential kind of thinking for project design.
I do think that there is value in all programmers being able to briefly summarize the advantages and drawbacks of a variety of technologies like this.
That’s great when you’re writing from a place of experience, as you are for most of your post. The problem is the small bits where you aren’t, which you included presumably for completeness’ sake.
dead sure this article would not be spread if it weren’t published under a big name.
Not true, I wrote a similar one and it was upvoted too. Hating on C++ is just lots of fun for everyone. C++ is a big boy and can handle it.
I’m getting more and more convinced that in general programmers should not try satire or rants.
Hey, let’s not be ‘jobist’, SirCmpwn may be a dirty programmer, but he is also an entrepreneur and community manager too. Let’s not let those jobs have fun either.
Have you found the “tiny ecosystem” limiting? Have you hit any walls, absences of libraries which caused you to say “oh well, I guess have to write this in $OTHER_LANG now”?
Not too bad, but mainly because I’ve only been using it for small scripts. It would probably be a big problem if you decided to write a web application or something like that and aren’t willing to do everything from scratch.
I do enjoy a good rant but this post comes off as extremely arrogant and condescending, which are two things I believe tech and discussions about tech would be much better without. Yes, you can and should criticize language/ecosystem features that are badly designed or not working the way they’re supposed to, but I believe one can (and should strive to) do it without calling people who use said languages “jerks” and “bad programmers” who should use a “real language”. The ad hominem brings nothing to the discussion.
@SirCmpwn
Not gonna debate the other points, but how is Haskell’s package management awful? cabal 3.0 just came out, and it defaults to a nix-like package store which both caches builds and avoids conflicts. The version constraint solving approach is also something I really like.
Nim: neat ideas but a rather bad compiler/implementation
D: don’t know much about it but what I do know seems reasonable enough. Not sure it’s different enough from anything else to distinguish itself meaningfully
This is what I sent to the creator of Nim when he asked the same question:
Hey Dominik, thanks for your patience. Let me open by reminding you that
the context for this discussion is identifying a language that’s
suitable as a replacement for C for sqlite, which is a very high
standard to meet. I’m sure nim is great for the niches it serves. I also
have to admit that nim has gotten much better since I last gave it a
critical look. The last time I looked at it, debugging with gdb was not
feasible.
The first problem with nim for debugging, especially on embedded
systems, is the (huge) extra layer of indirection. Nim is much further
removed from the actual behavior of your computer than C is. Remember
that, especially on embedded systems, debugging is not happening at the
C level but at the assembly level. The distance between C and assembly
is at times already great enough to be difficult to deal with, but add
to that the distance between nim and C and it becomes a serious issue. I
took the example on your home page and compiled it - the generated C
source is 437 lines long, full of impossible to read/remember symbol
names, contains large amounts of glue code, crazy stack frame hacks,
unreadable loop constructs, and several calls into the nim stdlib. To
make a point, I chased down one of these stdlib entry points -
copyString - and found that it was implemented in Nim. Should I ever
need to debug it, too, I’ll face all of the same issues I faced
understanding the generated code from my own example. Nim is also able
to emit C++ or JavaScript, which (I’m making some assumptions here)
tells us that its internals likely add yet another layer of indirection
with some intermediate language. Given that Nim is a high level language
targeting other high level languages, we can also assume that it uses a
high level intermediate language, with all of the problems that can come
with that.
Also remember that on embedded systems we’re often dealing with obscure
architectures. Compilers are not necessarily going to be as
sophisticated as we’re used to on x86, ARM, etc. We may have to do a lot
of finagling to understand the performance of our generated code and a
lot of tweaking to more clearly express our intent to the compiler, a
process which would be virtually impossible to tune with Nim. We may
often run into compiler bugs and find ourselves diving into the
generated assembly to find out more - and now there are two compilers
involved. Addressing some edge case is going to be very difficult when
you have to do it through Nim, and who knows if it’ll still work with
arbitrary combinations of Nim versions, C compilers, and architectures.
Nim also changes pretty frequently, as demonstrated by my surprise that
so much has changed (for the better, I admit) since I last dug into it
deeply. The language is still very much under development, and any of
the low level work I put into tuning and understanding my code today
is very likely to become outdated within only a few years. C compilers
on the other hand move very slowly and very conservatively, and given
that the world sits on their shoulders we can expect them to work with
care and attention to detail (at least in a perfect world - looking at
you, gcc!). C compilers don’t compete with the same things Nim competes
with. Nim has to compete with Rust, Go, Crystal - even Python and Ruby -
and often liberally adds features and changes to keep up. C doesn’t have
to prove itself to anyone, we’ve already built our entire empire on top
of it.
Nim also only has one implementation. There’s no competition keeping it
honest and no standard to which it is held. It’s hard to characterize
some behavior as a bug in the compiler or by design - and it’s hard to
predict how the upstream will judge such behavior, and whether or not I
can rely on some behavior to be consistent if they decide it was a bug
after all.
sqlite is 17 years old and is one of the most reliable pieces of
software in the world. It runs on almost every computer in the entire
world - every smart phone, every desktop PC, every laptop, every tablet,
every router, most cars, digital cameras, smart fridges and toasters,
home security systems, billboards on the street - within the square
kilometer you’re sitting in right now there are almost certainly
thousands of sqlite installations. There are more sqlite installations
on this planet than human beings. The standard to which it is held is
extremely high.
Anyway, despite all of this, I rather like Nim. I wish it targeted LLVM
instead of generating awful C code, but for high level users this isn’t
a huge concern. It has improved every time I look at it,
and I’m looking forward to using it for the next project to which it’s
suited. Keep up the great work!
Does Nim have good DWARF debugging information? A lot of the friction you had with debugging assembly could be mitigated if the debugger could display the original source code and variables or could step by line.
On the other hand, C isn’t a very bad assembly language: it’s widely understood, with many different compilers available for almost every single machine architecture on Earth. And C interoperability is required for every serious programming language anyway. (IIRC there was an article about that a few weeks ago on lobste.rs)
In my opinion, writing a compiler that uses the LLVM backend is not easier. You have to learn their IR language. You have to write boilerplate for C compatibility. You have to add two massive dependencies to your project, LLVM itself and a C++ compiler… so it has serious drawbacks and I totally understand the C choice for Nim and others.
(I’m working on a (much simpler) compiler in my spare time and I chose to generate C as well, especially because I really wish to keep it as small and as simple as possible)
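(For what it’s worth, here is a toy sketch of why a C backend keeps a small compiler simple; this is a hypothetical example in Haskell, not Nim’s code generator or anyone’s actual compiler.)

    -- A tiny expression language and a naive C emitter for it.
    data Expr = Lit Int | Add Expr Expr | Mul Expr Expr

    emitC :: Expr -> String
    emitC (Lit n)   = show n
    emitC (Add a b) = "(" ++ emitC a ++ " + " ++ emitC b ++ ")"
    emitC (Mul a b) = "(" ++ emitC a ++ " * " ++ emitC b ++ ")"

    -- Wrap the expression in main() and let any existing C compiler handle
    -- optimisation, register allocation and portability.
    compile :: Expr -> String
    compile e =
      "#include <stdio.h>\n" ++
      "int main(void) { printf(\"%d\\n\", " ++ emitC e ++ "); return 0; }\n"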
D: don’t know much about it but what I do know seems reasonable enough. Not sure it’s different enough from anything else to distinguish itself meaningfully
It has two compilers. One compiles fast, like Go, for rapid iteration. The other sends it through LLVM for the fastest application performance. That said, a cleaner C++ that usually compiles fast seems worthwhile in itself.
Putting Haskell and Elixir/Erlang in the same basket is at least a little bit strange, especially as BEAM languages aren’t that “pure” (due to messaging). Also, package management with Hex is a blessing.
Sounds pretty good. Regarding Haskell not being ready for production-grade compilers: The Elm compiler is written in Haskell. It was always super reliable and a joy to work with, since Elm 0.19 it’s also pretty fast. So I would think that Haskell can be used even for production-grade compilers.
This seems to rely on the premise that Elm and its compiler are “production-grade” … I’m not sure that can be established for any language without a 1.0.0 release.
Well, Elm is used in production at several companies, and that for me is a better indicator of “production-readiness” than a version number. It depends very much on your company and your project whether you can use pre-1.0 software. Pre-1.0 does not mean that a piece of software may never under any circumstances be used in production. It’s a tradeoff.
I’ve seen several dozen companies using the Flask development server for production websites despite it specifically saying in many, many places that it’s not a production-grade server and should never be deployed as such … a company’s willingness to shoot its own foot clean off with software the creators of which aren’t yet confident enough to label as a public release doth not a production-grade piece of software make.
So sure, absolutely, if it works for you and your situation of course you can happily use it … but that doesn’t mean that use automatically establishes it as “production-grade”, hence it’s not a good counter argument to the notion that Haskell’s only suitable for “lab-grade” compilers.
Clojure is a different beast. It’s kind of like that pyramid that descends on planets from Stargate the movie. An alien syntax to the Java or JavaScript people. With npm in flames and the plebes revolting, wanting to run ads, Clojure may need a backup planet soon. I mean runtime. Maybe Go. Maybe Swift.
Yeah - I know several amazing programmers working on wp-core. They’re willing to use PHP in return for a chance to work on the tech that powers nearly 1/4 of all websites.
There are some perhaps, but they are the exception.
Like the author, I have stopped paying attention to JS developers and their latest frameworks; they show a lack of knowledge of CS and reinvent small parts of systems that have existed elsewhere for a long time and present them as something new.
No doubt there are some exceptions, but it’s too hard to find them as the bulk of JS developers can’t tell the difference.
Just to be clear, I’ve used both because I’ve had to, but if I wanted to build something new there are better languages. This isn’t personal, merely an observation of JS developers in general; again, there are exceptions, but the signal to noise ratio is too low.
It would be really good if Lobste.rs would support hiding a specific topic based on the per-domain setting. Then I’d add http*.drewdevault.com/* on the spot.
I operate on what I call an SDD basis: Suffering-Driven Development. I seek out and stick with whatever (languages, tools, frameworks, libraries, techniques, patterns, even style) lets me suffer the least. Minimum frustration. Minimum irritation and annoyance. Some examples of suffering:
Something (class, method, function, script, command, whatever) is advertised/documented to do X; I try to do X in an obvious way; X doesn’t happen; I spend hours trying to find out why; I discover it’s due to some gotcha that was not obvious
Finding out hours after implementation that a given bug or misbehaviour was caused by me not setting up the dominoes and house of cards exactly right when using async/await or Promises in JS (and accomplishing that with precious little help from error output (or, more usually, lack thereof))
boilerplate
anything done frequently, but which is slow. Examples: [too] large test suite; finding stuff in a poorly-organized large codebase; slow devops infrastructure; long startup/warmup times (when I have to restart often)
sysadmin/devops stuff that’s too complicated or is not straightforward e.g. Docker (I gave it an honest try, but I’m sorry, if I can set up and maintain a full Linux VPS with all the usual elements in a full app stack, but I cannot use a given offering to accomplish the same, that offering is too complicated, IMO) (asterisk: I’m a coder, not devops)
syntax (etc.) about which I need to revisit the documentation an inordinate number of times because the “feature” is so cryptic that it can’t be remembered and then understood on sight when encountered again e.g. some parts of Angular
using libraries or frameworks which are designed to be better than some established tech, but which (IMO) are really just better for certain types of people, who think in a certain way, or have certain personal preferences, but are not actually objectively better for all people, universally. Examples: Haml vs. HTML; Sass vs. CSS; query libraries vs. SQL
[As a web app developer,] while I think I’m open-minded enough to be ready to try anything new that purports to be better, my years of software development suffering have led me to stay with Ruby (important: not Rails) for the backend, and Vue for the frontend.
The author would benefit from more familiarity with Haskell. He seems a productive and prolific coder, so I’d love to see what he could achieve if he opened his mind a bit more.
I’m surprised the “cons” is package management. I’ve found cabal to be no worse than pip, npm, etc., i.e., nothing to write home about. Stack is pretty interesting though.
With zero based indices it’s easy to slice an array in two at index i with
first = array[0, i]
second = array[i, array.len]
With one-based indices you need to resort to inclusive ranges to pick exactly i elements in first array, which then adds a duplicate to second. So you end up with ugly i + 1 indexing everywhere.
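(A small sketch of the same argument in Haskell, with hypothetical helper names: the 0-based split needs no adjustment, while the 1-based one grows the extra + 1.)

    -- 0-based: "split at i" needs no adjustment; first covers indices 0..i-1.
    splitZero :: Int -> [a] -> ([a], [a])
    splitZero i xs = (take i xs, drop i xs)

    -- 1-based with inclusive ranges: first covers 1..i, so the second half
    -- has to start at i + 1 -- the "+ 1" the comment above complains about.
    splitOne :: Int -> [a] -> ([a], [a])
    splitOne i xs = ( [ x | (k, x) <- zip [1 ..] xs, k <= i ]
                    , [ x | (k, x) <- zip [1 ..] xs, k >= i + 1 ] )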
There is no “outside of pointer arithmetic”. The maintenance of your entire system, including the language runtime, is your responsibility, and from thence does complexity flow. Thus I paste here all the pointer-based arguments for 0-indexing.
With unsigned integers and 0-indexed arrays, there is only one contiguous set of invalid indices (>= len(array)), but with 1-indexed arrays zero is also invalid (see the sketch after this list).
Lua is embedded, often in other programming languages, and interop with them is made more difficult by this design decision.
Least importantly, don’t rock the boat. We decided on 0-based indices long ago, and it’s not even funny anymore to argue for 1-based indexing.
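(The sketch referenced above, as a minimal Haskell illustration with hypothetical names: with unsigned indices, a 0-based bounds check is a single comparison, while a 1-based one has to exclude zero as well.)

    -- 0-based: the invalid indices form one contiguous range, i >= len.
    validZero :: Word -> Word -> Bool
    validZero len i = i < len

    -- 1-based: zero is also invalid, so two conditions are needed.
    validOne :: Word -> Word -> Bool
    validOne len i = i >= 1 && i <= len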
Seriously though, I’ve been programming in Lua for 10 years now, and the 1-based indexing has never been a problem for me, and I’m saying that as someone who has programmed in C since 1991, and assembly from 1985. There are also other languages that use 1-based indexing (Fortran comes to mind, and I think the APL-like languages are also 1-indexed).
There are some oddities in Lua for sure, but 1-based indexing is not one of them (in my opinion).
I think the APL-like languages are also 1-indexed.
I can only speak about APL itself, but actually it’s cooler than being 1 indexed. You can set Quad-IO whenever you like to either zero or one, depending on how you want to index your vectors right then.
Hello matrix math without confusing the math people.
For garbage-collected languages, and languages with substructural type systems, manual pointer arithmetic can be abstracted away.
For linked lists, 1-based indexing perfectly corresponds to the structure of the list (for i > 0, a[i] would be the ith element of the list, and a[0] corresponds to the terminator or empty list), and this works very well with pattern-matching clauses for total function definitions (How many bugs are created when programmers fail to consider the case of empty lists?)
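(A minimal Haskell sketch of that point, with a hypothetical nth helper: the empty-list clause is exactly the “a[0]” terminator case, and writing it out makes the function total.)

    -- 1-based lookup over a linked list. The empty-list clause is the
    -- terminator case, so no input is left unhandled.
    nth :: Int -> [a] -> Maybe a
    nth _ []       = Nothing
    nth 1 (x : _)  = Just x
    nth i (_ : xs) = nth (i - 1) xs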
Fair point
“Don’t rock the boat” means we can never shed outmoded practices. Also, ideas go in and out of fashion; quite often you have to go back to the roots, question decisions made long ago, and ask which assumptions no longer hold true, in order to go forward.
For many conventions we’re stuck with because they’ve become standardized, we’re paying the price in minor ways. In physics, we have too many negative signs in equations because we decided electrons were the ones with negative charge. We also have a 2 accompanying pi everywhere because the ancient Greeks figured it was easier to measure diameter than it was to measure radius. Other conventions had to change in order to go forward. Business arithmetic was pretty difficult in Roman numerals in Florence, until Fibonacci brought Hindu-Arabic numerals, negative numbers, and double-entry bookkeeping.
The Haskell section should’ve been: I have no idea, so I don’t have an opinion. Imagine how much better the world would be if people said this more often.
I found the author’s word emphasis stylisation quite interesting, and if you inspect any of those elements, you’ll see that the chosen classname for CSS purposes is .bullshit. Perhaps this emphasis is a little underused here.
That whole section irritated me. Especially bundling in Elixir and “most Lisps”.
Yeah, the section feels like “here are some languages that I have no idea about, but I’m embarrassed to accept that, so I’ll just quickly google and pick the first few hits that bash them and use it as justification for not knowing them any further”. A textbook example of the anti-intellectual epidemic of our times.
Yeah this was silly:
100% you can do this, over and over and over.
What? Describe the world with pure functions? No, you can’t unless you are completely uninterested in seeing results.
You definitely can. One way in Haskell is by using the IO type. It’s just functions. I’m totally interested in seeing results of my Haskell programs. They’re running right now in production.
Haskell’s IO operations are implemented with non-pure operations; your system is not 100 percent pure.
My IO operations are functions, 100% pure.
And I am happy for you. But your statement was that the world can be described 100% in pure functions, not only your code and it clearly cannot be. Punting “unpureness” somewhere else (compiler, libs, wherever) does not fundamentally remove it.
As I said elsewhere, if your code describes the world, and the code is pure, then the description of the world is also pure. Anything else that is not code does not have to be pure for the code to remain pure.
Similarly, it doesn’t make sense to say that it’s “punting” impurity when you compile pure code, because the output of a compiler is in a different domain. It would be just as valid to say that Haskell code isn’t 100% typed because it loses its type signatures when compiled to machine code.
This may sound like unimportant theory, but 100% pure code allows you to generate arbitrary inputs for use in property-based testing.
My code is 100% a function describing the world. It is compiled, sure. That’s not punting.
No, your code takes in the world as an argument and returns a new world, the problem is you can’t ever rerun it with exactly the same world. It may be pure in a theoretical sense, but you certainly have no way to rerun the function with the same world to verify that.
Your code definitely doesn’t ‘describe the world’ either.
And I get huge amounts of practical benefits from this fact. Super good.
I think there is some clear and obvious hyperbole in what @puffnfresh wrote, and that he is very well aware of what purity means and how much of a system can typically be described in such a fashion.
No hyperbole at all!! All systems can be 100% pure.
Now you’re being silly.
My computer gets hot when I run my Haskell program. How do I stop that side effect?
IO is 100% pure. The compiler transforms it into a sequence of instructions to be interpreted by the computer, impurely, but that’s just an implementation detail.
This discussion chain reminded me of the excellent James Mickens’ article The Night Watch.
Yeah. I know. But I don’t think being hand-wavy about this “detail” is useful for the purposes of this discussion.
Describing the world with pure functions is all you can do in Haskell programs (ignoring unsafePerformIO, obviously), and Haskell seems to work just fine. Doesn’t matter what happens underneath, because that’s not part of the program you write.
I’m sorry, I don’t find this argument practical, and apparently neither do the authors of RWH.
http://book.realworldhaskell.org/read/profiling-and-optimization.html
To qualify everything I’ve said so far, I write Haskell full-time, run businesses on it, and employ others to write it too. I’m not ignorant on the topic.
I appreciate your position and actually agree, but I believe purity and practicality are orthogonal concerns. I’ve still got that link bookmarked from when I wrote Haskell; note that the final optimised code is still completely pure.
(FWIW “that’s just an implementation detail” was a tongue-in-cheek aside that evidently fell flat.)
It’s literally something I do all day every day. Hard to be silly about.
No they can’t if they do anything at all. Does your software send packets on a wire?
Yes, via pure functions.
I don’t understand your reasoning.
All C programs are 100 percent pure by your logic: if you reset the world to the same state you will get the same result.
C methods have side-effects; they’re not always just functions.
See https://lobste.rs/s/xt0p2l/how_i_decide_between_many_programming#c_kxjghm.
Sorry for the grumpiness, but:
I’m getting more and more convinced that in general programmers should not try satire or rants.
This is as bland as it gets; every argument is one of the cookie-cutter arguments made for and against all these languages since their inception. There’s not a bit of creativity and not a bit of insight to be had.
You could delete it and nothing of value is lost.
I’m dead sure this article would not be spread if it weren’t published under a big name.
Sometimes the resulting discussion is interesting. I learned my understanding of 100% pure differs from that of Haskell.
I’m sorry you didn’t like it. I didn’t write it to “spread” it, though, I just wrote it because I wanted to. Am I not allowed such things because my blog posts frontpage on Lobsters too often? I didn’t submit this here, nor anywhere else.
And for the record, I do think that there is value in all programmers being able to briefly summarize the advantages and drawbacks of a variety of technologies like this. This kind of thinking can help steer your judgement when starting new projects, and extends past languages and into operating systems, architectures, frameworks, etc, and is an essential kind of thinking for project design.
The curse of popularity. No fun allowed.
That’s great when you’re writing from a place of experience, as you are for most of your post. The problem is the small bits where you aren’t, which you included presumably for completeness’ sake.
Haters gonna hate. I guess it would help if you just put a disclaimer that it is satire and opinion.
Not true, I wrote a similar one and it was upvoted too. Hating on C++ is just lots of fun for everyone. C++ is a big boy and can handle it.
Hey, let’s not be ‘jobist’, SirCmpwn may be a dirty programmer, but he is also an entrepreneur and community manager too. Let’s not let those jobs have fun either.
My go-to for a scripting language with a simple but quality implementation is now:
https://janet-lang.org/
Pros:
Cons:
Have you found the “tiny ecosystem” limiting? Have you hit any walls, absences of libraries which caused you to say “oh well, I guess have to write this in $OTHER_LANG now”?
Not too bad, but mainly because I’ve only been using it for small scripts. It would probably be a big problem if you decided to write a web application or something like that and aren’t willing to do everything from scratch.
I do enjoy a good rant but this post comes off as extremely arrogant and condescending, which are two things I believe tech and discussions about tech would be much better without. Yes, you can and should criticize language/ecosystem features that are badly designed or not working the way they’re supposed to, but I believe one can (and should strive to) do it without calling people who use said languages “jerks” and “bad programmers” who should use a “real language”. The ad hominem brings nothing to the discussion.
@SirCmpwn Not gonna debate the other points, but how is Haskell’s package management awful? cabal 3.0 just came out, and it defaults to a nix-like package store which both caches builds and avoids conflicts. The version constraint solving approach is also something I really like.
On Arch at least the packaging for Haskell is pretty awful, but AFAIK that’s primarily due to some packaging decisions on Arch’s end.
Laughed when I got to C++. Unfortunately it’s still very popular in the industry, I hope Rust replaces it in the future.
I agree with all of this, just wish he’d included Nim, D and the other “new” ones.
Nim: neat ideas but a rather bad compiler/implementation
D: don’t know much about it but what I do know seems reasonable enough. Not sure it’s different enough from anything else to distinguish itself meaningfully
Interesting. Would you mind elaborating on what you find bad about the current implementation?
This is what I sent to the creator of Nim when he asked the same question:
Dominik isn’t the creator of Nim, but a team member, IIRC.
Ah, my mistake. Still, same arguments apply when given to any audience.
So your problem is mostly “I can’t use Nim with embedded software”?
That’s the specific context in which this answer was written, but these problems are applicable in a broader context, too.
Does Nim have good DWARF debugging information? A lot of the friction you had with debugging assembly could be mitigated if the debugger could display the original source code and variables or could step by line.
That’s an interesting point of view, thank you.
On the other hand, C isn’t a very bad assembly language: it’s widely understood, with many different compilers available for almost every single machine architecture on Earth. And C interoperability is required for every serious programming language anyway. (IIRC there was an article about that a few weeks ago on lobste.rs)
In my opinion, writing a compiler that uses the LLVM backend is not easier. You have to learn their IR language. You have to write boilerplate for C compatibility. You have to add two massive dependencies to your project, LLVM itself and a C++ compiler… so it has serious drawbacks and I totally understand the C choice for Nim and others.
(I’m working on a (much simpler) compiler in my spare time and I chose to generate C as well, especially because I really wish to keep it as small and as simple as possible)
Any opinions on Julia?
Its compiler once caused me days of headaches when doing some packaging work for Alpine. I don’t know anything else about it.
It has two compilers. One compiles fast, like Go, for rapid iteration. The other sends it through LLVM for the fastest application performance. That said, a cleaner C++ that usually compiles fast seems worthwhile in itself.
There is also a GCC implementation.
Putting Haskell and Elixir/Erlang in the same basket is at least a little bit strange, especially as BEAM languages aren’t that “pure” (due to messaging). Also, package management with Hex is a blessing.
Sounds pretty good. Regarding Haskell not being ready for production-grade compilers: The Elm compiler is written in Haskell. It was always super reliable and a joy to work with, since Elm 0.19 it’s also pretty fast. So I would think that Haskell can be used even for production-grade compilers.
This seems to rely on the premise that Elm and its compiler are “production-grade” … I’m not sure that can be established for any language without a 1.0.0 release.
Well, Elm is used in production at several companies, and that for me is a better indicator of “production-readiness” than a version number. It depends very much on your company and your project whether you can use pre-1.0 software. Pre-1.0 does not mean that a piece of software may never under any circumstances be used in production. It’s a tradeoff.
I’ve seen several dozen companies using the Flask development server for production websites despite it specifically saying in many, many places that it’s not a production-grade server and should never be deployed as such … a company’s willingness to shoot its own foot clean off with software the creators of which aren’t yet confident enough to label as a public release doth not a production-grade piece of software make.
So sure, absolutely, if it works for you and your situation of course you can happily use it … but that doesn’t mean that use automatically establishes it as “production-grade”, hence it’s not a good counter argument to the notion that Haskell’s only suitable for “lab-grade” compilers.
My team recently finished removing all Elm code from production. Based on our experience, it’s not production ready as of 0.18 or 0.19.
I wrote a little more about that here: https://lobste.rs/s/brvwey/elm_why_it_s_not_quite_ready_yet#c_jivkmc
I for one do :)
I was pleased that Clojure escapes the downsides of the Haskell family as described in this post.
I don’t know how Rust got “only one meaningful implementation” and Python didn’t.
Overall I figure this post is correct. Everything is terrible but some things are better than others.
Quote from the Python list of cons:
The different phrasing lets it “get away with it”. You are blaming it on programmers not choosing, instead of on the implementors.
Which I guess is fair.
Clojure is a different beast. It’s kind of like that pyramid that descends on planets from Stargate the movie. An alien syntax to the Java or JavaScript people. With npm in flames and the plebes revolting, wanting to run ads, Clojure may need a backup planet soon. I mean runtime. Maybe Go. Maybe Swift.
Java and XML everywhere? When was this written? 2006?
It’s probably safe to assume it was the last time the author used Java. This essay is just tired clichés.
It’s hard to see your technical arguments when you use personal comments like this.
Yeah - I know several amazing programmers working on wp-core. They’re willing to use PHP in return for a chance to work on the tech that powers nearly 1/4 of all websites.
There are some perhaps, but they are the exception.
Like the author, I have stopped paying attention to JS developers and their latest frameworks; they show a lack of knowledge of CS and reinvent small parts of systems that have existed elsewhere for a long time and present them as something new.
No doubt there are some exceptions, but it’s too hard to find them as the bulk of JS developers can’t tell the difference.
Just to be clear, I’ve used both because I’ve had to, but if I wanted to build something new there are better languages. This isn’t personal, merely an observation of JS developers in general; again, there are exceptions, but the signal to noise ratio is too low.
It would be really good if Lobste.rs would support hiding a specific topic based on the per-domain setting. Then I’d add http*.drewdevault.com/* on the spot.
I operate on what I call an SDD basis: Suffering-Driven Development. I seek out and stick with whatever (languages, tools, frameworks, libraries, techniques, patterns, even style) lets me suffer the least. Minimum frustration. Minimum irritation and annoyance. Some examples of suffering:
Finding out hours after implementation that a given bug or misbehaviour was caused by me not setting up the dominoes and house of cards exactly right when using async/await or Promises in JS (and accomplishing that with precious little help from error output (or, more usually, lack thereof))
[As a web app developer,] while I think I’m open-minded enough to be ready to try anything new that purports to be better, my years of software development suffering have led me to stay with Ruby (important: not Rails) for the backend, and Vue for the frontend.
The author would benefit from more familiarity with Haskell. He seems a productive and prolific coder, so I’d love to see what he could achieve if he opened his mind a bit more.
I’m surprised the “cons” is package management. I’ve found cabal to be no worse than pip, npm, etc., i.e., nothing to write home about. Stack is pretty interesting though.
@SirCmpwn,
I’m very curious, what is the platform you mentioned C has no support on?
I was referring to Windows, which has C support but no sane API/environment to run it in.
You mean it doesn’t support POSIX?
Not if you want luxuries such as being able to open a file with non-ASCII characters, or include any system header that is newer than K&R C.
And then there’s MSVC, which appears to be (un)maintained out of spite (Microsoft thinks C is obsolete, and works to make it so).
Props for including the content warning, “don’t read further if you don’t want your sacred cow gored” :)
Is PHP not at least entertaining? :-(
Kind of unfair on Lua. Outside of pointer arithmetic, what’s “objectively bad” about 1-based indexing? Seems almost entirely a subjective thing.
With zero based indices it’s easy to slice an array in two at index i with
first = array[0, i]
second = array[i, array.len]
With one-based indices you need to resort to inclusive ranges to pick exactly i elements in first array, which then adds a duplicate to second. So you end up with ugly i + 1 indexing everywhere.
On the other hand, with 0-based indexing the element at position i is the (i+1)-th element. For example, the element at position 5 is the 6th element.
With unsigned integers and 0-indexed arrays, there is only one contiguous set of invalid indices (>= len(array)), but with 1-indexed arrays zero is also invalid.
Your list starts with 1.
Seriously though, I’ve been programming in Lua for 10 years now, and the 1-based indexing has never been a problem for me, and I’m saying that as someone who has programmed in C since 1991, and assembly from 1985. There are also other languages that use 1-based indexing (Fortran comes to mind, and I think the APL-like languages are also 1-indexed).
There are some oddities in Lua for sure, but 1-based indexing is not one of them (in my opinion).
I can only speak about APL itself, but actually it’s cooler than being 1 indexed. You can set Quad-IO whenever you like to either zero or one, depending on how you want to index your vectors right then. Hello matrix math without confusing the math people.
For many conventions we’re stuck with because they’ve become standardized, we’re paying the price in minor ways. In physics, we have too many negative signs in equations because we decided electrons were the ones with negative charge. We also have a 2 accompanying pi everywhere because the ancient Greeks figured it was easier to measure diameter than it was to measure radius. Other conventions had to change in order to go forward. Business arithmetic was pretty difficult in Roman numerals in Florence, until Fibonacci brought Hindu-Arabic numerals, negative numbers, and double-entry bookkeeping.
TLA+ is 1-indexed, and that’s got two nasty footguns:
list[num % Len(list)] is an error if num is a multiple of Len(list)
if you build a sequence from Nat (which starts at 0), you get list[1] = 0, list[2] = 1, list[3] = 2…
http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD831.html has some interesting discussion on the topic.
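(To make the first footgun concrete, here is a small sketch in Haskell rather than TLA+, with a hypothetical 1-based lookup helper: the naive modulus hits index 0, and the fix shifts to 0-based, wraps, and shifts back.)

    -- Hypothetical 1-based lookup: at1 xs 1 is the first element.
    at1 :: [a] -> Int -> a
    at1 xs i = xs !! (i - 1)

    -- Transliteration of list[num % Len(list)]: fails whenever num is a
    -- multiple of the length, because the modulus is then 0.
    cyclicWrong :: [a] -> Int -> a
    cyclicWrong xs num = xs `at1` (num `mod` length xs)

    -- The usual fix: shift to 0-based, take the modulus, shift back.
    cyclicRight :: [a] -> Int -> a
    cyclicRight xs num = xs `at1` (((num - 1) `mod` length xs) + 1)

    -- cyclicWrong "abc" 3  => runtime error (index 0 is out of range)
    -- cyclicRight "abc" 3  => 'c'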