Cool. Might be worth putting an “about the authors” snippet in there. I like reading things, but it’s really important to consider the source. I feel like a lot of advice for programmers out there isn’t contextualized enough.
Examples of things I want to know:
I really like the idea of anonymous ideas that can be considered and discussed purely based on their merit without clouding the judgement with considerations of source (how often do people blindly accept bad ideas from popular sources and never even consider possibly-good ideas from less-known ones?).
Whether an idea is beneficial in some particular context can be discussed and considered by readers. No advice should ever be applied as-is anyway.
I think even thinking of it as “advice” is not ideal. Anything published out there (Internet, journal, newspaper, whatever…) should be considered “ideas”, which may be helpful to your own ideas. “Advice” has a very authoritarian feel to me, and authority is generally meant to dominate, not empower. Peer-to-peer idea sharing is meant to mutually empower.
I enjoyed the talk and like the broad direction of his thinking.
He’s just really sloppy with some dichotomies and conclusions; the one that disappointed me most was “abandoning obsession with precision”. Precision in expression is the greatest contribution of the computer age: programming a computer has enabled, rather than crippled, many people’s abilities to understand abstract concepts, so it is certainly not something I’d want to water down.
I finally found time to watch the video. I found it really inspiring.
I have a different understanding of the “obsession with precision” comment: often when interacting with computers we are forced to make commitments that we are not ready to make. When drawing a line in a drawing program, the line is not approximately somewhere; it is at precisely the pixels your mouse was at. The uncertainty that comes with scale is usually lost. So when another user looks at the document, it is difficult to know whether the intention was to put the line exactly there or only approximately there. For thinking and communicating, these different intentions have to be represented, which is not usually the case today.
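To make that a bit more concrete, here is a minimal sketch of what representing intent alongside a coordinate could look like. The types and names are hypothetical, not taken from any real drawing program; the point is only that “exactly here” and “roughly here” become distinguishable data.

```haskell
module Main where

-- Hypothetical sketch: pair a drawn coordinate with the author's intent,
-- so a later reader of the document can tell a deliberate placement
-- from an approximate one.
data Intent = Exact | Approximate Double  -- tolerance, e.g. in pixels
  deriving Show

data Anchor = Anchor
  { ax     :: Double
  , ay     :: Double
  , intent :: Intent
  } deriving Show

-- Two anchors that would render identically but carry different commitments.
precise, rough :: Anchor
precise = Anchor 100 200 Exact
rough   = Anchor 100 200 (Approximate 15)

main :: IO ()
main = mapM_ print [precise, rough]
```

A tool aware of such a type could, for instance, snap `Exact` anchors but leave `Approximate` ones free to be re-laid-out.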
On a different note, many developers I have encountered are reluctant to allow inconsistent representations, or have a very clear view of the consistency boundaries, and thus seem to be obsessed with precision. Representing uncertainty in a way that isn’t cumbersome to interact with is quite difficult.
Abstractions are evil. They hide information that the designer thought was unimportant. How arrogant!
I strongly disagree with you (though I guess that’s the point of the question). Badly used abstraction is awful, but you sometimes need a mental compression algorithm of sorts to be able to even think about what you’re working on. If it’s possible to keep the entire project in your head, then by all means do so, but people have brains with less space than a project like the Linux kernel takes up, and we need abstraction to even think coherently about things of that scale.
I do not think you disagree with me at all! You even said a key word ‘compression’. Abstractions are evil because they are a lossy compression. You can also have lossless compression.
The options are not No Compression vs Lossy Compression. There is a third option.
Abstractions, by definition, are NOT a lossless compression.
How do you differentiate abstractions and lossless compression? I think we may mean different things by abstraction here. Can you post code you would consider an abstraction and code you would consider lossless compression?
Imagine two versions of an image class:

// First interface: the pixel data is hidden behind the operations
// the designer anticipated (only blue or red images).
class Image {
public:
    void resize(int, int);
    int width() const;
    int height() const;
    void makeBlue();
    void makeRed();
};

// Second interface: the same operations, but the underlying data is exposed.
class Image {
public:
    void resize(int, int);
    int width() const;
    int height() const;
    // do whatever pattern you want
    char *pixels();
};
The first one provides an abstraction of an image. The designer thought you would only ever want blue or red images and it hides the underlying representation.
The second one is not an abstraction but a lossless compression. The designer thought you may want blue and red images, but also didn’t hide the data. This is compression and this interface is infinitely more useful than the first. A compression provides a name to data and operations without throwing away the details should you need them.
While you can argue that the first one is just a bad abstraction and the second one is a good abstraction, I would argue that the first one is a bad abstraction and the second one is NOT an abstraction. Why? Because the second one does not throw out information.
Even though the designer of the second one was also shortsighted (he didn’t anticipate making green images), the interface won’t prevent the user from doing so.
Is the second one more dangerous than the first? HELL YES. But I think this is where Design by Contract can assist in regaining safety, since any function or method that uses the compression can define predicates that validate correct use.
To me that almost feels like an abstraction with a shortcut around it, which I fully support. Again, I don’t think we disagree except for semantics. I like abstractions, but I also like ways to circumvent abstractions if needed, and I think the difference in your two examples is that the first provides abstraction in a void, where the second one provides a way to avoid the given abstractions or even create your own.
I think the word abstraction is overused and confuses people. Abstractions throw away information and retain only the information necessary for a particular use. Calling my second example ALSO an abstraction isn’t useful because the second example is qualitatively different. You can call it “Abstraction with escape hatch”, but I just prefer to call it a compression.
By the way, I am very guilty of creating abstractions that are terrible. This isn’t easy.
I suppose you’d prefer to code on analog circuits then, not wanting to abstract details like discreteness of transistor states.
Imagine the kind of software you could write if you were able to control the discreteness and transistor states, should you choose to. An example I can think of is an FPGA. While FPGAs don’t provide that level of control, they do provide more control than a regular CPU, and you can do amazing things with them.
If the designers of the transistor and discreteness could have provided you that control, they probably would have. Imagine programming software that utilizes analog circuits for REAL fuzzy logic or creating ternary systems on the fly.
If you can manipulate the very nature of matter with software, would you really argue that you should hide it behind a wall of abstraction?
I’m more than happy playing with that world, but I am also more than happy to, say, execute highly discrete, deterministic combinatorial algorithms. Since we truly live in the first world I’m happy for abstractions that allow me to pretend I live in the second.
And you said a key word ‘pretend’. Did you know some developers are not even aware of the first world??! They do not pretend!
I’m pretty alright with that so long as their combinatorial proofs are correct. I’m completely willing to judge the thing built atop the abstraction and the implementation of the abstraction itself separately.
On the other hand, if the abstraction is a poor place to stand and must be dismantled then the proofs atop it might all go away. This is why mathematics tries to build theories and models both—even if your models get invalidated in some way or another your theories still hold. Then again, if your theories have no models at all then they have no use and may even be inconsistent.
I guess the only concern here is that gates, boolean logic, and discrete states are all abstractions over just wrangling raw current!
Is it possible you are the same individual that programmed this game?
HAHA, my assembly is nowhere near that awesome!
“hard” is not the same as “evil”.
I’ll try again:
Haskell’s way of doing typeclasses is not only fundamentally flawed, but is the cause of one of the biggest issues when trying to improve Haskell’s mediocre module system.
All Haskell typeclass instances are potentially incoherent, and code expecting coherence is wrong and broken.
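The flavor of the problem can be shown in a single file by opting in via the IncoherentInstances extension. This is a deliberately small sketch, not the module-level bug itself (the class and instance names are made up), but it shows GHC picking different instances for the same type depending on where the constraint is solved:

```haskell
{-# LANGUAGE FlexibleInstances, IncoherentInstances #-}
module Main where

-- Hypothetical class with a catch-all instance and a specific one.
class Describe a where
  describe :: a -> String

instance Describe a where
  describe _ = "something"

instance Describe Bool where
  describe _ = "a Bool"

-- Here the constraint is solved against the unknown type 'a', so GHC
-- commits to the generic instance, even when the caller passes a Bool.
poly :: a -> String
poly = describe

main :: IO ()
main = do
  putStrLn (describe True)  -- solved at type Bool: "a Bool"
  putStrLn (poly True)      -- solved at unknown type: "something"
```

The same term, the same type, two different instances chosen; that is what coherence is supposed to rule out.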
I don’t think that’s really a controversial opinion in the Haskell community—it’s almost just factual. Everyone agrees they’re flawed and mess up modules. Diamond dependencies cause obvious problems, orphans are often necessary and always terrible, everyone fights over the name space. Module proposals nigh universally fall flat because they’re stopped by typeclasses.
The counterpoint is that they’re capturing an interesting idea which nobody else has yet done better, though. SML modules make you pay for a certain kind of rule composability more than you maybe ought to while typeclasses make you pay perhaps quite a bit less than you should.
I’m still in love with modular type classes. Better still, I can remain blissfully in love with them until someone actually implements them and I realize I still don’t like the result.
You’ve said this so many times and yet many people have pointed out it’s wrong. Coherency can and does exist.
“many people have pointed out it’s wrong”
Many people pointed out that they didn’t like hearing it, but not a single one managed to show how it is wrong. I guess it’s pretty hard, even for die-hard Haskell fans, to argue with a six-year-old bug ticket with a proof of concept attached sitting in their own bug tracker.
“Coherency can …”
Well, if you say goodbye to any useful module system, it can.
“… and does exist.”
No, it doesn’t. Yes, it might exist in some obscure, outdated niche compiler, but certainly not in the compiler 99% of Haskell developers use.
Controversial opinion: I’ve never seen a useful module system.
The Haskell Language standard guarantees coherency. GHC violates the specification because of a bug.
OK, that is the first thing on this thread that I find truly controversial. Would love it if you expanded on it a bit.
Well, one wouldn’t have to try too hard to get something better than Haskell’s status quo.
Even something which would stop forcing developers to decide whether they want to install dependencies globally and potentially break unrelated software, or install dependencies into a sandbox and keep compiling the same dependency over and over again for every single sandbox, would be a worthwhile improvement.
This would be less ridiculous if there was an existing viable alternative.
But if you have a look at Hackage, even if there were a Haskell compiler which didn’t ignore one of the most important guarantees of the language, it wouldn’t matter, because too many libraries are written in “GHC lang” instead of Haskell.
Until this changes, instance coherency in Haskell is just wishful (and extremely dangerous) thinking.
As I’ve pointed out to you many times, -fwarn-orphans -Werror ensures the bug does not happen. Yes, the bug still sucks, but I would nevertheless encourage turning orphans into errors.
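For anyone unfamiliar, here is a toy single-file orphan. Num and Bool both come from base, so this module defines neither the class nor the type; that is exactly what -fwarn-orphans reports at compile time, and -Werror turns the report into a hard error. (The instance itself is just a throwaway “Bool as arithmetic mod 2” example.)

```haskell
{-# OPTIONS_GHC -fwarn-orphans #-}
module Main where

-- Orphan instance: neither Num nor Bool is defined in this module,
-- so -fwarn-orphans flags the declaration below.
instance Num Bool where
  fromInteger = odd   -- an integer is "True" iff it is odd
  (+) = (/=)          -- addition mod 2 is xor
  (-) = (/=)          -- so is subtraction mod 2
  (*) = (&&)
  abs    = id
  signum = id
  negate = id

main :: IO ()
main = print (2 + 3 :: Bool)  -- (2 + 3) mod 2 = 1, i.e. True
```

The program still compiles and runs with just the warning; the whole point of the flag combination is to refuse to let such instances slip into a library silently.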
There is probably a reason why -fwarn-orphans -Werror has not been made the default at any point in time for the last six years.
Even if you would enable orphan checking for your own code, what percentage of packages on Hackage has been tested with these flags?
It would break lots of existing things. No reason to allow orphans in any new stuff.
People have lots of reasons why they would want to use orphans.
There is a reason why the “let’s deprecate and disallow orphan instances”-move never happened (and will probably never happen).
Just like there are enough people who think depending on non-existent guarantees is perfectly fine, there will be enough people who come up with reasons for orphan instances (although I can see more valid reasons to use orphan instances than to believe instance coherence exists).
Controversial opinion: there are no valid reasons for orphan instances.
Interesting experiment: try to convince every author of a package on Hackage which uses orphans to change their library.
(Although I guess that doesn’t really capture the real picture, because most usages of orphans are by definition not in the same place as the class/type definitions.)
It is easy to dismiss this example as an implementation wart in GHC, and continue pretending that global uniqueness of instances holds. However, the problem with global uniqueness of instances is that they are inherently nonmodular: you might find yourself unable to compose two components because they accidentally defined the same type class instance, even though these instances are plumbed deep in the implementation details of the components.
“install dependencies globally and potentially break unrelated software” or
“install dependencies into a sandbox, and keep compiling the same dependency over and over again for every single sandbox”
I don’t agree we need a different module system. I use Nix to solve these problems.