Web software and the “web platform” are the antithesis of portability, security, freedom, performance, robustness, and good software in general.
I don’t know if that’s technical enough.
I’ll go one further: the web has set back the practice of programming at least 25 years, probably more. It promotes speed over contemplation, which means we ignore the history of the discipline and repeat mistakes far too quickly. Hell, we don’t even know they’re mistakes.
As an extension of this: all browsers are absolutely beyond horrible.
Why does a document viewer require more code (despite using more concise languages) than an entire operating system? Why does it take orders of magnitude longer to compile one web browser than it does to compile every single other piece of software on my computer? Why do browser vendors spend all their time implementing features whose sole purpose is for webdevs to make their pages obnoxious as hell? Don’t forget all the security holes!
The best recent example of garbage nobody cares about that I can think of at the moment is allowing .everythingunderthesun domains. Can’t wait for there to be 400 million clones of my bank’s website, which my browser will helpfully never warn me about. Even something like the following would be useful:
| This webpage is probably terrible |
| Nobody in the world has made a |
| website worth visiting using |
| a .botanicalgarden.museum TLD. |
Strongly agreed. Regardless of which browser I try to use, they’re consistently bug-ridden and crash-prone, bog down after running for a while, suck up unbelievable amounts of RAM (which they then proceed to leak all over the place like the proverbial sieve), and do utterly idiotic things with their UIs. Remind me why we want this as our application-delivery platform of choice? (Oh right, because the damned things are everywhere.) Sigh.
(And incidentally, while I agree with you on the recent TLD population explosion, .museum, perhaps somewhat oddly, has actually been around since the early days.)
I agree that it causes a lot of problems, but it solves many too. You are wrong about portability: I can open Gmail on Windows, Mac OS X, and Linux, and it looks exactly the same. Run Thunderbird on all of those platforms, and it’s completely different.
Also, data portability. If I were to run Thunderbird on three machines, I’d have all the data cloned, and it would be syncing to the internet anyway.
You are wrong about portability
How many million lines of code is your browser and all its deps, plus the compiler and other infrastructure needed to actually compile and run it? And how much pressure does it put on your OS, as far as system calls, library functions, and other features (including nitty-gritty implementation details) go? How far does it go in assuming commodity hardware and being a pain on everything else?
You can call it portable if your definition of portability is that – over the decades – the code has already been ported to some specific systems. Along that line of arguing, you’d also have to declare every other system irrelevant. That declaration however doesn’t make it a single bit easier to port that 50Msloc stack of software to other systems.
For me, portability is something that involves taking source code and porting it to a system it doesn’t run on yet. I don’t really want to be the guy who ports (and maintains a port of) a modern browser along with all the other stuff it requires, on some non-mainstream system. It’s too much.
Oh yeah, you also have to declare older computers (including mine, even though it’s only four to five years old) irrelevant. I can open Gmail on my little netbook and soon enough it’ll either crash the browser or just swap so much I wish I never did that.
I can open Gmail on my little netbook and soon enough it’ll either crash the browser or just swap so much I wish I never did that.
Try the basic HTML view.
The web should go back to documents, not applications. Maybe actually make it easier for the web to be used for editing - something that was intended at the start but never quite happened.
Could not have said it better myself! Not sure how I managed to avoid web software throughout my career, but I have!
I’ll lay this down:
Exceptions are an inherently poor abstraction as implemented in most languages. They are leaky in that they force you to understand the underlying code for what often should be a black box. They make code harder to understand, as they are effectively a GOTO whose scope is determined at runtime. And most languages don’t allow a way to encode them into the type of a function, making them both unchecked and undocumented by the type system. You are beholden to the author to document their code adequately.
Using an error monad in a language like OCaml is superior to exceptions in almost every way.
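For anyone unfamiliar with the style, here’s a minimal sketch of the idea (in Rust’s `Result` type rather than OCaml, since the idiom is the same; all names here are illustrative): the failure modes are part of the function’s signature, so the compiler won’t let a caller silently ignore them, and there is no non-local jump whose scope gets decided at runtime.

```rust
// A hypothetical parse step whose failure modes are spelled out in its type.
#[derive(Debug, PartialEq)]
enum ParseError {
    Empty,
    NotANumber,
}

fn parse_age(input: &str) -> Result<u32, ParseError> {
    let trimmed = input.trim();
    if trimmed.is_empty() {
        return Err(ParseError::Empty);
    }
    trimmed.parse::<u32>().map_err(|_| ParseError::NotANumber)
}

fn main() {
    // The caller has to pattern-match; nothing unwinds the stack invisibly.
    match parse_age("42") {
        Ok(age) => println!("age = {}", age),
        Err(e) => println!("bad input: {:?}", e),
    }
    assert_eq!(parse_age(""), Err(ParseError::Empty));
    assert_eq!(parse_age("abc"), Err(ParseError::NotANumber));
}
```

The trade-off debated below is exactly this: every call site must decide what to do with the `Err`, which reads as either discipline or noise depending on your taste.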
Not sure if this is controversial …
Depends on your target audience, I suppose. Saying static typing > dynamic typing won’t raise many eyebrows at the Haskell Symposium but might not go over so well at PyCon.
Ok, maybe controversial: Using exceptions for control flow is perfectly fine if it is the only thing exceptions are used for.
As much as people might groan or raise an eyebrow, Java’s checked exceptions solve most of the problem here. Since they’re part of the method signature, they’re really not leaky…
Except that Java also supports unchecked exceptions, which fucks the whole thing up. Error monad for the win.
I’m not a Java programmer, but I’ve read articles criticising checked exceptions. Could anyone provide more details about the pros and cons of checked and unchecked exceptions?
IMO the problem with checked exceptions in Java is mostly that they are not inferred. For example, if I write a method that calls something with a checked exception that I want to propagate up several levels, each level has to explicitly state that it throws every checked exception under it. Like most things in Java, the annotation is verbose and heavyweight.
Plus, they interact terribly with lambdas and SAMs, which gives rise to fun things like UncheckedIOException.
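For contrast, Rust’s `?` operator behaves roughly like the inferred propagation being wished for here: each function states its error type once in its signature, and `?` bubbles failures upward with no per-call ceremony. A small sketch (the function names are made up for illustration):

```rust
use std::num::ParseIntError;

// Each layer declares its error type once in the signature; `?` propagates
// failures without restating anything at each call site.
fn parse_one(s: &str) -> Result<i64, ParseIntError> {
    s.trim().parse::<i64>()
}

fn sum_line(line: &str) -> Result<i64, ParseIntError> {
    let mut total = 0;
    for field in line.split(',') {
        total += parse_one(field)?; // early-returns the Err on failure
    }
    Ok(total)
}

fn main() {
    assert_eq!(sum_line("1, 2, 3"), Ok(6));
    assert!(sum_line("1, x, 3").is_err());
}
```

This is still checked in the Java sense (the failure is in the type), but the propagation boilerplate is one character instead of a `throws` clause at every level.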
They are leaky in that the information you generally use to debug is the stacktrace.
A debugger usually can expose a backtrace. Are you suggesting that every language that has a debugger is a leaky abstraction?
The fact that an exception includes a backtrace (or has the necessary information to generate it) is a convenience to the programmer because she didn’t have a debugger attached. Implementing exceptions and generating a backtrace are completely orthogonal.
No, a debugger’s whole point in life is to break through abstractions. This is fundamentally different than the point of errors in a program.
This is fundamentally different than the point of errors in a program.
Only if you squint. A debugger allows you to break through abstractions such that you can debug the underlying abstraction. Exceptions and error handling exist to guard invariants of your program, but if that were their only purpose we’d never log that errors occurred, we’d just fly blind.
I understand that your argument is based on the purity of abstraction, but I’m basing mine on pragmatism, which is really all that matters when handling and recovering from errors. If it’s broken, it doesn’t matter how pure the abstraction is; it’s not working.
Aside: Are log messages that indicate that an error occurred leaky?
I don’t really understand what you’re saying there. I’m not arguing against error handling; I’m arguing that exceptions are a bad way of doing errors. Among other reasons, because stacktraces are leaky (although that is certainly not the most important reason to dislike exceptions). From a pragmatic perspective, I almost never use exceptions in my OCaml code and have not felt the sting of lacking a stacktrace, because the error monad forces me to make decisions about errors at every level of the code anyway. What ‘pragmatism’ are you basing your point of view on?
My argument so far has been about the leakiness of exceptions. I think it’s silly to base an argument against exceptions on that point (you enumerated other, more important points, which I agree with). Hiding information when an error occurs makes it more difficult to find the root cause and fix it (if it can be fixed).
I will argue one of your other points, though: exceptions are GOTO. So too can an error monad be! Throwing an exception unwinds the stack. Choosing to handle exceptions in the Error monad by bubbling the error up the stack has the exact same effect!
Now, in the presence of checked exceptions (and only checked exceptions), the two are essentially equivalent. The programmer is choosing (with a declaration, in a language like java) to bubble the exception up. The programmer in the type-safe language is choosing to bubble up in the Error monad (perhaps with the inconvenience of returning a different type at each point).
Where this, of course, breaks apart, and I’ve mentioned this above, is in the presence of Runtime exceptions, in which case the error monad is clearly better.
Hiding information when an error occurs makes it more difficult to find the root cause and fix it (if it can be fixed).
I think this is the root of our disagreement. My argument is, more or less, that with an error monad (and a bit of discipline) the compiler will force me to handle errors at each step. In that sense, I will not hit a situation where an error occurred that I was not expecting. Thus, I’m not hiding information, I’m making all of this information available at compile-time.
But, I’ve only worked on my own, relatively small, projects in Ocaml so perhaps I’m wrong.
But it’s still possible, as it is with checked exceptions, for the underlying library to handle an error in a way that throws information away. Now, if you write everything from the ground up and are the only maintainer, then your point is fine. But most software isn’t maintained (or operated) by a single author all of its life, and most bugs are triggered at times when you’re not expecting them (aka 4am), according to Murphy.
What ‘pragmatism’ are you basing your point of view on?
If I’m fighting a fire trying to figure out what happened, I want as much information as I can possibly have about where the error occurred, and why. Backtraces aren’t even enough, because they don’t give me any of the arguments that caused the exceptional case.
In regard to “forces me to make decisions”: the decision might be premature. You may not have reasoned through all of the incredible ways that the program might fail (you only know the type of the last error that occurred, which was handled by someone else, who is possibly making different assumptions). You need to appease the type checker, so you handle the error in the best way you can at the time, but when an error actually occurs, what do you even know?
Disclaimer: I’ve never worked on a large code base where the error handling mechanism is the Error monad, and only the Error monad. I have worked on code bases in a non-type-safe version of this, Go, which uses an error type to indicate failure, and we’re pretty good about not ignoring them. Again, it’s not the same as using a type-checked Error monad, so I may be completely missing some enlightening experience.
Here’s one of mine: vim/emacs do not significantly increase productivity.
The undue fixation on editors is common among younger devs, who usually use dynamically typed languages and have to write a lot of code. They routinely confuse the time spent editing code with the time spent programming. And since they’ve just put significant time into their editor, few are about to admit that it was wasted time. Maintaining extensive configs for vim/emacs is a timesink in and of itself.
This is part of the over-emphasis that devs place on the act of writing code vs maintaining it. The hype cycle of industry is firmly locked to the dopamine hit of “wow, it works?!” Devs would rather chase this fleeting, temporary feeling over building robust, reliable systems with more boring tech. It’s like they’ve internalized that software development always has a certain pain threshold, and they’ll ensure that they reach it, no matter what.
Controversial counter-opinion: people who use vim or emacs do not use them for productivity gains, but for the pleasure they give. The justifications are about productivity, but the real reason is pleasure. The same pleasure you get mastering a musical instrument.
Absolutely. Editing text in vim is so good. I just wish it would understand what I’m writing more.
It’s not that I wish people use them less, but that they’d expect more from their tools.
To what end? Imagine the ultimate IDE. You take the anal probe and stick it in. It immediately creates a program that you are thinking about, and maybe some you didn’t realize you were thinking about (think porn aggregation). You squeeze your butt cheeks and it runs.
Why an anal probe and not something that you wear on your head? Because there is no pleasure in wearing something on your head, just as there is no other pleasure in using that type of IDE.
As soon as programming stops being enjoyable, you can count me out.
P.S. I hope you can see the humor in my response. I really do think tools need to improve.
I have always admitted that this is exactly why I use emacs. I have no illusions that it makes me more productive; I consider it a vice to be honest.
Can’t agree with this one enough. When people ask me what my preferred development environment is, I usually say it’s whichever has the best support out of the box for whatever language or technology I’m using.
I usually spend a lot more time thinking than writing code.
Hell, I still use ed as my main editor and tmux manages my editor windows. It’s surprisingly easy and robust.
I need to play with tmux more. Sometimes, I’d love to be able to split my screen and focus more. I need to wean myself off graphical Vim first, though.
I still like VIM-like keybindings, so I use a setup similar to this gist. I’ve gone ahead and annotated each line so you can get an idea of what each one does.
Additionally, I’ve found that the following shell aliases are highly useful (I can never remember the session management commands):
alias tn='tmux new -s'
alias ta='tmux attach -t'
alias tl='tmux list-sessions'
Nice, just added to my bashrc. Number of characters in commands is TOO DAMN HIGH
I like this one, even as an adherent. Especially as an adherent. There’s definitely a whole lot of fiddling-for-pleasure.
On the other side, do you think that there are cognitive tradeoffs to using IDEs? (I know that’s not your original point, which is programming vs. thinking, but curious)
I think IDEs dispose you to think in terms of larger, more monolithic projects. You can believe you need to decouple and ship the smallest pieces possible, but if the primary workflow is one of amalgamating source files into one giant project, that’s what’s going to occur, because it’s easiest.
I also think IDEs make you care too much about them, such as manually merging project files (silly). The UIs are usually bloaty, start-up time can be anywhere from great to terrible, and they often smell of Big Software Syndrome, where you aren’t meant to use most of the features; instead they’re for checking boxes for an enterprise paper pusher.
IDEs usually have crappy editors and decent-good code analysis tools. Actually, I think I believe less in IDEs and more in analysis tools that are integrated with the editing experience. Still lots of room for improvement here.
“IDEs usually have crappy editors and decent-good code analysis tools. Actually, I think I believe less in IDEs and more in analysis tools that are integrated with the editing experience. Still lots of room for improvement here.”
I like e.g. ensime, for when I’m working in Scala
Big data is a scam.
As a statistician (as much as I am): yes!
Real knowledge is orders of magnitude more difficult than just gathering data. Worse, “just gathering data” perhaps blinds the practitioner to the work that would build knowledge. It’s solutions all looking for problems.
As I saw somewhere on twitter:
That’s not big data. That’s just a slow query.
I keep repeating that mantra at $WORK. I’ve been trying to explain that since our data set fits on my phone, we do not have a big data problem, and applying traditional big data solutions will slow us down even further. This suggestion is falling on surprisingly many deaf ears. Not sure if it’s actual incompetence or just wilful ignorance because they want to play with the tools despite not being suitable for their problem.
…they want to play with the tools despite not being suitable for their problem.
Indeed. Further, I am constantly upset by the number of people who say they have “big data” when their dataset fits in memory. =(
There was an incredible talk at !!Con last May regarding this. https://www.youtube.com/watch?v=jw-3Ufd_u4c
I think ‘big data is not a scam’ would be more controversial these days.
I’ll try this:
Object-oriented languages, and object-oriented design in general, are a bad idea. Data structures and functions are not the same thing and should not be treated as such; state in general is bad, let alone multiple private states; and the amount of useless boilerplate and cruft enforced is incredibly damaging to productivity.
Not so controversial of an idea here. Other places it is borderline religious.
I’ll also add that members are often just globals with a slightly smaller scope. It pains me to see classes where methods just reference dozens of variables outside of their function scope. Bonus points if they use inheritance.
Objects - ML Modules - Mutability - Inheritance = ?
I started to think recently that the only thing that is at the fundamental core of OO is dynamic dispatch, everything else can be emulated one way or another without too much pain involved.
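A minimal illustration of that claim, sketched in Rust (which deliberately omits most of the classical OO bundle): trait objects give you dynamic dispatch on its own, with no inheritance and no encapsulated mutable state involved.

```rust
// Dynamic dispatch isolated from the rest of the OO bundle:
// no inheritance, no mutable private state, just a runtime-chosen method.
trait Shape {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }
struct Square { side: f64 }

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}

impl Shape for Square {
    fn area(&self) -> f64 { self.side * self.side }
}

fn main() {
    // The concrete type behind each Box<dyn Shape> is resolved at runtime.
    let shapes: Vec<Box<dyn Shape>> = vec![
        Box::new(Circle { r: 1.0 }),
        Box::new(Square { side: 2.0 }),
    ];
    let total: f64 = shapes.iter().map(|s| s.area()).sum();
    println!("total area = {:.2}", total);
}
```

Everything else (encapsulation via modules, polymorphism via generics) is being emulated here by non-OO features; the one thing that had to come from the OO toolbox is the `dyn` dispatch.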
I’d argue final encodings are the more fundamental and interesting thing OO focused on.
a) I’m not sure it is even showing an interesting example of what dynamic dispatch can do.
b) That’s exactly what I meant with “without too much pain involved”.
You can build everything in every language, but will it survive more than 5 seconds in code review?
I disagree. I agree that it is not the one-size-fits-all solution many companies, communities, and developers make it out to be, though. But in some situations, it’s a pretty natural and clean way to solve a problem.
Ok. You win.
Your opinions are controversial. ;-)
Databases should be avoided.
As a DB engineer, I disagree. But people do need to be more judicious about what databases they use, people pick the wrong DB most of the time.
Also, a filesystem is a database. It just has different access patterns and a different API than a relational / other database. If it fits your performance and feature needs, it’s a legitimate, understandable, and portable database choice.
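As a toy illustration of that point, here’s a sketch (in Rust; the directory name and keys are made up) treating a directory as a tiny key-value store, one file per key:

```rust
use std::fs;
use std::io;
use std::path::Path;

// A directory as a key-value store: the filesystem supplies the
// storage engine, the API is just read/write on paths.
fn put(dir: &Path, key: &str, value: &str) -> io::Result<()> {
    fs::create_dir_all(dir)?;
    fs::write(dir.join(key), value)
}

fn get(dir: &Path, key: &str) -> io::Result<String> {
    fs::read_to_string(dir.join(key))
}

fn main() -> io::Result<()> {
    let dir = std::env::temp_dir().join("fs-as-db-demo");
    put(&dir, "user-42", "alice")?;
    assert_eq!(get(&dir, "user-42")?, "alice");
    Ok(())
}
```

Atomicity, indexing, and concurrent writers are where this stops being adequate and you reach for a real database - which is exactly the judiciousness being argued for above.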
Not sure if controversial:
A lot of software engineering as an area of research and practice is at best folk wisdom. The uncertainty and risk of software is sufficiently great that we’ve created a series of narratives and methodologies that support the greater idea that if we just do it this way, we can succeed. It’s not without worth, but I think we’ve created an entire narrative genre that exists simply so we don’t have to directly face the problem that we are reckoning with systems none of us have the capacity to fully understand.
The very name “software engineering” is suspect, as the field lacks the rigor of any real engineering practices. It is difficult to know whether something will or will not work beforehand, and the design trade-offs of individual decisions may often be poorly understood.
As it is I like to think of it as a social science or a rhetorical practice, which isn’t necessarily a bad thing when informing decisions around team structures and the like. Negligence of trade-offs is absolutely a blind spot.
I’ve got a few. I’m sure I’m wrong, but it’s cathartic to say it out loud.
Basically all software sucks, now get off my lawn!
Startups aren’t a good idea financially unless you’re a cofounder.
I agree, but here’s a counterpoint. If you work at a startup you’ll get opportunities to work with all sorts of new technologies and solve all sorts of problems. Most other places won’t give you that much exposure and education in such a quick fashion. You can leverage those new skills towards higher pay in the future.
Also, even if you’re a cofounder, the same tradeoffs are at play, but at a higher intensity. It’s still not worth it financially (i.e. expected value is less than what a normal wage would be), but the skills you learn will help you earn more in the future.
Technology is inherently political and not particularly meritocratic, just like every other field of human endeavor.
Also, Unix is FUCKING TERRIBLE, and it doesn’t matter at all.
And: much of the “NoSQL/Big Data” movement is a result of being kicked in the teeth by how bad an abstraction ActiveRecord is, and while there certainly is a problem domain that is appropriate for e.g. Cassandra, that domain is much smaller than is currently projected.
Strong agreement around the AR bit. If it weren’t AR it’d be because everyone dislikes the syntax of SQL: it’s not very “readable”.
You have a little LISP machine in you, don’t you?
I’m old enough to have used one, once, but I’m not a dreamy-eyed revisionist.
Technology is not just figuratively political, it is deeply political. Most technology that got popular did not get resources because it was good, but got good because it got resources. Typically those resources ARE because of politics.
This is a wonderfully concise way to put it.
Can you clarify your point? I’m curious.
It’s jwz’s proverbial bookshelf built of mashed potatoes – it is a hairy bag of patches and diffs on one take on the way the world worked in a corporate research lab in the ‘70s. We don’t live in that world any longer, but the context is baked into the way Unix works, and we all pay and pay and pay. And it doesn’t matter, because we’re stuck with it.
Do you have an example of a system that you think is different that doesn’t get enough attention?
Perhaps you should read the Unix-Haters Handbook – written long ago (published in 1994), but mostly still relevant today!
OO is really cool and will be rediscovered by the FP community. When that happens it will be re-built in a much nicer form and piss off all the people who learned OO prior to that point because while it will objectively meet many of the core specifications of OO it won’t “feel” right because it’ll probably heavily eschew mutability. This system probably already exists today and some poor researcher has been writing papers for years to a generally apathetic audience. Nobody will believe it’s true until MixML modules evolve into it.
See: CLOS, Erlang
I have, and to be honest they’re not even close to what I’m thinking of. Erlang/OTP is perhaps one step of the way there, where gen_server encapsulates all of its state.
I mentioned them because both of their object semantics match your description “re-built in a much nicer form and piss off all the people who learned OO prior to that point because while it will objectively meet many of the core specifications of OO it won’t “feel” right”, though only Erlang pushes immutability. I find CLOS closer to the functional paradigm, myself.
OCaml also has OO, after a fashion, as do the hybrid languages like Scala, F#, and Clojure, but CLOS and Erlang/OTP are much closer to your description, in my experience.
Full circle and back to actors and message passing, and away from large Frankenstein mixtures of data structures and algorithms.
Syntax just does not matter at all. There are syntaxes which are objectively bad (ambiguity, runtime dependency), but anything non-terrible is no longer worth talking about.
Genuinely, we’re all just trying to represent some AST and how you get there is your own problem. What the nodes are is everybody’s problem.
To add to the controversy: programming with text is an awful idea. Thinking in nodes, writing them down as a string of characters and then asking the compiler to parse back into nodes is very silly.
I think I agree, but I would put it this way: syntax matters a lot, but almost all syntax is really good. You have to botch it tremendously (or intentionally make it obtuse, cf. Whitespace) for it to be bad enough to actually interfere with use of the language.
That’s part of why lots of syntax arguments (e.g., curly braces versus significant whitespace) seem so irresolvable: the different syntaxes are both really good, to the point there’s no meaningful “quality” difference between them. We’re not comparing them to something like inverse significant whitespace that’s actually bad, so the incredibly minor differences seem grossly exaggerated.
Indeed, as a corollary I’ll add that popular commentary about languages tends to overemphasize lexical syntax to an absurd degree and distracts from more substantive discussion about real open problems like the design of core calculi and type systems. Comments about “readability” of a language are too often presented as if they were actually factual statements.
Complete agreement. And that makes my blood boil just a touch.
Your technical problem is actually a people problem.
visual basic (pre-.net) got a lot of things right - it may not have scaled up very well, but the combination of visual designer, ide and language really was one of the best ways to write a quick gui app and have it look decent that i’ve seen.
If somebody put up a video with light banjoey music showing how easy it generally is to fix day-to-day business issues with it, VB6 would be an Internet sensation.
Another controversial opinion: most of why VB is derided is because its name contains the word “Basic”. Its potential users have monetary and pride incentives to have their tools (and thus their work) not be seen as “basic”. But a huge portion of the technologically-addressable problems that most organizations have are solvable with basic solutions.
basic is not a bad language either. not the best, but certainly not the worst, or one that irretrievably poisons beginners with bad habits (as people love to quote sans evidence). i was perfectly happy to have it as a first language, and even taught myself c by prototyping code in qbasic and then translating it once it worked.
http://prog21.dadgum.com/21.html is a good read too.
Here’s my big one:
No one is ever right about anything, ever.
Object-oriented programming isn’t bad, good, or ugly. FP isn’t the one true way. Exceptions aren’t evil. GOTO is just fine. Mutability and immutability are both reasonable choices in context. Everything has its place, and anyone who tells you otherwise is asserting surety in a situation which demands only doubt. No paradigm, no mode of thought, no set of rules is ever wholly correct. Including this one. In essence, dogma is the only thing which is dogmatically evil, and the right answer is always to choose what is best for your team based on the people working in it. You should try many different approaches both alone and with your team, be open to change, and when you find a better way to do something, embrace it fully.
As the wise man once said – you can hoot and holler about how FP/OOP/Patterns/Go/Haskell/whatever is going to save my developer life and make me ten times more productive, and in the time it takes you to make that argument, I’ll have made 30 apps in PHP and Java and be making money hand over fist using whatever hodge-podge of techniques seems to work at the time. A bad app that makes money is better than a platonic ideal that exists only in your mind.
What about things that have physical limits, such as the CAP theorem? Or what about things with formal proofs?
At the cost of equivocating a bit – stuff like the CAP theorem is so commonly misapplied or otherwise misunderstood (aphyr’s posts about distributed systems bear this out – as he looks at more systems that claim certain properties, he clearly shows they demonstrate others) that I feel comfortable saying that the theorem may be true, but the application is usually wrong.
That said, it is a bit of hyperbole. It’s probably better to say, “No one is ever right about anything (when it comes to philosophy of/approach to programming), ever.” But that doesn’t read as well.
As an engineer, I prefer a well worded and precise statement. I would just say “No one’s opinion is ever right, and if it is, it’s a proof not an opinion” or something.
There are things you can be fundamentally correct about. IMO, the “no one is ever right about anything” sentiment is a problem in technology now. People try to create the very systems you pointed out because, when someone tells them that the system they want to build is impossible, they don’t realize they should listen.
For the record, this ends up having a bit more of a rant-y tone, that rant is not directed at anyone in particular, it’s just an expression of frustration with the phenomenon I am ranting about. apy makes a solid point above and is, I think, a lovely human being who should not take my ranting personally.
EDIT: This double posted, apologies.
Alright, let me hedge, there is a class of things which you can be assured hold true given some set of assumptions. Mathematics is, indeed, pretty difficult to argue with. However, I assert this set is relatively small in the context of what I was (probably not very effectively) trying to get at.
My argument is this – there are a lot of people who make very strong statements about things for which they have no evidence or proof to rely on. I’m speaking about the OOP and FP zealots. I’m talking about the language ideologues. I’m talking about the sort of high-minded verbal flatulence that some people sputter out about every damn thing. Yes, CAP is an unassailable theorem. It is capital-T True. But for the most part, people don’t operate in a world of capital-T Truth. They live in a much muddier world of little-t truths. Where everything is subjective and the points don’t matter. The folks spouting these baseless assertions that dynamic typing is better than static typing is better than this, that, or the other are basically just noise obscuring the signal – nothing is ‘right’ here, it’s just about what works. I’ve worked with a number of teams who get focused on trying to figure out the ‘right’ way to do something – whether how to Agile, or choosing some tool, or whatever. Equally many have been pretty hamstrung by being caught up in a sequence of bad practice decisions that they are inextricably committed to because someone once claimed they were ‘right’ and thus were beyond reproach. It’s dogmatic thinking, and I have a strong allergy to dogmatic thinking.
Unfortunately (and even apparent in my OP) – dogmatic thinking (‘no one is ever right about anything, ever’) is pretty endemic to the class of people who hang out in these sorts of circles (and I think more generally of the engineering type). I do not consider myself an engineer, at least not like you do. Well – that’s disingenuous – I’m a different sort of engineer, I’m a contractor (like the kind that builds, or, especially, renovates houses). Not every house (app/program/whatever) is necessarily newly built and designed perfectly (legacy code), not every approach will work well – yes, there are some fundamental laws of physics to follow (CAP theorem), and often you’ll have to improvise (do something hacky) to meet both code (quality requirements) and deadlines (… deadlines). In my world, well-worded and precise statements are a liability. I live in the grey area I can give to the people above me demanding new features, and the hardware below me demanding maintenance and refactoring. In this metaphor, you get to be the architect, you have the luxury of surety. That’s not a bad thing – we need architects too – but like the guy writing PHP, or adding the extra wall bracing because snow load has been high this year and you’d rather belt-and-suspenders the whole thing than have a collapsed roof on your record, I’m not here to be precise and build pristine apps. I’m here to make my employer money. I don’t get to feel like any one direction is the one true path, because reality rejects such simplicity.
Anyway, I guess what I’m saying is: yes, sure, there are unassailably true things. There are also a plethora of quite assailable things; these are what I have a problem with, and – for better or worse – our community seems fixated on the idea that there is a ‘right’ way to do things, and I just don’t think that’s true. It’s not a particularly pleasant idea – as it means I’m basically arguing that a lot of folks (including, often, myself) who hang around places like lobste.rs and HN and reddit are basically blowhards – but OP didn’t ask for a pleasant opinion that everyone would like. :)
I agree with the overall point you’re making: that people are too quick to think their opinion is a fact. But I disagree with how you’re presenting it. There are many things that are Truth.
In my world, well-worded and precise statements are a liability. I live in the grey area I can give to the people above me demanding new features, and the hardware below me demanding maintenance and refactoring.
You’re drawing a false dichotomy here. Precise statements are what make the grey zone identifiable. Knowing that you are, in fact, unable to make a precise statement about something you’re doing means you know you have a lot of freedom there.
So my point is:
There are things with capital-T truths. You’d better be able to identify and understand them. Otherwise the grey zone you think you’re in may not be a grey zone, and you’ll end up with something that not only makes no sense but could never make sense.
You’re drawing a false dichotomy here. Precise statements are what make the grey zone identifiable. Knowing that you are, in fact, unable to make a precise statement about something you’re doing means you know you have a lot of freedom there.
I can see that – it’s definitely not a strong argument. I think we probably fundamentally disagree on how much stuff is ‘Truth’ (which maybe is better called ‘Laws’, like gravity), or at least, how much Truth is relevant to the day-to-day process of making software which makes money; but I suspect that the difference in our respective perceptions not only isn’t material, it’s actually probably valuable (since different approaches, in my experience, make for better software).
I really like this one. I’m a sucker for the dramatic hyperbole.
Features of object-oriented and functional languages have large overlaps and are often the same, just from a different point-of-view.
Sometimes, both approaches make a lot of sense and depend on the use-case at hand.
Example: OO’s dynamic dispatch and FP’s typeclasses. They complement each other extremely well, but sometimes one is more appropriate to use than the other one.
Having options at your disposal enables developers to mix-and-match/pick the right abstractions, without pushing some abstractions to the breaking point (like in Haskell … “if you only have typeclasses, everything looks like a potential instance”).
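As a rough sketch of that contrast, here is the same pair of ideas in Python: method dispatch for the OO side, and `functools.singledispatch` standing in for typeclass-style per-type instances (the shape types and operations are invented for illustration):

```python
from functools import singledispatch

# OO-style: the operation lives inside the type, dispatched on the receiver.
class Circle:
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

class Square:
    def __init__(self, s):
        self.s = s
    def area(self):
        return self.s ** 2

# Typeclass-style: the operation lives outside the types, and
# implementations are registered per type (like declaring instances).
@singledispatch
def perimeter(shape):
    raise NotImplementedError(f"no perimeter instance for {type(shape)}")

@perimeter.register
def _(shape: Circle):
    return 2 * 3.14159 * shape.r

@perimeter.register
def _(shape: Square):
    return 4 * shape.s

shapes = [Circle(1), Square(2)]
print([round(s.area(), 2) for s in shapes])      # method dispatch
print([round(perimeter(s), 2) for s in shapes])  # registered "instances"
```

Both forms express the same polymorphism; which reads better depends on whether new types or new operations are added more often.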
Smug FP weenies are just as boring as OO pattern advocates.
I once read someone describe methods as functions that happen to be partially applied with the same set of data. Made me see FP vs OOP in a new light.
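That idea fits in a few lines of Python: a hypothetical counter "object" built from nothing but closures, where the captured state plays the role that `self` plays for methods:

```python
# A "class" built from closures: the constructor captures the data, and
# each "method" is a function partially applied with that same data.
def make_counter(start=0):
    state = {"n": start}
    def increment(step=1):
        state["n"] += step
        return state["n"]
    def value():
        return state["n"]
    return {"increment": increment, "value": value}

c = make_counter(10)
c["increment"]()
c["increment"](5)
print(c["value"]())  # same behaviour as a Counter object with two methods
```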
Along the same lines: http://c2.com/cgi/wiki?ClosuresAndObjectsAreEquivalent
You should definitely read some of Oleg’s work on implementing OO in Haskell. It’ll explain not only methods very well, but also the effect system implicit in OO along with what “open recursion via this” ends up meaning. Highly recommended!
Incrementalism, conformity, risk-aversion, and the quest for “backwards compatibility” and “performance” (basically C and UNIX) have led us to a place where we’re doing almost everything the wrong way.
Runtime sub-typing is the worst kind of polymorphism. It’s difficult to determine what code is being executed at runtime just by reading the code. Interface guarantees are often not enough to know whether something “is the same thing”. Sub-typing can also be implemented manually with records and parametric polymorphism. Doing it manually also makes the cost of runtime sub-typing clear: you’ve got to carry around this dispatch table + data.
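The "manual" version above can be sketched in Python, with the dispatch table and the carried data made explicit (the shape record and table are invented for illustration):

```python
from dataclasses import dataclass
from typing import Any, Callable

# The "vtable" is just a record of functions; a value is the table plus
# its data. The indirection, and its cost, is visible in the types.
@dataclass
class ShapeVTable:
    area: Callable[[Any], float]
    name: Callable[[Any], str]

@dataclass
class Shape:
    vtable: ShapeVTable
    data: Any

circle_vtable = ShapeVTable(
    area=lambda d: 3.14159 * d["r"] ** 2,
    name=lambda d: "circle",
)
square_vtable = ShapeVTable(
    area=lambda d: d["s"] ** 2,
    name=lambda d: "square",
)

def describe(shape: Shape) -> str:
    # A "virtual call": look the function up in the carried table.
    return f"{shape.vtable.name(shape.data)}: {shape.vtable.area(shape.data)}"

shapes = [Shape(circle_vtable, {"r": 1}), Shape(square_vtable, {"s": 3})]
print([describe(s) for s in shapes])
```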
In the code I’ve written I’ve used parametric polymorphism extensively but only needed runtime sub-typing a few times.
Almost all the benefits of runtime sub-typing can be found in compile-time sub-typing, a la ML functors.
Programming languages with virtual runtimes and JIT compilers are the future, statically compiled code will die or only be used for very low level code.
Static compilation is getting better and better, but I believe in the long run you won’t be able to beat a system that can observe the code that is executing, and recompile it over time, i.e. a JIT. Especially with multicore machines where a separate core can analyze and recompile code that needs to spin as fast as possible on one core.
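As a toy analogy (not how a real JIT works, just the adaptive idea): observe what actually executes, then swap in a specialised version once a call site looks hot:

```python
# Toy sketch of adaptive recompilation: count observations, then
# "recompile" by baking the observed constant into a fast path.
def make_adaptive_power(threshold=100):
    observed = {"calls": 0, "exponent": None, "specialised": None}

    def power(base, exponent):
        if observed["specialised"] and exponent == observed["exponent"]:
            return observed["specialised"](base)  # specialised fast path

        observed["calls"] += 1
        observed["exponent"] = exponent
        if observed["calls"] >= threshold:
            exp = exponent  # bake the observed exponent in
            observed["specialised"] = lambda b: b ** exp
        return base ** exponent

    return power

power = make_adaptive_power(threshold=3)
for _ in range(5):
    power(2, 10)     # after 3 observations, calls take the fast path
print(power(3, 10))  # 59049
```

A real JIT does this with machine code, type feedback, and deoptimisation guards, but the feedback loop is the same shape.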
I feel like a better title would be “What do you think about my approach to hiring?”
That, to me, is the focus of this article and I agree 100% with the premise of the author. Hiring is about determining ‘fit’. I agree. The question I feel is interesting here is “is it appropriate to ask questions intended to misdirect the interviewee in the hopes of exposing their personality in an indirect way?”.
I avoid and do not use, what appears to me to be, manipulative approaches to interviewing. I’m interested in other people’s experiences.
And, to follow up: is this the equivalent of the ‘golf balls on a plane’ (or whatever) question in the “social” domain, and does it not have the same questionable effectiveness?
A paid probation period of one month would be my response to hiring-tactics questions. But who has the time?
Alright, here I go (gulp):
While privacy concerns are real, fearing new technology for privacy implications will be the downfall of innovation.
See the comments on HN on Amazon’s Echo, or on any self-driving car technology, Google Glass, Wear, cloud storage, etc, and you’ll see the basis behind my point.
So innovation will stall until someone innovates a way to implement those things while preserving privacy.
In practice, I don’t see many places where innovation is stalling because of privacy concerns.
This has always sounded reasonable to me, but it must be controversial because I see counterexamples in the wild all the time:
If you’re complaining about a technology being the core problem with your app, there’s a good chance you’re not using the technology correctly or well. If that’s the case, rewriting your app to use a different technology will not help – the core problem is not understanding your tools, not the tool itself.
I disagree with several popular best practices:
Compatibility-based versioning, as used by many popular and older open source projects, is better. Version decimals should indicate compatibility breaking, rather than what type of change it is. A.B.C should be: A is strictly backwards-compatible (to the user: always upgrade), B is partly incompatible (to the user: carefully read the changes and decide whether to upgrade), C is a major rewrite and possibly breaks everything (to the user: not really an upgrade, basically a new lib under the same namespace, decide if you want to switch).
Semver (MAJOR.MINOR.PATCH), compared to the above scheme (A.B.C), gets rid of A, and adds a D decimal (bug fix, strictly compatible) which is—as far as the user is concerned—identical to C (minor feature change, strictly compatible). In my technical opinion, semver loses information over traditional versioning schemes based more tightly around compatibility signals.
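Read literally, the A.B.C scheme above maps to upgrade advice like this (a sketch, taking the most severe changed component when several change; the wording of the advice strings is my own):

```python
# Compatibility-based versioning as described above: which component
# changed tells the user what to do, most severe component first.
def upgrade_advice(old: str, new: str) -> str:
    a1, b1, c1 = (int(x) for x in old.split("."))
    a2, b2, c2 = (int(x) for x in new.split("."))
    if c1 != c2:
        return "effectively a new library: decide if you want to switch"
    if b1 != b2:
        return "partly incompatible: read the changes before upgrading"
    if a1 != a2:
        return "strictly backwards-compatible: always upgrade"
    return "same version"

print(upgrade_advice("1.2.3", "2.2.3"))
```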
REST and RESTful resource URLs only make sense/work for simple CRUD scenarios. Over time, RESTful APIs slowly deviate from the ideals as scenarios start to become more complex and side effects on ideally-idempotent operations become inevitable. Things like aggregate queries or ones which join across several resources become difficult to express, and typically REST slowly degrades into an RPC hybrid where resources are simply RPC function endpoints.
Further, “proper REST” dictates we must use HTTP verbs to communicate operation types. This runs into very real browser security constraints, which are largely disconnected from the underlying API mechanisms. People often deviate from this by creating override method variables so that you can send whatever HTTP verb and override it, but I feel like this just illustrates that it’s a bad idea to begin with. Why limit yourself to GET/POST/PUT/DELETE/PATCH? Why even try mapping your specific API to a completely unrelated protocol?
Also calling something RESTful is a bad idea to begin with as it invites tons of controversy on what it means.
As an alternative, I’m supportive of easy-to-read URLs for resources when it makes sense, and using HTTP verbs when the security/accessibility/caching implications make sense (e.g. disabling cross-site response reading for a resource by requiring POST) but not for “semantic” reasons.
I typically lean towards RPC-style APIs (e.g. /api?method=foo&arg1=value1).
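A minimal sketch of that RPC shape (the method names and dispatcher are made up for illustration):

```python
from urllib.parse import parse_qs, urlparse

# One path, a `method` parameter, everything else as named arguments.
METHODS = {
    "add_numbers": lambda args: str(int(args["x"]) + int(args["y"])),
    "echo": lambda args: args["text"],
}

def handle(url: str) -> str:
    query = parse_qs(urlparse(url).query)
    flat = {k: v[0] for k, v in query.items()}  # take first value of each param
    method = flat.pop("method")
    return METHODS[method](flat)

print(handle("/api?method=add_numbers&x=2&y=3"))  # "5"
```

The trade-off is explicit: you give up verb-based caching and semantics, but the endpoint names map one-to-one onto your actual operations.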
Possibly starting with this post, people switched the default branch of their GitHub repos from master to develop, and then treated master as always-stable.
I believe a superior approach is to make a release branch which is more descriptive of what it is—every tagged release is merged into release. master continues to be what it has always been, the active development branch. It’s more important for the release branch to be descriptive than for the development branch to be descriptive, as far as the end-user is concerned. Also this is more congruent with the rest of the DVCS culture.
Abstractions are evil. They hide information that the designer thought was unimportant. How arrogant!
I strongly disagree with you (though I guess that’s the point of the question). Badly used abstraction is awful, but you sometimes need a mental compression algorithm of sorts to be able to even think about what you’re working on. If it’s possible to keep the entire project in your head, then by all means do so, but people have brains with less space than a project like the Linux kernel takes up, and we need abstraction to even think coherently about things of that scale.
I do not think you disagree with me at all! You even said a key word ‘compression’. Abstractions are evil because they are a lossy compression. You can also have lossless compression.
The options are not No Compression vs Lossy Compression. There is a third option.
Abstractions, by definition, are NOT a lossless compression.
How do you differentiate abstractions and lossless compression? I think we may mean different things by abstraction here. Can you post code you would consider an abstraction and code you would consider lossless compression?
Imagine an image class like this:

```cpp
class image {
public:
    void resize(int, int);
    int width() const;
    int height() const;
    void make_blue();
    void make_red();
};
```

versus one like this:

```cpp
class image {
public:
    void resize(int, int);
    int width() const;
    int height() const;
    void make_blue();
    void make_red();
    /// do whatever pattern you want
    char * pixels();
};
```
The first one provides an abstraction of an image. The designer thought you would only ever want blue or red images and it hides the underlying representation.
The second one is not an abstraction but a lossless compression. The designer thought you may want blue and red images, but also didn’t hide the data. This is compression and this interface is infinitely more useful than the first. A compression provides a name to data and operations without throwing away the details should you need them.
While you can argue that the first one is just a bad abstraction and the second one is a good abstraction, I would argue that the first one is a bad abstraction and the second one is NOT an abstraction. Why? Because the second one does not throw out information.
Even though the designer of the second one was also shortsighted (he didn’t anticipate making green images), that won’t prevent the user from doing so.
Is the second one more dangerous than the first? HELL YES. But I think this is where Design by Contract can assist in regaining safety. As any function or method that uses the compression can define predicates that validate correct use.
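That idea can be sketched in Python terms: code that reaches past the interface to the raw data states its assumptions as executable predicates, so the extra freedom doesn’t silently become extra danger (the pixel-inverting function is invented for illustration):

```python
# Design by Contract over a raw pixel buffer: preconditions are the
# predicates that make direct data access safe to use.
def invert_pixels(pixels: bytearray, width: int, height: int) -> None:
    assert width > 0 and height > 0, "dimensions must be positive"
    assert len(pixels) == width * height, "buffer must match dimensions"
    for i in range(len(pixels)):
        pixels[i] = 255 - pixels[i]

buf = bytearray([0, 128, 255, 64])
invert_pixels(buf, 2, 2)
print(list(buf))  # [255, 127, 0, 191]
```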
To me that almost feels like an abstraction with a shortcut around it, which I fully support. Again, I don’t think we disagree except for semantics. I like abstractions, but I also like ways to circumvent abstractions if needed, and I think the difference in your two examples is that the first provides abstraction in a void, where the second one provides a way to avoid the given abstractions or even create your own.
I think the word abstraction is overused and confuses people. Abstractions throw away information and retain only the information necessary for a particular use. Calling my second example ALSO an abstraction isn’t useful because the second example is qualitatively different. You can call it “Abstraction with escape hatch”, but I just prefer to call it a compression.
By the way, I am very guilty of creating abstractions that are terrible. This isn’t easy.
I suppose you’d prefer to code on analog circuits then, not wanting to abstract details like discreteness of transistor states.
Imagine the kind of software you can write if you were able to control the discreteness and transistor states should you choose to. An example I can think of is an FPGA. While they don’t provide that level of control, they do provide you more control than a regular CPU and you can do amazing things with them.
If the designers of the transistor and discreteness could have provided you that control, they probably would have. Imagine programming software that utilizes analog circuits for REAL fuzzy logic or creating ternary systems on the fly.
If you can manipulate the very nature of matter with software, would you really argue that you should hide it behind a wall of abstraction?
I’m more than happy playing with that world, but I am also more than happy to, say, execute highly discrete, deterministic combinatorial algorithms. Since we truly live in the first world I’m happy for abstractions that allow me to pretend I live in the second.
And you said a key word ‘pretend’. Did you know some developers are not even aware of the first world??! They do not pretend!
I’m pretty alright with that so long as their combinatorial proofs are correct. I’m completely willing to judge the thing built atop the abstraction and the implementation of the abstraction itself separately.
On the other hand, if the abstraction is a poor place to stand and must be dismantled then the proofs atop it might all go away. This is why mathematics tries to build theories and models both—even if your models get invalidated in some way or another your theories still hold. Then again, if your theories have no models at all then they have no use and may even be inconsistent.
I guess the only concern here is that gates, boolean logic, and discrete states are all abstractions over just wrangling raw current!
Is it possible you are the same individual that programmed this game?
HAHA, my assembly is nowhere near that awesome!
“hard” is not the same as “evil”.
Adding large dependencies with many unnecessary features (e.g. using Rails for a small project) is widely seen as a bad idea (coupling to a framework, accidental complexity etc).
However, it’s usually talked about as adding cognitive load to the project. I think it adds cognitive load to the group maintaining the project, and only once per framework.
This matters quite a bit when one group is maintaining 5+ codebases; if they all use (e.g.) Rails, you only pay the cognitive load once; if each uses a framework more suited to the app, you pay the sum of the costs for each.
Do you view the comment on C as due to cultural reasons, or you actually think it’s the best systems language available? I’m asking because one of these doesn’t actually strike me as controversial, whereas the other is.
META: It’s both kind of ironic and amazing that I’m reading comments in a topic called “What’s Your Most Controversial Technical Opinion?” and wondering whether there was any other topic on Lobsters yet where I have agreed with so many things people said.
I’ll try again:
Haskell’s way of doing typeclasses is not only fundamentally flawed, but is the cause of one of the biggest issues when trying to improve Haskell’s mediocre module system.
All Haskell typeclass instances are potentially incoherent, and code expecting coherence is wrong and broken.
I don’t think that’s really a controversial opinion in the Haskell community—it’s almost just factual. Everyone agrees they’re flawed and mess up modules. Diamond dependencies cause obvious problems, orphans are often necessary and always terrible, everyone fights over the name space. Module proposals nigh universally fall flat because they’re stopped by typeclasses.
The counterpoint is that they’re capturing an interesting idea which nobody else has yet done better, though. SML modules make you pay for a certain kind of rule composability more than you maybe ought to while typeclasses make you pay perhaps quite a bit less than you should.
I’m still in love with modular type classes. Better still, I can remain blissfully in love with them until someone actually implements them and I realize I still don’t like the result.
You’ve said this so many times and yet many people have pointed out it’s wrong. Coherency can and does exist.
many people have pointed out it’s wrong
Many people pointed out that they didn’t like hearing it, but not a single one managed to show how it is wrong. I guess it’s pretty hard even for die-hard Haskell fans to argue with a six year old bug ticket with a proof of concept attached sitting in their own bug tracker.
Coherency can …
Well, if you say good bye to any useful module system, it can.
… and does exist.
No, it doesn’t. Yes, it might exist in some obscure, out-dated niche compiler, but certainly not in the compiler 99% of Haskell developers use.
Controversial opinion: I’ve never seen a useful module system.
The Haskell Language standard guarantees coherency. GHC violates the specification because of a bug.
OK, that is the first thing on this thread that I find truly controversial. Would love it if you expanded on it a bit.
Well, one wouldn’t have to try too hard to get something better than Haskell’s status quo.
Even something which would stop forcing developers to decide whether they want to

- install dependencies globally and potentially break unrelated software, or
- install dependencies into a sandbox, and keep compiling the same dependency over and over again for every single sandbox

would be a worthwhile improvement.
This would be less ridiculous if there was an existing viable alternative.
But if you have a look at Hackage, even if there was a Haskell compiler which didn’t ignore one of the most important guarantees of the language, it wouldn’t matter because too many libraries write in GHC lang instead of Haskell.
Until this changes, instance coherency in Haskell is just wishful (and extremely dangerous) thinking.
As I’ve pointed out to you many times, -fwarn-orphans -Werror ensures the bug does not happen. Yes, the bug still sucks but I still would encourage turning orphans into errors.
There is probably a reason why -fwarn-orphans -Werror has not been made the default at any point in time for the last six years.
Even if you would enable orphan checking for your own code, what percentage of packages on Hackage has been tested with these flags?
It would break lots of existing things. No reason to allow orphans in any new stuff.
People have lots of reasons why they would want to use orphans.
There is a reason why the “let’s deprecate and disallow orphan instances”-move never happened (and will probably never happen).
Just like there are enough people who think depending on non-existent guarantees is perfectly fine, there will be enough people who come up with reasons for orphan instances (although I can see more valid reasons to use orphan instances than to believe instance coherence exists).
Controversial opinion: there are no valid reasons for orphan instances.
Interesting experiment: Try to convince every author of a package on Hackage which uses orphans and make them change their library.
(Although I guess that doesn’t really capture the real picture, because most usages of orphans are by definition not in the same place as the class/type definitions.)
It is easy to dismiss this example as an implementation wart in GHC, and continue pretending that global uniqueness of instances holds. However, the problem with global uniqueness of instances is that they are inherently nonmodular: you might find yourself unable to compose two components because they accidentally defined the same type class instance, even though these instances are plumbed deep in the implementation details of the components.
install dependencies globally and potentially break unrelated software or
install dependencies into a sandbox, and keep compiling the same dependency over and over again for every single sandbox
I don’t agree we need a different module system. I use Nix to solve these problems.
In the world of OOP, domain-driven design (DDD) never got a fair shake.
It’s espoused by higher-end consultancies, but everyone tends to Just Use Rails because…Business. Or some other hand-wavey concern. It bothers me that we have vastly more powerful languages, runtimes, burn more CPU and RAM, and end up more coupled than we were even five years ago.
I would like to truly understand this more, but unfortunately I tend to see more proselytizing around it than actual advice or practice.
Yeah, I feel like I don’t have a good grasp on it. I liked most of what I’ve read on it, but haven’t been able to find much substantive discussion on it. It also doesn’t help that I still haven’t picked up Eric Evans' book.
It feels like it was seriously espoused within the .NET community, but later abandoned. I’d like to know why. From the outside, it seems a bit heavy, but I like some of the ideas involved. I think it came out at a time when people were trying to model less and less to get away from feeling enterprise-y.
Also, I’ve maintained a long-standing interest in ports/adapters-style architectures, so I’m particularly interested in how it compares to those.
“general-purpose programming languages x and y are just better for different purposes” is just easier to admit than “this thing I use is filled with overwhelmingly agreed upon anti-patterns”
Ok, I will play too. I have many.
How many more would you like?
Language interpreters are a waste of resources and custom bytecode and JIT are not satisfactory solutions to this problem.
Here it goes: Feature Specs (as in the automated test suite variant) are overrated and create more overhead than they prevent.
Especially for web apps where moving functionality around (in e.g. a Rails project) to different pages is nothing more than moving a partial; you can spend more time updating your test suite than actually making said changes.
Here be pitchforks.