I’m not so sure that tone has a lasting effect on the toxicity of code review – though I’ve never dealt with something as extreme as a vomit emoji in a code review so maybe my experience is skewed.
Every time I join a new team, or start on a new codebase, the reviews on that team seem to coalesce around a common tone. Sometimes that tone is brusque (“call this variable foo”), sometimes less so (“consider calling this variable foo, as it’s consistent with bar”). I’ve definitely noticed the difference when joining a team that was more brusque than not, but it’s something you can adjust to easily – so long as the tone doesn’t translate to real life.
What really sticks out in the long term are the other things the author mentions – “while you’re at it” type requests, style reviews that should really be caught by tooling, flat out ignoring reviews, etc.
Regarding the part about Java using the same reference for small integers, the author wrote,
even if the performance benefits are worth it
But are they really? Making things faster by breaking the == operator is cringe-inducing. Is there some place where this property is so useful that it justifies itself?
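For anyone who hasn’t run into it, the == breakage comes from the JLS-mandated cache of boxed Integers from -128 to 127 (the upper bound is tunable on HotSpot with -XX:AutoBoxCacheMax). A quick sketch of the inconsistency:

```java
public class IntegerCacheDemo {
    public static void main(String[] args) {
        Integer small1 = 127, small2 = 127; // boxed from the cached range
        Integer big1 = 128, big2 = 128;     // boxed outside the cache

        // Cached values share a reference, so == "works" by accident.
        System.out.println(small1 == small2);  // true
        // Outside the cache, each boxing allocates a fresh object.
        System.out.println(big1 == big2);      // false
        // equals() compares values and is correct in both cases.
        System.out.println(big1.equals(big2)); // true
    }
}
```

The same program silently changes behavior depending on the magnitude of the numbers, which is exactly the kind of thing that bites you in production.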
I mean, if you want to write something that’s really, really fast, you don’t usually reach for Java (fast as it is); you’d likely go for something like C.
Interning strings is something that the JVM supports, though I’ve never been able to find compelling studies that measure its usefulness. It’s also easy enough in Java that if I ever found a need, I would probably give interning a shot before doing a rewrite in C.
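For reference, a minimal sketch of what interning buys you: interned strings with equal contents collapse to one canonical reference in the JVM’s string pool, so equality becomes a pointer comparison.

```java
public class InternDemo {
    public static void main(String[] args) {
        // new String(...) always allocates, so these are distinct objects.
        String a = new String("hello");
        String b = new String("hello");
        System.out.println(a == b);      // false: different objects
        System.out.println(a.equals(b)); // true: same contents

        // intern() returns the canonical copy from the string pool,
        // so equal contents share one reference afterwards.
        System.out.println(a.intern() == b.intern()); // true
    }
}
```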
I could imagine it being worth it if you have an application that caches a lot of data with in-memory maps/lists and you need to make the list.contains/list.find/map.get methods fast. All those rely on Object.equals, so being able to speed up equality checking could have a real effect.
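To make that concrete (my own sketch, not from the article): the standard equals idiom already uses reference identity as a fast path, which is exactly the check that shared references for small integers let you win on.

```java
import java.util.Objects;

final class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true; // identity fast path: free when callers
                                    // happen to share references
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y; // field-by-field comparison otherwise
    }

    @Override
    public int hashCode() { return Objects.hash(x, y); }
}
```

list.contains and map.get call exactly this method per candidate element, so if most comparisons hit the identity branch, the scan gets cheaper.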
In my experience, annotations have been the cause of most of the runtime exceptions I hit. Don’t get me started on Guice + annotations…
That ignores the entire other half of the annotation world: compile-time-only annotations, which are useful either to javac itself or for common tasks like code generation for JSON encoding/decoding.
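As a minimal illustration (my example, not the author’s): a SOURCE-retention annotation exists only for the compiler and annotation processors, and is gone by the time the class loads, so it can never be a source of runtime surprises.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Discarded by javac after compilation -- visible to annotation
// processors, invisible to reflection at runtime.
@Retention(RetentionPolicy.SOURCE)
@interface GeneratedSketch {}

@GeneratedSketch
class Widget {
    public static void main(String[] args) {
        // No annotations survive to runtime.
        System.out.println(Widget.class.getAnnotations().length); // 0
    }
}
```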
The author touches on compile-time annotations briefly, mainly to say that they’re rarely used.
For my part, the only compile time annotations I’ve encountered in the wild were things like FreeBuilder/Immutables/etc. and Dagger. There’s an argument to be made that those are also local minima – especially FreeBuilder. Kotlin, Scala and Ceylon have special syntax which solves the problem of writing data classes quickly and correctly at the language level. I’ve found the annotation processing solutions to be a lesser replacement – mostly because of poor integration with all of the existing tooling.
Of course those use cases also make good use of annotations. In practice, though, I’ve found them to be significantly less common than runtime annotations.
This article compares Dropwizard to Spark, but I’ve never heard of them being used in the same way. Does anyone use Spark the way the author does? (i.e. routing, serialization, auth)
I actually have, but only in little proofs-of-concept when we looked at ditching Python for Java at work. It honestly didn’t even occur to me that you wouldn’t use Spark the same way as Dropwizard, and it takes only a couple lines in a build.gradle (or pom.xml, I assume) to package up Spark apps the same as Dropwizard. What were you thinking of as the “normal” way to use Spark?
I read this a bit too quickly ;)
I thought the author was talking about the other spark.
It’s interesting that figure 3 shows one Kafka consumer writing to the cache. This is indeed a good way of ensuring sequential consistency, because there will never be two concurrent writes to the cache. But if you can write to your database faster than you can consume messages and fan them out to caches, you either need to drop messages on the floor or throttle writes to the database.
You could solve this by partitioning the messages based on some application logic, which is a common idiom with Kafka. I’m not as familiar with how replication in MySQL/Postgres works – could you achieve something similar?
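A sketch of the key-based partitioning idea (hypothetical helper, not tied to any particular Kafka client): deriving the partition from an application-level key guarantees all messages for that key land on the same partition, so one consumer per partition preserves per-key ordering while the overall write load still fans out across consumers.

```java
public class KeyPartitioner {
    // Messages sharing a key always map to the same partition, so the
    // single consumer of that partition sees them in order.
    static int partitionFor(String key, int numPartitions) {
        // floorMod keeps the result non-negative even for negative hash codes
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        int partitions = 8;
        // Same key, same partition, every time:
        System.out.println(
            partitionFor("user-42", partitions) == partitionFor("user-42", partitions));
    }
}
```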
The largest (and most invisible) is the addition of an Abstract Syntax Tree (AST) — an intermediate representation of the code during compilation. With this in place, we are able to clean up some edge case inconsistencies, as well as pave the way for some amazing tooling in the future, such as using the AST to produce more performant opcodes.
How was PHP interpreted prior to this?
The parser would call directly into the interpreter. Compare this (5.6 ish) with this (6 ish).
Admittedly tangential comment: This used to be a common way of structuring compilers as well, though quite a long time ago. In the ‘60s there was an active debate between one-pass and multi-pass compilers, with the former translating parsed bits directly to machine code as soon as possible, and the latter constructing complete parse trees and then reducing those through one or more further passes to machine code. The one-pass compilers used less memory, and some researchers furthermore perceived them as an interesting challenge: it seemed like it should be possible to flatten multiple conceptual passes into one efficient physical pass over the input stream. Probably the most lasting legacy of this challenge was the invention of coroutines, which were first conceived of as a way to handle interleaved input contexts in a one-pass compiler.
Actually, to me it looks like both are the same code, down to the comma.