The recommended line length in Black is 88 characters.
I’ve participated in a few code standardization processes. By and large I’m an enormous fan: I read a lot of code and am helped by the consistency they promote. Python has particularly been kind to me here with pycodestyle and pylint.
The thing I’ve learned from them, though, is that I’m the last person on the planet (or at least at the three companies I’ve had a hand in standardizing) who still uses an 80-character terminal to code. It’s the one thing I have a (literal) hard line on. I’ve noticed the nature of the problem often doesn’t get through, and so I’m offered compromises that are nearly as bad as the original. I don’t think I’ve ever been offered “how about 88 characters,” but I have worked with folk who tell me they won’t tolerate anything less than 100+ but try to talk us down to somewhere between 81 and 99 characters.
The terminal is 80 characters wide.
The terminal is 80 characters wide.
I’m a fan of short lines, but this is a pretty bad argument. What is The Terminal? Even without running a GUI like X11, you can get higher resolutions in framebuffer or whatever it’s called.
IMO, the compelling argument for shorter lines is readability. I just can’t read long lines; I get lost and confused by the end. I’d rather read down than across.
I had it put to me that code review is significantly easier with an 80 character limit. My worst case is working on my laptop (in terms of screen size), and I get 84 characters across in vimdiff before ‘running out of screen’.
The terminal is a VT100, with support for an 80x24 display. The 80-character limit was present before then, however, via the 80-character punched card. Those had 12 lines rather than 24.
It is marvelous that we have a display technology backward compatible with the entire history of computing.
I’m not sure I get your argument, though. Because old tech had a width of 80 characters, we should now too? Old tech had a clock rate of 100 MHz and little storage, but I doubt you’d argue we should be using machines with those specs. And the truth is: almost nobody is programming on a VT100 terminal today. So are you making a luddite argument for line length, or is there an actual benefit to 80 characters?
Long story short, lines are limited to 80 characters because Roman roads were designed for chariots drawn by two horses, side-by-side.
I’m saying that if you have to choose a line length, in a standard, that 80 characters is the choice we already made. As a consequence of that choice, 88 characters is two lines. That if your choice is 80 characters or 88 characters, 80 characters has a superior and sufficient claim to be a standard.
PEP 8: Limit all lines to a maximum of 79 characters.
OpenBSD mailing list netiquette: Plain text, 72 characters per line
It is true that terminal emulators will emulate a display larger than 80 characters. I’m a fan of the -S flag to less when I’d like my input file to tell me how wide my display should be. I also routinely work with two terminals side-by-side on a graphical display. Both modes of working are fantastic. The latter is enabled by a standard line length of 80 columns. When my terminal has an 81st column, I’m using that column. It’s not available for use by the standard.
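The arithmetic behind an 88-character standard on an 80-column display is easy to sketch. Here is a minimal shell illustration, using fold to stand in for a terminal’s line wrapping (less -S chops instead of wrapping, which is a different trade-off):

```shell
# Build an 88-character line (the Black default).
line=$(printf 'x%.0s' $(seq 1 88))
echo "${#line}"                      # 88

# A wrapping 80-column display renders it as two rows:
echo "$line" | fold -w 80 | wc -l    # 2

# less -S instead chops long lines at the right edge of the screen,
# keeping one logical line per row (interactive, so just shown here):
#   less -S file.txt
```

So any limit between 81 and 99 characters costs a second row (or a chopped tail) on an 80-column display, which is the crux of the disagreement above.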
So is there any way to standardize on lines longer than 80 characters or is this it for eternity/the rest of our natural lives? What would be a compelling argument for longer lines, for you?
The other area where the question of line length comes up is typography. There, line length is measured in words rather than characters, owing to the proportional spacing of characters in a font. If I were dealing with source code in a typographic context, the conventions of that field would also apply: possibly to the exclusion of monospace font conventions, possibly in combination with them, depending on how the code is typeset.
You’re missing my point, I’m not saying we should change it just to change it, I’m asking why a decision made 50 years ago still applies today. If long lines are uncomfortable to read, that is at least an argument that would still apply today and not simply the momentum of history.
After lobste.rs talked about A Fire Upon the Deep and A Deepness in the Sky, I just finished reading them. Developments like these make me think of the tech in A Deepness in the Sky, thousands of years old and completely irreplaceable. On one hand we lambaste the JS world for rewriting itself every other week, but at the same time we keep Java alive and kicking. I am one of those people who thinks the JVM is long past its prime (every heavy use of Java is running it on hardware they have defined ahead of time and doesn’t benefit from anything the JVM gives them), but I guess we’ll be stuck with Java for a long time as it keeps accreting new functionality to keep chugging along. IBM just posted its first quarter of growth in five years. Why? Thanks to the mainframe portion of the business.
I am one of those people that thinks the JVM is long past its prime
IIRC, the JVM is an incredibly fast VM, with hundreds of man-years put into making it perform well.
That alone ensures it will stick around for quite awhile.
It’s also incredibly complicated. Language implementations like Go and OCaml are on par with the JVM in most workloads people care about, at a fraction of the complication. The JVM exists because of “write once, run everywhere,” but that vision never panned out.
Got a cite for that? I don’t doubt what you say, but I think a lot of people formed impressions about Java that are of varying levels of accuracy.
The fact that companies like Square, Netflix and Amazon use Java extensively even for greenfield projects should be an indicator that Java is far from the tire fire some people make it out to be.
Sure, it’s verbose, and there are parts of its ecosystem that are excessively complicated, but there are newer choices that have learned from the past and eschew that kind of complication in favor of a much leaner style.
I’m not trying to get anyone to love or use Java who doesn’t want to, but I’d encourage people to challenge their long standing impressions and come to a better understanding of what the language is good at and which use cases might call for its use.
What are you asking for a citation for, exactly?
I’m not saying Java is a tire fire, I am saying its runtime no longer suits the case it was designed for. People know what hardware and processor they are running their programs on.
Language implementations like Go and OCaml are on par with the JVM in most workloads people care about at a fraction of the complication.
That.
Ah,
One source:
http://benchmarksgame.alioth.debian.org/u64q/compare.php?lang=java&lang2=go
The JVM beats Go for 3 out of 10 problems (and by a pretty significant margin).
For OCaml, it’s 4 out of 10 beaten by a significant margin, and the other numbers are pretty comparable:
http://benchmarksgame.alioth.debian.org/u64q/compare.php?lang=java&lang2=ocaml
The most common use case I see today is server-side, which is almost entirely I/O bound. Erlang compares well to Java in this use case, if only because it utilizes I/O so well.
I don’t have any studies for you other than the fact that everything is a service now.
For the complexity aspect of my claim, I think that is self-evident if you’ve looked at the code of the various runtimes.
So, thanks for some good food for thought. I’ll leave you with this: every problem is different. There are still large swaths of problem space that the Go ecosystem has barely even nibbled at which have very rich support in the Java world.
That, combined with newer frameworks that are the polar opposite of some of the useless complexity we’ve all battled in the past (take a look at Play for an example of what I mean), can make Java a really great choice for some problem domains.
I just think we need to be careful about making overly general claims, and open minded to the fact that there are huge swaths of the industry still coding in Java for a reason.
I think some aspects of my point are being conflated a bit, though. I’m not making a statement about Java-the-language, I’m making a statement about the runtime. My point about OCaml/Go, which I didn’t make very well, is really that these are languages with much simpler runtimes but still quite comparable performance, combined with my claim that the problem the complex runtime is solving is not a problem the vast majority of Java users have.
I just think we need to be careful about making overly general claims, and open minded to the fact that there are huge swaths of the industry still coding in Java for a reason.
If you reread my first comment, I think you’ll see I fully acknowledge that. Mainframes are still a money-making business (would you advocate one use a mainframe, though?) It’s a fact that people are running lots of workloads on the JVM. I even work for a young hip company that uses the JVM. But I’d also be cautious of reading too much into that, IME, the “reason” people do it is often not connected to a technological merit.
Mainframes are still a money-making business (would you advocate one use a mainframe, though?)
That depends. What do you mean by ‘mainframe’? There are scads of businesses running on the descendants of mainframes, like the IBM Power systems, to this day, running ageless workloads using tools like COBOL and RPG, because those tools suit the use case.
Sure, there are tons of people out there supporting legacy hardware and software nobody in their right mind would choose for a greenfield project, but that’s a different problem.
It’s a fact that people are running lots of workloads on the JVM. I even work for a young hip company that uses the JVM. But I’d also be cautious of reading too much into that, IME, the “reason” people do it is often not connected to a technological merit.
Technical merit has many variables attached. If you’re really talking strictly about runtime size, then you may have a point. I’d argue that for many (most?) people, runtime size is pretty much meaningless.
You’ve successfully proven a couple of assertions like “Go runtimes are smaller than Java’s” and even “under certain circumstances, Go can outperform Java” but I respectfully disagree with the idea that choosing Java might not be the right thing based on technical merit.
The JVM exists to be a Java bytecode interpreter. It’s counterproductive to assign any more labels to it – write once, run anywhere is hardly its main focus these days.
I would argue, based on what most Java software is built for, that the JVM exists to be the best abstract bytecode interpreter there is. It’s not particularly great for small-scale algorithmic benchmarks like the Alioth benchmarks game, but where it shines is long-running processes. Servers.
The TechEmpower benchmarks demonstrate this. JVM languages occupy a significant portion of the top 10 in every segment.
Comparing the JVM to OCaml/Go runtimes is not fair. The JVM is a much more complicated beast, given that it supports some very advanced features, like dynamic class loading, runtime bytecode generation, and a speculative, deoptimizing JIT.
And the new Graal compiler is really cool.
And HotSpot is just one implementation. There are several enterprise-grade JVMs out there that include crazy things like real-time support (PTC Perc, JamaicaVM), AOT compilation (Excelsior JET), and native execution (Imsys IM3910).
I think your citation of the expiry of the write-once, run-anywhere paradigm is anecdotal. I develop on OS X and run my .jars on Linux and Solaris 11.
As I said, the comparison is not fair. The JVM has about 25 years of engineering behind it. For that reason alone, it is extremely unwise to downplay it as outdated.
I don’t really understand the core of your response. Part of my claim is that the JVM is a big complicated beast, and that’s not a good thing. And your response is “It’s not fair to compare it to <X, Y> because the JVM is a big complicated beast”. How is one to argue that being a big complicated beast is not a positive thing?
Go and OCaml are on par with the JVM in most workloads
That’s a huge benefit of the JVM right there. Most developers and their managers have absolutely no idea what their workloads will be 2 years down the road.
Using the JVM obviates the risk of coming back to a system and having to significantly re-engineer it due to Go’s/OCaml’s runtime starting to choke as the amount of data grows:
Service needed 1GB back then, now it requires 10GB.
Using lesser runtimes is basically a bet that your application will never experience an increase in traffic.
Your claim that the JVM just magically scales up to any workload does not match my experience. I see software rewritten on the JVM as much as in any other language. Perhaps you have some specific experience in mind that you could share. Maybe you’re talking about something like Azul? Sure, I’ll grant you that. But in the microservice world, those situations are few and far between. To be clear, I am not saying that some people aren’t just buying bigger hardware to run their JVM programs, I am saying that use case is dwindling, IME.
Well, you said: “Language implementations like Go and Ocaml are on-par with the JVM in most workloads people care about”.
I pointed out that the Alioth benchmarks game is not “most workloads”. I gave the TechEmpower benchmarks as a more relevant benchmark environment (web applications, since that’s what most people do). These benchmarks demonstrate that the JVM is more performant than the languages you mentioned.
Where you are correct is that the JVM is a complicated beast. I do not disagree there. But it’s a performant and sophisticated one, that is definitely not past its prime.
I am one of those people who thinks the JVM is long past its prime (every heavy use of Java is running it on hardware they have defined ahead of time and doesn’t benefit from anything the JVM gives them).
Could you elaborate on this? I find this statement quite confusing.
The point of the JVM is a portable runtime so you can compile your program once and run it anywhere. However, every company I have worked for deploys their program to one platform, running one OS, in a very well-defined environment. The value of the JVM is limited. “What about the JIT?” one might say, but IME, the JIT offers no value over AOT for modern workloads and it’s significantly more complicated.
I do run across projects that make use of the JVM’s non-platform-specific binaries fairly regularly, but for forward compatibility in case of a future platform migration, rather than cross-platform portability of the style where you need to deploy to multiple platforms simultaneously (where, yes, only one platform is usually targeted). It’s not uncommon to find some random ancient .jar file in the libs/ directory, and for the development team to assume that kind of thing is going to keep working forever even if the project migrates to a different platform; there may not even be source, if it was a licensed commercial library. In that respect it has some of the same uses in enterprise as mainframe-style binaries, which also typically compile to some kind of bytecode to ensure binaries will keep running without a recompile, even across major platform updates.
I’m not sure what part of our industry you’re in, but I think those use cases are decreasing over time. I hear horror stories about banks that depend on a .jar just sitting around without source code, but I’m not sure that is motivating enough for why the rest of us have to live with a massively complex runtime.
I think that was the point once, but I don’t think that has been Java’s primary thrust for quite a while. That paradigm made much more sense when Java was aiming for the browser and the desktop, and today those use cases are not the language’s primary focus.
My point is not that the use cases for the language are static, but that they have changed and we are still stuck with this hugely complex runtime for a use case that isn’t exercised all that much (IME).
Gladly:
See https://lobste.rs/s/hsdcqo/repo_upon_deep#c_vles2i
Specifically this comment from @akkartik : https://lobste.rs/s/hsdcqo/repo_upon_deep#c_y1nfhu
I think there’s a great deal of truth to that article, in part because I tend to feel that there hasn’t been a meaningful shift in technology since the 70s, when the early iterations of what became the Internet were laid down. The past three decades have been the story of the democratization of technology, not really the invention of new technology. We all carry a supercomputer in our pockets that can talk to anybody else’s supercomputer, but all the pieces of that were already there. We just had to make them small and cheap enough that everyone could have one.
In a way, everybody having a “supercomputer” in their pocket hasn’t played out very well, either, and almost demonstrates the anti-intellectualism somebody else mentioned.
A thing I’ve noticed reading some early computer books (like this one by one of the inventors of BASIC) is that many early computer pioneers expected computer programming (and other computer skills) to grow as computers became more ubiquitous. A person wouldn’t just have a computer, but they’d know how to program it to do simple tasks.
I’d love it if everyone knew some basic bash or Python. Even just basic formulas in Excel can teach you something about applying a combination of mathematics, logic, and programming to solve a problem. Programming in general is pretty cool, and many people don’t even know what they are missing out on while they are consumers checking their Facebook stream for 2 hours each night.
There’s a huge amount of new technology coming out of the bio sciences. Genome editing and sequencing are good places to look.
Improving our e2e testing on an electron application. The application is a fairly complex Angular application (used to control a DNA sequencer). I’ve written jsonifier, which allows us to (I think elegantly) construct JSON objects for testing. Early stages though.
Working on a tool to help manage project dependencies, particularly to coordinate branch checkouts on our development machines. I’m sick of working on a branch, checking out one of our backend services, and hitting a runtime error because one of our shared libraries is on the wrong branch.
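A tool like that can start as little more than a loop over the sibling checkouts, reporting each one’s current branch so mismatches are visible before anything runs. A minimal sketch (the repo names here are hypothetical, not the actual project layout):

```shell
#!/bin/sh
# Hypothetical repo list -- substitute your own checkouts.
repos="backend-service shared-lib frontend"

for repo in $repos; do
  if [ -d "$repo/.git" ]; then
    # Ask git which branch this working tree has checked out.
    branch=$(git -C "$repo" rev-parse --abbrev-ref HEAD)
    echo "$repo: $branch"
  else
    echo "$repo: not a git checkout" >&2
  fi
done
```

Comparing the reported branches (or checking each against an expected branch passed as an argument) is enough to catch the wrong-branch runtime error before it happens.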