University taught me one lesson the value of which I didn’t realize until later.
It was the ability to put up with the indeterminate and arbitrary.
Exams. Essays. Deadlines. Waking up. Going to sleep. Revising. Writing a thesis. Completing a course module. Completing a minor subject.
Life is many things, but it is sometimes about tolerating things. Random stupidity. Rules that make no sense, that seem to just be, and you can’t do anything about it.
To me, university was like a microcosm of the arbitrary and insane, a boot camp for the real world that, just like university, makes no sense at times.
Of course it was a vault of knowledge like no other. The depth offered by its courses took me beyond imagination. While computer science is not rocket surgery, the scholarly methods I learnt I still cherish to this day. When I lacked motivation, the school gave me a deadline. When I was out of my depth, it gave help.
I know I could have taught myself most of it, but it would have been under my direction, and I know for certain that university educators have a better sense of direction than I. I would have, most likely, studied myself into a corner.
I see the author begrudge university for the same abject senselessness in its rules and values. To me, that apparent senselessness, alongside the possibility to learn so many things, is priceless.
Though they might not have realized it just yet, I think they too have learnt this lesson.
It makes me sad to see what was once largely a craft become what is essentially a commodity. Then again, I suppose it’s a pipe dream to expect quality to prevail in such a fast-growing industry.
This sums up my disillusionment.
When I was young, I dreamed of building beautiful cathedrals of software. But if I pay too much attention to tech, it can feel like everyone obsesses over building the crappiest backyard sheds to power barely-thought-out predatory business models. And I’m supposed to be excited about the narrow possibility of accruing disproportionate financial gains.
I view the actual craft of programming as almost orthogonal to tech itself. Tech headlines are so preoccupied with what other people are doing: who is buying whom, how many GitHub stars does this have, what OSS product should we be obsessed with from $MEGACORP, how much do you really love JavaScript, etc. I don’t really care about that stuff, that’s celebrity gossip at best. As a result, I don’t pay much attention to tech. The orange website is permanently blocked in my hosts file, I’d block it at the router level if I could.
Since I have a family to support, I’ll continue to do great work (and get paid decently!) for something I generally like. However, I’ve had to accept that so many devs and non-devs want to make a commodity of something I value more as a craft, and just sort of let go of the idea that it could be an industry driven by craft more than commodity. FWIW, the more we try to commoditize development, the worse everything seems to get; e.g. having a near-fully declarative UI has not fixed the difficulty of creating reliable UIs. Thus, there is still a high skill floor and ceiling to programming, and I’ll likely always be able to find work.
My own future projects will probably be more art pieces than products intended for end users, because devs only seem to adopt whatever is pushed to them by those with massive marketing budgets.
It’s not surprising since software is a commodity these days. I suppose it will become similar to automobile mechanics: it requires training and apprenticeship, but is not extremely difficult (compared to say, college level STEM), and is a necessary profession, as long as there are cars.
The corollary is that while it may no longer be that unique to be a software engineer, if you work in a prestigious position you could be developing something really interesting that could one day be used by millions of other engineers.
For those that have tried to wipe the piece of salad from their phone screen: that is an image of a fern of some sort below the menu, not food on your screen.
When I first read the title, I thought it was going to be more of a beef than the chronicle it turned out to be. In any case, it surprises me that after ten years of using modal editing he actually says that:
There’s a steep learning curve in Vim and seeing all those modern IDEs become better at understanding the user’s intent, editing text became way easier and faster in general.
I did not find vim’s learning curve that steep: it can be painful at first, but you are probably fine by the second week, and productive after a single month. And even if an IDE is easier at first, I have never seen anyone work faster in PyCharm than someone in vim, for example.
Being productive after a single month of using Vim? That might be true. But how productive? After 3 years of using Vim (ime), I think I’m nowhere near as productive as I will be after perhaps 7 more years of using it. It’s not that Vim has a steep learning curve, but rather that it offers so much that even with 10 years of usage you do not fully understand its power. And that is what the author is talking about.
Absolutely, after all practice makes perfect, especially in something like vim where muscle memory is key. What I meant when I said you can be productive in a month is that you can actually use it in your workflow: in my experience, after a month of using Emacs you are probably still overwhelmed and cannot fully integrate it into your workflow (imho it has a much steeper learning curve).
Then again you can always have vim-like modal editing in PyCharm, and be twice as productive!
Oh c’mon that is obviously cheating in this scenario.
(/s, but I meant vanilla IDE shortcuts like OP for comparison!)
I find this… weird. Docker is a packaging mechanism, not an orchestration system. You need an actual orchestration system on top of it to work reliably. The author, coming from the JVM world, knows that you can’t just scp product-catalog.jar production3:/apps/pc/ and then expect stability from java -jar /apps/pc/product-catalog.jar … application servers that supervise and orchestrate such systems have existed for decades.
Or did I misunderstand the article? Is he arguing that Docker is a bad packaging mechanism? I thought he was arguing that docker run --restart=blahlbah my-application -p 123:123 ... is not a reliable way to run applications in production. If that is what he is saying, I agree with him.
But I thought it was fairly obvious that docker run isn’t, and hasn’t ever been, the only thing you need to run applications in production. On its own it’s nowhere near stable enough to be practical or reliable. Maybe Docker (the company) likes to pretend it is, but the way I see it, you always have to bolt things like k8s/marathon/nomad on top of it.
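At the very small end of that spectrum, here is a minimal sketch of what “bolting something on top” can look like; the unit, container, and image names are hypothetical, and real orchestrators like k8s do far more than this, but even a plain systemd unit gives docker run a supervisor with a restart policy and ordering:

```ini
# Hypothetical systemd unit: supervision bolted on top of `docker run`.
[Unit]
Description=product-catalog container
Requires=docker.service
After=docker.service

[Service]
# Remove any stale container before starting (the leading '-' ignores errors).
ExecStartPre=-/usr/bin/docker rm -f product-catalog
ExecStart=/usr/bin/docker run --rm --name product-catalog -p 123:123 my-application
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```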
I have tried several approaches to modelling errors with effect types in Scala, and they all stink a little bit.
The first one is the one the Bifunctor IO is a counterpoint to: using plain IO[A], which is a MonadError[IO, Throwable]. To represent my error states I use sum types that at the top extend java.lang.Exception. This is practical because, if I’m writing a web server, I can check whether it was a known exception (like BusinessLogicRejection) or some other error. Most if not all of these ...Rejection types are recoverable and non-fatal, and produce a 4xx HTTP code. Any other Exception is most likely a 5xx, or occasionally a 4xx, error.
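Roughly, the kind of hierarchy I mean looks like this (a hypothetical sketch with made-up names, not production code):

```scala
// Hypothetical sketch of the "open" error model: a sum type rooted in
// Exception, so plain IO[A] can raise and catch its cases like any Throwable.
object ErrorModel {
  sealed abstract class Rejection(msg: String) extends Exception(msg)
  final case class BusinessLogicRejection(detail: String) extends Rejection(detail)
  case object IdempotencyRejection extends Rejection("duplicate request")

  // At the HTTP boundary, known rejections map to 4xx, everything else to 5xx.
  def statusFor(e: Throwable): Int = e match {
    case _: Rejection => 400
    case _            => 500
  }
}
```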
This approach stinks because I have to have an “open” model of errors: by extending Exception I have to deal with all Exceptions and distinguish the ones in my error hierarchy from other exceptions. On the other hand, this model is really, really easy to use, since I can either IO.raiseError(IdempotencyBusinessBlahBlahRejection) or call some arcane JVM crap and that error gets handled seamlessly. But, at heart, it’s dynamically typed. Dynamically typed errors give me sudden flashbacks, the stuff of nightmares where everything is on fire and you’re so, so alone; I would probably sleep better if my errors were “closed”, in the sense that everything I know of is either recoverable, represented by my custom error ADT, or fatal, like ...Error in JVM lingo.
The second approach is to have IO[Either[E, A]], where E is the recoverable error (I often use the word rejection for these), but this requires monad transformers, which are cumbersome and carry an inherent performance hit because they don’t work nicely on the JVM. So while this is completely typed, it’s annoying to use, and slow. It stinks too!
So the bifunctor IO essentially solves this problem by merging these two. On one hand, I am forced to have a “closed” error model, but I don’t have to use monad transformers. Woot!
Time will tell whether cats adopts this approach or continues with the current Throwable approach. The cats-effect community seems divided on this: exhibit A and exhibit B. It is certain that Scalaz 8 will have bifunctor IO, but that library isn’t released yet. Until then, this will be very interesting to watch!
Thanks @jdegoes for all your work in improving the functional programming experience in Scalaz!
LastPass; have used it since forever. Works well enough for a free service. I use it with MFA and change my master password every year, and have had no security troubles ever. It’s easy to use and it integrates seamlessly with all browsers.
Those little Zotac boxen are wonderful–I’ve just had no luck with the bluetooth support on Debian for them. >:(
Bluetooth doesn’t work on OpenBSD anyway. ;)
:P
I happen to have a bunch of little bluetooth jam-box speakers I picked up for super cheap, as well as various exercise gear that all claims to be bluetooth compatible. I have the dream of being able to get everything talking together. :(
Wireless headphones rule. I can never go back. I frequently stand up and walk around while working, and keeping my headphones on throughout has been heavenly.
For anyone looking to get into wireless headphones, I highly recommend the Sony MDR-1000X. Top notch sound quality, noise cancelling, 20 hour battery life, compact carrying case, optional 3.5mm input for non-Bluetooth devices, and you can buy manufacturer refurbished on eBay for $200. That’s what I did, my set came indistinguishable from new. Same experience from several of my coworkers who tried mine and bought their own.
That’s a great price for quality headphones. I bought my Audio-Technica ATH-M50 for $150, and for $50 more my 1000X beats the M50 in comfort and sound quality (with noise cancelling). The noise cancelling alone is worth $50, even if you never use them wirelessly. Truly phenomenal product.
I’m not a fan of wireless anything tbh (except wifi). I’ve always found it isn’t worth the inconvenience. For most peripherals (e.g. mouse, keyboard, headphones), I only ever use them within 3 ft of my desk anyway. The occasional interference doesn’t help, and the batteries always seem to fail at the worst times.
With wired headphones you can swap your amp whenever you need to, and you use a standard connector with extremely wide support (unless you’re using a newer Apple device). I try to avoid bluetooth in general because of its history of security problems.
Oh wait, you’re right, sometimes I use a 2G phone! That counts. I don’t use laptops these days, though.
They’re something I always bring up when OpenBSD fans make disingenuous remarks about the relevance of wireless technology in general. I get it: OpenBSD devs weren’t satisfied with their implementation of Bluetooth, so they axed it out of security and sanity concerns. I just find the attitude of “nobody needs Bluetooth” rather annoying. It is actually preventing me from seriously considering OpenBSD as a desktop OS. Why? Because wireless headphones are goddamn amazing.
Perhaps you could use a headphone jack to Bluetooth transmitter device? They look like they’re around £15 and seem to have good reviews.
Personally I listen to music ‘on’ my computer by keeping my AirPods connected to my iPhone and using Spotify on the laptop, remotely controlling Spotify on the phone. This works really well, rather surprisingly.
Antoine, please excuse my trolling. I’m sincerely sorry. Wireless headphones are amazingly convenient, that’s true. OpenBSD doesn’t support Bluetooth, that’s also true. We may not like the combination of those facts, of course.
I really like all the core features of OpenBSD: it’s simple, well documented, consistent, reliable, has sane defaults, etc. Obviously an OS can’t do everything and stay as simple as it is. We all know that the project’s resources are extremely limited.
What can we do about it? Contribute patches, sponsor the project, help with testing, etc. That’s the way it works for OpenBSD. A pretty fair and straightforward way, I’d say.
We can always (and should) use multiple systems for their best parts.
I’m not an OS developer… yet. :) We’d better ask an active developer. For example, Bryan Steele.
For context: https://mobile.twitter.com/canadianbryan/status/984785986780585985
Not a language, but a language feature: in Elixir, there’s a capture operator & that wraps an expression in an anonymous function and can also be used to refer to the nth argument of that function. For example:
add_one = fn x -> x + 1 end
is replaced by
add_one = &(&1 + 1)
This helps avoid naming arguments unnecessarily.
This is one of the features inspired by Clojure. In Clojure, #(foo %) is short for (fn [x] (foo x))
There’s also the pipe operator |> which passes the result of the expression on its left as the first argument to the function on its right: for example, "hello" |> String.upcase() is equivalent to String.upcase("hello").
Scala also has _, which in _ + 1 is an alias for x => x + 1, or in the case of f(_), is an alias for x => f(x). Oleg Kiselyov has an interesting take on it.
I’m not that familiar with Elixir (having only done a few basic distributed algorithms in it), but this feature has piqued my interest in the language even further; thanks for the handy tip!
I get the author’s point about the Z component being broken. If the library behaves incorrectly and a dependent program relies on the incorrect behavior, then once the incorrect behavior is fixed in the library, the program will stop working, even though the library is now working correctly!
I think semver is not able to solve this issue, but it can be mitigated: thorough testing and quality analysis before a 1.0.0 release is made, and careful review of anything that comes afterward.
If strictly adhering to SemVer, wouldn’t the correct approach be to change the default behaviour, while still providing a fallback for the old incorrect behaviour? You could then provide a deprecation notice and actually remove the old incorrect behaviour with the next major version.
I think the problem is that libraries rarely do this (especially for “trivial” fixes) because it’s a PITA. But that’s not really SemVer’s fault.
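To make the approach concrete, here is a hypothetical Java sketch (all names illustrative): the fix becomes the default, the old incorrect behaviour stays reachable behind a deprecated entry point, and it gets deleted at the next major version.

```java
// Hypothetical library code sketching the fix-plus-fallback approach.
class Normalizer {
    /** Fixed behaviour: strips all Unicode whitespace. */
    static String normalize(String s) {
        return s.strip();
    }

    /**
     * The old, incorrect behaviour, kept so dependents can opt back in.
     * @deprecated scheduled for removal in the next major version
     */
    @Deprecated
    static String normalizeLegacy(String s) {
        return s.trim(); // bug: misses non-ASCII whitespace such as U+2000
    }
}
```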
But that doesn’t solve the problem: dependents upgrade to Z+1 and their stuff breaks, which is expressly not what should happen under semver. Semver in this case tells you to bump the major version. I don’t mind; it works and it satisfies the semver specification. I don’t have a problem with stupidly high major versions, since the numbers are meaningless anyway; only the differences between them are meaningful. Fundamentally, going from 98 to 101 is the same as going from major version 3 to 6.
Yeah, I think we’re on the same page. Either you figure out a way to fix the bug in a manner that’s backwards compatible, or you bump the major version. In practice people rarely do this for Z level fixes, but that’s more of a problem with how people interpret SemVer than with the philosophy itself.
I don’t think the analogy holds either. An unattended garden will most likely die or become something different.
But software doesn’t rot. It can run forever. Last week I was contacted by my past employer about a small server I wrote in Perl that hooked into the employer’s AD. They told me they had shut it down, since it hadn’t been used for five years and the server it ran on was being retired in favour of a VPS.
So I logged in to the machine, and sure enough, the last time I (or anyone) had modified the init script that kept it running was in March 2007, a few months before I left that employer. So it had been running for 11 years. It could have run for another 13, 26 or even 100 years, had it initially been put on some virtual machine.
If you want to check out a practical gradually-typed language, I’ve been using Typed Racket.
It’s very convenient to use untyped code early on, when the design of the program is unclear (or when porting code from a different language), and to switch individual modules to typed code later to reduce bugs.
Another great gradually typed language is Perl6. It has a Cool type, a value that is simultaneously a string and a number, which I think is pretty… cool!
Based on reading https://docs.perl6.org/type/Cool, kinda? Although it also looks to me as if this is at once broader than what Perl 5 does (e.g. 123.substr(1, 2), or how Array is also a Cool type) and also a bit more formal, typing-wise, since each of those invocations makes clear that it needs a Cool in its Numeric or String form, for example.
That makes sense that it changed. perl5 is not so.. structured. But this stuff worked:
"4" + "6.2"
$q=42; print "foo$q"
print "foo" + $q
It makes things like numeric fields in HTML forms very easy (if $form{"age"} <= 16), but the subtle bugs you get…
Anyway. That was perl5. The perl6 solution seems to make things much more explicit.
Stanza is another interesting language that is designed from the start to be gradually typed.
Typed Racket is indeed an awesome example. I believe TypeScript would also qualify very well here (as might Flow; I’m not as familiar with it). This also reminds me of Dylan of yore, too: https://en.wikipedia.org/wiki/Dylan_(programming_language)
Yes, Typed Racket is gradual typing, but, for example, the current version of Typed Clojure is not. The premise, to simplify a little, is that gradual typing must support interoperating with dynamically typed code.
Zach Tellman’s On Abstraction
Leiningen for Clojure once again defaults to the latest version.
Leiningen doesn’t default to any latest version as far as I know. Leiningen requires an explicit version for each dependency.
Versioning/pinning is not only about having an API-compliant library though, it’s also about being sure that you can build the exact same version of your program later on. Hyrum’s Law states that any code change may effectively be a breaking one for your consumers. For example:
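For example, a hypothetical Java consumer that couples itself to unspecified behaviour, here the exact wording of an exception message, which no version number contract protects:

```java
// Hypothetical consumer code illustrating Hyrum's Law: it "works", but it
// depends on message text the JDK never promised to keep stable.
class MessageSniffer {
    static boolean isParseFailure(String input) {
        try {
            Integer.parseInt(input);
            return false;
        } catch (NumberFormatException e) {
            // Brittle: silently breaks if this message is ever reworded,
            // even though no documented API changed.
            return e.getMessage().contains("For input string");
        }
    }
}
```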
Of course, pinning is not a panacea: we usually want to apply security fixes and bugfixes immediately. But for the most part, there’s no way we can know a priori whether new releases will be backwards compatible with our software. Pinning gives you the option to vet dependency updates and defer them if they require changes to your system.
1: Unless you use version ranges or dependencies that use them. But that happens so infrequently and is so strongly advised against that I don’t think I’ve ever experienced it in the wild.
Hyrum’s Law
FYI, Hyrum finally made http://www.hyrumslaw.com/ with the full observation. Useful for linking. :)
Hmm, perhaps I misunderstood the doc I read. I’m having trouble finding it at the moment. I’m not a Clojure user. Could you point me at a good link? Do library users always have to provide some sort of version predicate for each dependency?
Your point about reproducing builds is a good one, but it can coexist with my proposal. Imagine a parallel universe where Bundler works just like it does here and maintains a Gemfile.lock recording precise versions in use for all dependencies, but we’ve just all been consistently including major version in gem names and not foisting incompatibilities on our users. Push security fixes and bugfixes, pull API changes.
Edit: based on other comments I think I’ve failed to articulate that I am concerned with the upgrade process rather than the deployment process. Version numbers in Gemfile.lock are totally fine. Version numbers in Gemfile are a smell.
Oh, yes, sorry for not being clear: I strongly agree that version “numbers” might as well be serial numbers, checksums or the timestamp it was deployed. And I think major versions should be in the library name itself, instead of in the version “number”.
In Leiningen, library users always have to provide some sort of version predicate for each dependency, see https://github.com/technomancy/leiningen/blob/master/doc/TUTORIAL.md#dependencies. There is some specific stuff related to snapshot versions and checkout dependencies, but if you try to build + deploy a project with those, you’ll get an error unless you set up some environment variable. This also applies to boot afaik; the functionality is equivalent to how Java’s Maven works.
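For instance, a minimal project.clj (coordinates and versions illustrative) pins every dependency to an exact version:

```clojure
;; Illustrative Leiningen project file: each dependency names an exact version.
(defproject my-app "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.9.0"]
                 [ring/ring-core "1.6.3"]])
```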
Hmm, I’ve been digging more into Leiningen, and growing increasingly confused. What’s the right way to say, “give me the latest 2.0 version of this library”? It seems horrible that the standard tutorial recommends using exact versions.
There’s no way to do that. The Maven/JVM dependency world always uses exact versions. This ensures stability.
I’m not an expert on Java, but why are we putting method implementations into interfaces when we already have abstract class inheritance for (what feels like) this exact thing?
Then it feels like the solution to this is to enable multiple inheritance on abstract classes, rather than adding default implementations to interfaces.
But that’s not the point of abstract classes. Abstract classes enforce that (1) you must inherit from only one of them and (2) you cannot instantiate them without inheriting them. If you remove either of these constraints you have a regular class, and there definitely are use cases where you want rules 1 and 2 enforced.
I disparaged Java 9 for this too, because it felt like they were further blurring the line between interfaces and abstract classes. After talking with some of my friends (pinging @dsh), it helped me outline some more differences. To expand and maybe give you more to work with:
Abstract classes (to me) feel more class-y, even with these changes. You can’t have final methods in interfaces, nor can you have member variables. As mentioned, you also can’t have multiple inheritance with abstract classes: a child class extends exactly one of them. Interfaces, on the other hand, are just collections of methods. All you’re doing with a default implementation is telling the compiler to use the given method body if it hasn’t been specified by the implementor.
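A small Java sketch (names are made up) of that distinction: the interface contributes a default method body, while the abstract class carries member state and a final method, neither of which an interface can have.

```java
interface Greeter {
    String name();

    // The compiler uses this body unless the implementor overrides it.
    default String greet() {
        return "Hello, " + name();
    }
}

abstract class Animal {
    private final String species; // member state: not possible in an interface

    Animal(String species) {
        this.species = species;
    }

    final String species() { // final method: also class-only
        return species;
    }

    abstract String sound();
}

// A class extends exactly one abstract class but may implement many interfaces.
class Dog extends Animal implements Greeter {
    Dog() { super("dog"); }
    public String name() { return "Rex"; }
    String sound() { return "woof"; }
}
```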
That being said, I think the muddying does harm to people learning the language and the idea of OOP. It gives you two different ways to solve the same problem, and I can see that difference really messing with beginners and intermediates until they fully grasp the philosophy of OOP and when it really is appropriate to use an interface or an abstract class.
I think the intention was to introduce something like Scala’s traits construct without introducing a new keyword. New keywords are practically impossible to add without breaking code.
Whenever Carmack’s .plan is mentioned, I think org-mode should be mentioned as well. Any rigorous planning program, whether manual or automated, is potentially life-changing.
After lobste.rs talked about A Fire Upon the Deep and A Deepness in the Sky, I just finished reading them. Developments like these make me think of the tech in A Deepness in the Sky, thousands of years old and completely irreplaceable. On one hand we lambaste the JS world for rewriting itself every other week, but at the same time we keep Java alive and kicking. I am one of those people that thinks the JVM is long past its prime (every heavy use of Java runs it on hardware they have defined ahead of time and doesn’t benefit from anything the JVM gives them), but I guess we’ll be stuck with Java for a long time as it keeps accreting new functionality to keep chugging along. IBM just posted its first quarter of growth in 5 years. Why? Thanks to the mainframe portion of the business.
I am one of those people that thinks the JVM is long past its prime
IIRC, the JVM is an incredibly fast VM with hundreds of man-years put into making it perform well.
That alone ensures it will stick around for quite awhile.
It’s also incredibly complicated. Language implementations like Go and OCaml are on par with the JVM in most workloads people care about, at a fraction of the complication. The JVM exists because of “write once, run everywhere”, but that vision never panned out.
Got a cite for that? I don’t doubt what you say, but I think a lot of people formed impressions about Java that are of varying levels of accuracy.
The fact that companies like Square, Netflix and Amazon use Java extensively even for greenfield projects should be an indicator that Java is far from the tire fire some people make it out to be.
Sure, it’s verbose, and there are parts of its ecosystem that are excessively complicated, but there are newer choices that have learned from the past and eschew that kind of complication in favor of a much leaner style.
I’m not trying to get anyone to love or use Java who doesn’t want to, but I’d encourage people to challenge their long standing impressions and come to a better understanding of what the language is good at and which use cases might call for its use.
What are you asking for a citation for, exactly?
I’m not saying Java is a tire fire, I am saying its runtime no longer suits the case it was designed for. People know what hardware and processor they are running their programs on.
Language implementations like Go and OCaml are on par with the JVM in most workloads people care about, at a fraction of the complication.
That.
Ah,
One source:
http://benchmarksgame.alioth.debian.org/u64q/compare.php?lang=java&lang2=go
The JVM beats Go for 3 out of 10 problems (and by a pretty significant margin).
For OCaml, it’s 4 out of 10 beaten by a significant margin, and the other numbers are pretty comparable:
http://benchmarksgame.alioth.debian.org/u64q/compare.php?lang=java&lang2=ocaml
The most common use case I see today is server-side, which is almost entirely I/O bound. Erlang compares well to Java in this use case, if only because it utilizes I/O so well.
I don’t have any studies for you other than the fact that everything is a service now.
For the complexity aspect of my claim, I think that is self-evident if you’ve looked at the code of the various runtimes.
So, thanks for some good food for thought. I’ll leave you with this: every problem is different. There are still large swaths of problem space that the Go ecosystem has barely even nibbled at, which have very rich support in the Java world.
That, combined with newer frameworks that are the polar opposite of some of the useless complexity we’ve all battled in the past (take a look at Play for an example of what I mean), can make Java a really great choice for some problem domains.
I just think we need to be careful about making overly general claims, and open minded to the fact that there are huge swaths of the industry still coding in Java for a reason.
I think some aspects of my point are being conflated a bit, though. I’m not making a statement about Java-the-language; I’m making a statement about the runtime. My point about OCaml and Go, which I didn’t make very well, is that these are languages with much simpler runtimes but still quite comparable performance, combined with my claim that the problem the complex runtime solves is not a problem a vast majority of Java users have.
I just think we need to be careful about making overly general claims, and open minded to the fact that there are huge swaths of the industry still coding in Java for a reason.
If you reread my first comment, I think you’ll see I fully acknowledge that. Mainframes are still a money-making business (would you advocate one use a mainframe, though?) It’s a fact that people are running lots of workloads on the JVM. I even work for a young hip company that uses the JVM. But I’d also be cautious of reading too much into that, IME, the “reason” people do it is often not connected to a technological merit.
Mainframes are still a money-making business (would you advocate one use a mainframe, though?)
That depends. What do you mean by ‘mainframe’? There are scads of businesses running on the descendants of mainframes, the IBM Power systems, to this day, running ageless workloads using tools like COBOL and RPG and the like, because those tools suit the use case.
Sure, there are tons of people out there supporting legacy hardware and software nobody in their right mind would choose for a greenfield project, but that’s a different problem.
It’s a fact that people are running lots of workloads on the JVM. I even work for a young hip company that uses the JVM. But I’d also be cautious of reading too much into that, IME, the “reason” people do it is often not connected to a technological merit.
Technical merit has many variables attached. If you’re really talking strictly about runtime size, then you may have a point. I’d argue that for many (most?) people, runtime size is pretty much meaningless.
You’ve successfully proven a couple of assertions like “Go runtimes are smaller than Java’s” and even “under certain circumstances, Go can outperform Java” but I respectfully disagree with the idea that choosing Java might not be the right thing based on technical merit.
The JVM exists to be a Java bytecode interpreter. It’s counterproductive to assign any more labels to it – write once, run anywhere is hardly its main focus these days.
I would argue, based on what most Java software is built for, that the JVM exists to be the best abstract bytecode interpreter there is. It’s not particularly great for small-scale algorithmic benchmarks like the alioth benchmarks game, but where it shines is long-running processes. Servers.
The TechEmpower benchmarks demonstrate this. JVM languages occupy a significant portion of the top 10 in every segment.
Comparing the JVM to OCaml/Go runtimes is not fair. The JVM is a much more complicated beast, given that it supports some very advanced features like:
And the new Graal compiler is really cool.
And HotSpot is just one implementation. There are several enterprise-grade JVMs out there that include crazy things like real-time support (PTC Perc, JamaicaVM), AOT compilation (Excelsior JET), and native execution (IM3910).
I think your claim that the write-once, run-anywhere paradigm has expired is anecdotal. I develop on OSX and run my .jars on Linux and Solaris 11.
As I said, the comparison is not fair. The JVM has about 25 years of engineering behind it. For that reason alone, it is extremely unwise to downplay it as outdated.
I don’t really understand the core of your response. Part of my claim is that the JVM is a big complicated beast, and that’s not a good thing. And your response is “It’s not fair to compare it to <X, Y> because the JVM is a big complicated beast”. How is one to argue that being a big complicated beast is not a positive thing?
Go and Ocaml are on-par with the JVM in most workloads
That’s a huge benefit of the JVM right there. Most developers and their managers have absolutely no idea what their workloads will be 2 years down the road.
Using the JVM obviates the risk of coming back to a system and having to significantly re-engineer it due to Go’s/OCaml’s runtime starting to choke as the amount of data grows:
Service needed 1GB back then, now it requires 10GB.
Using lesser runtimes is basically a bet that your application will never experience an increase in traffic.
Your claim that the JVM just magically scales up to any workload does not match my experience. I see software on the JVM rewritten as much as in any other language. Perhaps you have some specific experience in mind that you could share. Maybe you’re talking about something like Azul? Sure, I’ll grant you that. But in the microservice world, those situations are few and far between. To be clear, I am not saying that some people aren’t just buying bigger hardware to run their JVM programs; I am saying that use case is dwindling, IME.
Well, you said: “Language implementations like Go and Ocaml are on-par with the JVM in most workloads people care about”.
I pointed out that the Alioth benchmark game is not “most workloads”. I gave the TechEmpower benchmarks as a more relevant benchmark environment (web applications, since that’s what most people do). These benchmarks demonstrate that the JVM is more performant than the languages you mentioned.
Where you are correct is that the JVM is a complicated beast. I do not disagree there. But it’s a performant and sophisticated one that is definitely not past its prime.
I am one of those people who think the JVM is long past its prime (every heavy user of Java is running it on hardware they have defined ahead of time, and doesn’t benefit from anything the JVM gives them).
Could you elaborate on this? I find this statement quite confusing.
The point of the JVM is a portable runtime so you can compile your program once and run it anywhere. However, every company I have worked for deploys their program to one platform, running one OS, in a very well-defined environment. The value of the JVM is limited. “What about the JIT?” one might say, but IME, the JIT offers no value over AOT for modern workloads and it’s significantly more complicated.
I do run across projects that make use of the JVM’s non-platform-specific binaries fairly regularly, but for forward compatibility in case of a future platform migration, rather than cross-platform portability of the style where you need to deploy to multiple platforms simultaneously (where, yes, only one platform is usually targeted). It’s not uncommon to find some random ancient .jar file in the libs/ directory, and for the development team to assume that kind of thing is going to keep working forever even if the project migrates to a different platform; there may not even be source, if it was a licensed commercial library. In that respect it has some of the same uses in enterprise as mainframe-style binaries, which also typically compile to some kind of bytecode to ensure binaries will keep running without a recompile, even across major platform updates.
I’m not sure what part of our industry you’re in, but I think those use cases are decreasing over time. I hear horror stories about banks that depend on a .jar just sitting around without source code, but I’m not sure that is motivation enough for why the rest of us have to live with a massively complex runtime.
I think that was the point once, but I don’t think that has been Java’s primary thrust for quite a while. That paradigm made much more sense when Java was aiming for the browser and the desktop, and today those use cases are not the language’s primary focus.
My point is not that the use cases for the language are static, but that they have changed and we are still stuck with this hugely complex runtime for a use case that doesn’t come up much anymore (IME).
Gladly:
See https://lobste.rs/s/hsdcqo/repo_upon_deep#c_vles2i
Specifically this comment from @akkartik : https://lobste.rs/s/hsdcqo/repo_upon_deep#c_y1nfhu
I am interested in personal opinions about CHICKEN vs Racket. I want to get into one of them but I am not sure which one. I am looking at them from the point of view of someone who likes developing web apps. Can anyone share some of their experiences with me?
Caveat: I’m a CHICKEN user.
Racket is a kitchen-sink/batteries-included kind of Scheme that compiles to bytecode that runs in a virtual machine. It’s got the largest Scheme community and ecosystem by far. It seems to excel in GUI in particular. It also has its own varieties like Typed Racket and Lazy Racket, which are quite neat. (You could argue that Racket is a separate dialect of Scheme at this point, as it doesn’t exactly follow the RnRS.)
CHICKEN is a much more minimal Scheme dialect that compiles to C. It’s fast and portable, and the compiled applications are very easy to deploy elsewhere, provided you bundle libchicken.so with the executable (or statically link it). It has a very clean C FFI. It implements most of R5RS with growing R7RS support.
Honestly, if you like developing web apps, I’d personally recommend Racket since it has a sizable and mature codebase for web dev, mostly using a sublanguage called insta (#lang web-server/insta).
On top of the other comment, Racket has the advantage of How to Design Programs being written for it.
What does this mean?
The book How to Design Programs is written by the Racket authors and uses Racket throughout.
Ooh, thanks for the context! :)