Readability is subjective, as the author says. I think a helpful question is “who is the audience?” If I write code and my teammates can’t understand it, that’s a problem. Whether some hypothetical outsider could is not relevant.
Maybe my teammates can’t understand it because they need to learn something, or maybe because the code is poorly-written. In either case, I need to help solve the problem, lest I leave them with something they can’t maintain.
I think a faster way to do this is to use a LFSR generator to create a sequence, then select ‘rowid in (…)’. This will be constant time even for deep pages.
Interesting. If I understand correctly, you’re saying I should generate a list of ids in memory on each request, then query the db for rows with those ids?
Yeah. You can find lots of examples by searching for LFSR generator. It has a very small internal state that you can add to the request. Also used for the Doom fizzlefade effect. http://fabiensanglard.net/fizzlefade/index.php
You can pick parameters based on your table size. One issue you might have is if you have missing rows, you’ll only get 18 or 19 rows instead of 20 sometimes, but that’s probably not a problem for infinite scroll. Or you select a few extra, etc.
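For concreteness, here’s a minimal sketch of the idea in Python. The 5-bit register and taps are my choice for illustration; a real table needs a register wide enough to cover your maximum rowid.

```python
def lfsr_ids(seed, n):
    """Walk a 5-bit Galois LFSR (taps 0b10100, i.e. x^5 + x^3 + 1).

    Returns n distinct pseudo-random ids in 1..31, plus the final
    state to carry into the next request. seed must be in 1..31.
    """
    state = seed
    ids = []
    for _ in range(n):
        ids.append(state)
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= 0b10100  # maximal-length taps: period is 31
    return ids, state

page, next_seed = lfsr_ids(seed=1, n=20)
# constant-time "page 3" query, no OFFSET needed:
sql = "SELECT * FROM t WHERE rowid IN (%s)" % ",".join(map(str, page))
```

Since the taps give a maximal-length register, the state cycles through all 31 nonzero values exactly once before repeating, so pages never overlap; the only per-request state is the seed.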
This is a very cool idea. The main downside I see is that it doesn’t play well with other WHERE criteria; if we’ve pre-selected the ids for page 3, but few or none of those records match a WHERE condition we want, we’re out of luck.
But I’m definitely going to keep it in mind for future reference. It could be done even without an LFSR, by pulling SELECT id FROM table, chopping it into pages in app memory, and caching it.
Reminds me of a quote:
Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.
- Brian W. Kernighan
:) Came here to post that.
The blog is good but I’m not convinced by his argument. It seems too worried about what other people think. I agree that we have to be considerate in how we code but forgoing, say, closures because people aren’t familiar with them or because we’re concerned about how we look will just hold back the industry. Higher level constructs that allow us to simplify and clarify our expression are a win in most cases. People can get used to them. It’s learning.
I think he may not disagree with you as much as it sounds like. I don’t think that sentence says “don’t use closures,” just that they’re not for impressing colleagues. (It was: “You might impress your peers with your fancy use of closures… but this no longer works so well on people who have known for decades what closures are.”)
Like, at work we need closures routinely for callbacks–event handlers, functions passed to map/filter/sort, etc. But they’re just the concise/idiomatic/etc. way to get the job done; no one would look at the code and say “wow, clever use of a closure.” If someone does, it might even signal we should refactor it!
It seems too worried about what other people think.
I agree with your considerations on learning.
Still, just like we all agree that good code must be readable, we should agree that it should be simple too. If nothing else, for security reasons.
On the other hand, sometimes concepts at the limit of my understanding (like advanced formal verification techniques) allow me to write better code than if I had stayed well within my mental comfort zone.
Clojure. It felt like the natural progression, especially since I was interested in diving deeper into FP. Now I can’t not love s-exps and structural editing, as well as even more powerful meta-programming.
(Also notable that I saw Russ Olsen, author of Eloquent Ruby, moved to Clojure, and now works for Cognitect.)
I’m really interested in Clojure, but compared to Ruby there seems to be an order of magnitude fewer jobs out there for it.
I can’t swing a dead cat without seeing 4 or 5 people a week looking for senior Rubyists. I’ve seen maybe 2 major Clojure opportunities in the last 6 months.
I can’t swing a dead cat without seeing 4 or 5 people a week looking for senior Rubyists.
What’s been your success rate when bringing carrion to job fairs?
Clojure is absolutely great and so is Russ. He still loves Ruby (as well) though :)
I still maintain that one of the best books I ever read for my coding skills is Functional Programming Patterns in Scala and Clojure.
Clojure never really got me personally - I would have liked it, but the weirdly short names, friends telling me that for libs tests are considered more “optional”, & other things were ultimately a bit off-putting to me. Still wouldn’t say no, just - switched my focus :)
Tests are definitely not considered optional by the Clojure community. However, you’re likely to see a lot fewer tests in Clojure code than in Ruby.
There are two major reasons for this in my experience. The first is that development is primarily done using an editor-integrated REPL, as seen here. Any time you write some code, you can run it directly from the editor to see exactly what it’s doing. This is a much tighter feedback loop than TDD. The second is that functions tend to be pure, and can be tested individually without relying on the overall application state.
So, most testing in libraries tends to be done around the API layer. When the API works as expected, that necessarily tests that all the downstream code is working. It’s worth noting that Clojure Spec is also becoming a popular way to provide specification and do generative testing.
I’m a Rubyist who moved to Elixir. The BEAM seems to be fundamentally a better foundation for web development than Ruby can offer: concurrency, fault tolerance, and not having your service fall over because of one expensive request. There are fewer libraries (for now), but it’s easier to add libraries to Elixir than to build shared-nothing concurrency into Ruby.
Saša Jurić’s talk “Solid Ground” explains this well and has some nice demos. https://www.youtube.com/watch?v=pO4_Wlq8JeI
I also wrote a related post: http://nathanmlong.com/2017/06/concurrency-vs-paralellism/
Elixir is great and I feel like most of the major building blocks are there. It’s not just Elixir itself, though - especially these days I just feel like Ecto is so much better. ActiveRecord triggering DB requests whenever, along with all those validations, can take a hard toll. Today I had to get preloading an association to work while only selecting certain columns. Not nice. It would be nicer in Ecto, as Ecto is just a tool to work with the database.
Thanks for Saša’s talk - didn’t know that one yet. On the “to watch list” now :)
As for elixir - I also have a list of non performance reasons I like it: https://pragtob.wordpress.com/2017/07/26/choosing-elixir-for-the-code-not-the-performance/
I wonder whether the extra index would help noticeably if they were using edge pagination instead of limit+offset pagination?
OP here. I’m not familiar with “edge pagination”, and my Googling fails me. Care to explain?
(Is it the same as “keyset pagination”?)
I think so. I’m not certain because different authors seem to like different terminology.
I’ve got a table with an index on it, and I’m presenting the content of that table to the end user in the same order as that index. With every query, the user sends me the value of that index from the last element on the previous page that I sent them, and in my next query I’ll select up to (page size) records where the index value is > the “edge” that they sent me.
e.g.
CREATE TABLE dogs (name TEXT PRIMARY KEY);
INSERT INTO dogs (name) VALUES ('Annabelle'), ('Bernard'), ('Clarence'), ('Daisy'), ('Edgar'), ('Francine'), ('Gerald');
SELECT name FROM dogs ORDER BY name LIMIT 2; -- first page, returns ['Annabelle', 'Bernard']
SELECT name FROM dogs WHERE name > 'Bernard' ORDER BY name LIMIT 2; -- second page, returns ['Clarence', 'Daisy']
SELECT name FROM dogs WHERE name > 'Daisy' ORDER BY name LIMIT 2; -- third page, ['Edgar', 'Francine']
SELECT name FROM dogs WHERE name > 'Francine' ORDER BY name LIMIT 2; -- last page, ['Gerald'], can tell that this is the last page because I got fewer results than the limit
the idea being that the DB just has to find the starting point I pass it, which is one walk down a b-tree from the top, then it can walk along the b-tree siblings until it hits the LIMIT.
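That loop could be sketched in Python with sqlite3, reusing the dogs table from the example above (the `? IS NULL` clause is just a compact way to make the first page skip the edge filter):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dogs (name TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO dogs (name) VALUES (?)",
                 [(n,) for n in ["Annabelle", "Bernard", "Clarence",
                                 "Daisy", "Edgar", "Francine", "Gerald"]])

PAGE_SIZE = 2
pages = []
edge = None  # the "edge": last name on the previous page, sent back by the client
while True:
    rows = conn.execute(
        "SELECT name FROM dogs WHERE (? IS NULL OR name > ?) "
        "ORDER BY name LIMIT ?", (edge, edge, PAGE_SIZE)).fetchall()
    pages.append([r[0] for r in rows])
    if len(rows) < PAGE_SIZE:
        break  # fewer rows than the limit: this was the last page
    edge = rows[-1][0]
```

Each iteration is one index seek plus a short scan, regardless of how deep into the result set you are.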
I started using StackOverflow in the beta days and have a hojillion points now. I’m still passively getting points from things I posted at the start.
However, points aren’t meant to say “you’re a good developer.” They’re meant to say “you contribute the kind of content that StackOverflow users want and therefore we trust you to keep doing so.” Editing privileges, etc, flow from that understanding.
It’s probably impossible for someone starting today to ever catch up to my score. But I’d argue it doesn’t matter, if they can get enough points to get privileges and keep curating the site. (If not, the site will suffer.)
I’ve seen complaints about “rules lawyering” and such for years, and yes, it does happen. But personally I’m still able to get good answers quickly. Eg, I posted https://stackoverflow.com/questions/47016737/xpath-for-text-matches-one-of-the-following-strings a month ago and nearly immediately got great help.
I’m not sure if my success there is due more to the fact that I know how to ask questions on StackOverflow or to the fact of my high score. But for me at least, the demise of StackOverflow is greatly exaggerated, and has been for years.
I’m not sure if my success there is due more to the fact that I know how to ask questions on StackOverflow or to the fact of my high score.
That’s a pretty simple experiment. Make a new account and ask some questions and see how they are treated. I suspect you’ll have so much less success that it’ll surprise you.
Did you ask your CTO about it?
Yes. He said no. I’m being paid to develop their software, not my own or someone else’s software.
I usually want and try to contribute, the major roadblock is usually the barrier to entry; I find it rather difficult to enter open source project and contribute meaningfully.
That’s an extraordinarily narrow view of things by a CTO (I was one), but it might have been how you broached the subject, for example.
As to the second roadblock, I suggest taking something you use or want to learn and read the docs. Try to use it. If the docs don’t match the behavior, correct the docs and submit it. If there’s no example, submit one. If you find a bug, try to fix it. Can’t figure it out? Compose a test case that demonstrates it and submit that. No tests? Write some. If it’s using an old version of some library, try updating it and run the tests. Very quickly you become a useful contributor.
The other scenario that the original article doesn’t take into account is what if the OSS code you’re using just works fine for you and doesn’t need any changes for the use case you need? It doesn’t get blogged about much, but most software development is just adapting existing solutions to the use case of a specific business; in a situation like that it would be a bad sign if you were constantly patching the underlying libraries to get them to do what you need.
Quite true. It’s really nice when things just work.
I will say sometimes you run into a project that is not-really-open-source. I’m using a few of these now and we’ll probably never blog about it. The code is OSS in all the usual ways except that the company that produced it doesn’t really want to accept patches, and while their design supports plugins, the language and tooling don’t unless the code is in the main tree. After months of going from pillar to post with various fixes and some generally useful extensions, we’ve just accepted that we’ll maintain internal forks and suffer the merges while we look for replacements and write proprietary ones. Which I think is a bit of a shame.
You never find problems, missing features, or confusing docs?
What ecosystem of code do you work in?
The issue is, I don’t really work on anything that would make me come into serious enough contact with any open source library, let alone setting up the development environment for patching it and then submitting upstream, then waiting for the next release to trickle into Debian.
Agreed. And going to a website promoting ostensibly professional software only to see “sexy” in large type multiple times just doesn’t feel work appropriate.
“the little sweet and sexy” is just not a phrase you should be using to describe software. It’s off-putting to people, and it’s generally (at least in pop culture) used by lecherous old men. This feels like yet another example of how tone-deaf men in tech can be.
Glad you took the time to insult and signal how much better you are than those lecherous, tone-deaf old men who wrote some free software for you. It’s really a great way to earn friends and show them the error of their ways by shaming people publicly. /s
p.s. I agree with the sentiment, and hwayne’s comment is far more appropriate than some of the others I have seen. He expresses his own opinion, not theoretical opinions of others, and doesn’t shame anyone.
p.p.s. The funny thing is, rereading my own comment, I see I am not even following my own advice! A better comment would be something like:
I do not agree with calling potentially well meaning people “tone deaf”.
Same for me, but that’s probably a sign of the times. I also have the same feeling when people say that they love this company or that software.
Of course when old established projects use such lingo it may sound like when old people say something in teenage slang. It will feel off to teenagers and alien to other old people. Sort of an uncanny valley?
At some point you are reading way too far into things… It just means ‘stronger than like’ in that context.
I love my pet dogs. I love good food. I love good software.
It may be because I’m not a native English speaker. In my language, love is mostly reserved for the top emotion. So if you love something (your work or a music genre) it means that it can literally compete with the feeling you have for, e.g., your spouse. I guess it’s something that I can’t get over. Especially regarding purely profit-motivated endeavours.
Almost certainly a native/non-native speaker thing. In American English at least, ‘love’ is a pretty tame word that gets thrown around for everything. There really isn’t a distinct word for, e.g., the feeling one feels about their spouse, or about their kids. Usually ‘love’ is used there too, and context determines the level of affection.
Occasionally you might see modifiers like, “brotherly love”, “fatherly love”, “familial love”, etc. That’s not super common though, mostly just context to delineate the quality of the usage.
What is your native language? I know Greek has a few different words for different classes of ‘love’, and I imagine it’s not super uncommon, but I’m always curious about language related topics and the different quirks various languages have.
I’m Polish. We say something like “brotherly love” or “fatherly love”. One can love their work, hobby and certainly their pet. But when someone says that he loves food or a thing, it sounds strange. “Like” is “lubić”. “Love” is “kochać”. “Love” in the context of things would be more commonly translated to “uwielbiać”. It literally means “worship”, but in this context it is really more like “love” used as “stronger than like”. So maybe it is crazier than in English.
Love as a verb is “kochać”. But love as a noun is “miłość”. So “kochać” means that you feel “miłość” to somebody.
I’ve heard people from the more pop part of the younger generation saying such things, but it sounds to me like a literal translation from English. I’ve heard it in movies, especially children’s movies. It almost always sounded off to me, but the next generation is learning this foreign use. So I guess I’m doomed, thanks to globalization ;).
I’m also Polish and to be honest I find nothing strange in the usage of “love” in the context of “food or a thing” (both in Polish and in English). Considering that it seems from your LinkedIn profile that I’m older (32) than you, I think your generalizations about the younger generation are wrong :)
Yes. Also: laptops, companies, fields of study, consumer electronics, genres of literature, fonts, cooking techniques…
Unless you are literally indicating sexual attractiveness, please use a word such as “exciting”, “sleek”, or “fashionable”.
I don’t think I have a problem with the sexy part, I have a problem with the screenshots make it not even look all that great. Those fonts are terrible. There’s nothing in the feature list that really even makes me want to try it out over the editors/IDEs I currently use.
I filed an issue. Please consider +1’ing it.
Yes, sexy is gender neutral. What makes it potentially offensive to women is the association with exploitation and objectification.
The word itself isn’t offensive. I can say that I find my wife to be drop dead sexy, but that’s because in that context it’s entirely appropriate.
I completely agree that sexy in context of software sounds strange at best. I just don’t think that mentioning one particular gender in that issue was needed.
Fascinating that you see it that way. When there is a gigantic groundswell of people saying “your behavior makes me uncomfortable” I try to change that behavior.
I for one value women in tech. I find their presence in my day to day working life improves my productivity and the productivity of the teams I work on, as does a diversity of backgrounds, opinions and characteristics.
So, for me this isn’t about offense, it’s about trying to make the industry I care deeply about a more welcoming place for a group of people I also care deeply about.
Folks can play dumb about “sexy” alone, but when you address the complete phrase, “little, sweet, and sexy,” someone’s gotta be pretending to be reeeal oblivious to show up and say “oh, that’s neutral, we’re not talking about software the way we wanna talk about women.”
Anyway, keep speaking up, because yeah, it’s not “taking offense on behalf of others”, it’s paying attention to them and having consideration without them having to speak up every time. And I sure as heck don’t like to wade directly into this kind of talk on lobsters very often; it’s rarely worth it.
Thanks. I think that’s why it’s important for people in privileged situations like myself to at least try and raise awareness. I don’t let the negative comments get to me - I was donning my asbestos underwear and wading into email/USENET threads before most of these people were born :)
I can’t imagine people talking about women that way. Would be super creepy to use a phrase like “sweet and sexy” about a person instead of a thing…
Maybe you are (or someone reading this is) not aware of the counter argument, so I thought I’d share: the implication in your comment is that sex necessarily exploits women, which is false. The idea that sex necessarily exploits women reinforces the belief that we must protect women from sex as we do children. This is a defining aspect of anti-sex, Third Wave feminism, which I believe runs counter to the feminist goals of dismantling fascist and patriarchal structures in society.
I very rarely see a groundswell of people saying “Your behavior makes me uncomfortable”.
What I actually see is people saying “I assume your behavior is making somebody else uncomfortable, and I am taking the credit for ‘fixing’ you”. I far prefer the original comment from hwayne where he was talking about his own opinions, rather than imagining those of other people.
My upvotes usually mean “you speak for me also”. It’s quite a time saver. :) So, to clarify, I myself personally was made uncomfortable by someone describing software as “sweet and sexy”. So much so that I only skimmed the first page or so and closed the tab.
I assume they had good intentions. If I were the author, I’d work a bit more to come up with some way to express my excitement at having written something cool, without sounding creepy.
And I’d like to be very clear, I don’t disagree with the argument, I disagree with some of the methods used to enforce them.
I for one value women in tech. I find their presence in my day to day working life improves my productivity and the productivity of the teams I work on, as does a diversity of backgrounds, opinions and characteristics.
Non-native English speaker here. How does the term sexy offend only women and make them unwelcome in OSS? I mean, I understand the top comment (by hwayne) saying how it would make someone uncomfortable, but I don’t understand why it is limited to women.
The word “sexy”, when used to mean that something is sexually attractive, is what it is. You may or may not be expressing something offensive when you use it. The word “sexy” when used to describe something that is not sexual - a car, an algorithm, a user interface - still evokes the idea of sex. It implies that you should feel sexually “turned on” by it, even if it is not literally a thing with which you would have sex.

Given the cultural and historical context of our times, a professional environment where people are expected to feel sexually “turned on” by things, or where the idea of sex is constantly referred to when it is not technically relevant, is not an environment where many women will assume they are respected or even safe. You personally might go ahead and assume you are safe and respected. Many women won’t.

This reduces the pool of women who are interested in applying for jobs at your company, or interested in staying once they have experienced it for awhile. The people who create the culture of a company either care about that, or they don’t.
For those who are about to read: note that geany.sexy is not managed by the maintainers of the Geany IDE, so the issue didn’t end up going anywhere.
This seems like a silly thing to even care about. It’s like the whole master/slave IDE cable debate. Seriously, it doesn’t need to be a big deal. It’s not even the editors official site. There are more important things to spend time on.
We could just call this Developer Propagandist. Actually, for some companies with a brand that enjoys that level of humor, that might even work.
Overall, I personally find the title “Developer Evangelist” to be a bit on the pretentious side, as the product being propagandized isn’t usually something that could even pretend to be a proper life-changer.
I would probably want to hear from the first person I saw calling themselves a “Developer Propagandist”. Humor + honesty FTW.
Yep. As a Christian, I’ve always felt the term “evangelist” as applied to marketing is disrespectful.
If the role is about teaching developers the benefits of a product, why not “developer instructor”?
Another writeup (mine): http://nathanmlong.com/2015/03/understanding-big-o-notation/
The real-world examples I gave were around greeting people at a conference.
I applied this to “how would you re-implement Ruby’s Hash?” in http://nathanmlong.com/2015/10/reimplementing-rubys-hash/
I’m sympathetic to the desire for simplicity, fast feedback, purpose-built code, etc. And “no framework” is the right approach sometimes.
But for internet-accessible code, security is a Big Hard Problem. And frameworks have often solved, or provided simple ways to solve, a wide variety of security issues. See http://guides.rubyonrails.org/security.html, for example.
Writing your own framework means taking responsibility not just for what the software does (which you can encode in tests), but for what it doesn’t do or allow. It’s always the things you don’t think of that get you in security.
How do you assess and mitigate that risk without spending a lot of time on “waste” coding - building protections you could have gotten for free from a framework?
Somebody marked this as spam? 😦 😦 Wow. I put a lot of work into explaining technical concepts in this post, and it doesn’t sell anything (unless you count Erlang, which is free).
I don’t get it.
I also know that some startups offer below-average salaries because of the options you get as an employee when joining a startup.
This is kind of the same thing though, isn’t it? Options are worthless until there is a liquidation event. By not calling this out, the employer is hoping you’ll add the potential value of the options to your salary to get total comp, but they’re still underpaying you.
Yes, exactly. It’s “virtual” money. It’s one of those things I don’t get - why founders try to do such things to their employees.
As I once said elsewhere:
Stock options are for when you want to tightly couple your career and investing decisions at a moment when you’re excited and biased and trying to please someone who can offer or deny you a job.
Before taking options, maybe ask yourself if you’d invest in this company if you weren’t going to work there.
Personally, I consider stock options to be basically worthless, because:
Getting paid in options makes me an investor in the business, and investing in individual stocks is always risky. Getting paid in cash means I have zero risk, and can invest in mutual funds or whatever seems prudent to me.
True, the salary is lower but the expected value of the compensation isn’t usually different. Here’s how it works.
Let’s say you have a job that pays you $100k a year, no bonuses, and no option to buy into the company. How much do you expect to have been paid after four years, barring any promotions or raises? That’s $100k/yr * 4yr = $400k.
Now suppose you have a different job. It only pays $90k a year, but you get to buy a .001 ownership stake in the company at negligible cost on a four year vest. There’s an 80% chance that the company will fail and go bust when it runs out of money after year four, but a 20% chance that the company could get sold outright for $200M. What do you expect to have been paid after four years?
There are two possibilities. If the company goes bust, you get $90k/yr * 4yr = $360k. If the company is successfully sold, you get your $360k salary plus an additional $200k from the sale of your share of the company, for a total of $560k.
Since you have probabilities for the two outcomes, you can calculate the expected value. You have a 20% chance of earning $560k, and an 80% chance of earning $360k. (20% * $560k) + (80% * $360k) = $400k.
So the salary in the second job is 10% lower, but the expected pay after four years is exactly the same.
What makes optioned offers so difficult to evaluate is the uncertainty in assigning the odds and payouts of success. Are the odds of liquidity really as high as 20%, and will the company really sell for $200M? Or is it more like a 10% chance at a $500M sale? And is that better or worse? Expected value lets you weigh those options.
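The arithmetic above is easy to check in a few lines of Python (same hypothetical numbers as the worked example):

```python
def expected_comp(salary, years, p_exit, payout):
    """Expected total compensation: you earn the salary either way,
    plus the equity payout weighted by the probability of a liquidation."""
    base = salary * years
    return p_exit * (base + payout) + (1 - p_exit) * base

# Job A: $100k/yr, no equity
ev_plain = expected_comp(100_000, 4, p_exit=0.0, payout=0)
# Job B: $90k/yr plus a 0.001 stake in a $200M sale, at 20% odds
ev_equity = expected_comp(90_000, 4, p_exit=0.20, payout=0.001 * 200_000_000)
```

Both come out to $400k, which is the point: the lower salary and the option lottery can have the same expected value.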
What makes optioned offers so difficult to evaluate is the uncertainty in assigning the odds and payouts of success. Are the odds of liquidity really as high as 20%, and will the company really sell for $200M? Or is it more like a 10% chance at a $500M sale?
That’s my problem: either of those numbers could be anything.
Also, imagine the stock and job being decoupled: you don’t work at this company, but you have the chance to invest $10k, with 80% odds of losing it entirely. Would you? I wouldn’t.
you get to buy a .001 ownership stake in the company at negligible cost on a four year vest.
Complication: what happens if you want to leave the job after 2 years? So far I haven’t had a 4-year job in tech.
That’s my problem: either of those numbers could be anything.
Like a lot of things, it takes explanation and practice to get a feel for how it works, but the numbers really can’t be anything.
It’s fairly common knowledge that nine out of ten startups fail outright. So a ten percent success rate is a pretty reasonable probability to assign if you know absolutely nothing else. The valuation range at liquidity is pretty limited too. A valuation higher than $500M would be outstanding, but $100M to $200M is a lot more common and a safer bet, again if you know nothing else. One way to get a better estimate than that is simply to ask what the founders think. They have to answer that question for investors all the time. Yes, it might be wildly optimistic, but at least you can use it as an upper limit.
Also, imagine the stock and job being decoupled: you don’t work at this company, but you have the chance to invest $10k, with 80% odds of losing it entirely. Would you? I wouldn’t.
Obviously if you can’t afford to be without the $10k for the investment period, or forever, you can’t take that bet. But if we’re still talking about a payout on success of $200k with a confident 80% failure estimate, then the expected value math says it’s a good deal, and I’d certainly take it if I could afford to be without the $10k.
However, those kind of deals (small, lucrative) are usually only made available to employees, as one of the benefits for doing work with the startup. If you evaluated the company and wanted in on the deal, but for some reason didn’t want to work there, you’d have to come up with a much bigger ‘put’ than $10k to buy into it. Think ten times that amount, if they wanted funding partners at all.
Complication: what happens if you want to leave the job after 2 years? So far I haven’t had a 4-year job in tech.
Most companies that aren’t on an imminent failure course will simply exercise their “right of first buyback” on your shares. You’ll get back whatever money you paid for them, and you’ll walk away from the job having made whatever your salary was for those two years.
Let’s start with your salary. The standard workweek in the US is 40 hours. If you’re going to be working 70 hours a week, that means you’re working 75% more hours than usual. Or, to put it another way, the company is offering to pay you about 43% less than market rate for your time (40/70 ≈ 0.57).
When comparing job offers, I’ve learned to calculate an hourly rate for each of them: (salary + estimated value of benefits) / hours I expect to work in a year. (More vacation means fewer expected hours.) There’s some research / guesswork (how much would insurance cost me independently?) and judgment (does going to a conference count as work or vacation? Maybe half and half?). Also, I value things besides hourly rate (eg, how nice does the team seem, and how do I feel about the company’s ethics?). But it’s still a useful comparison.
Also, contracting makes that calculation wonderfully simple: the hourly rate is… the rate they pay me.
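That per-offer calculation could be sketched like this in Python (all the figures below are made-up inputs for illustration, not recommendations):

```python
def hourly_rate(salary, benefits_value, hours_per_week, vacation_weeks):
    """Effective hourly rate: total comp divided by hours actually worked
    in a year (more vacation means fewer worked weeks)."""
    worked_weeks = 52 - vacation_weeks
    return (salary + benefits_value) / (hours_per_week * worked_weeks)

# Same nominal package, two very different workweeks:
standard = hourly_rate(100_000, 10_000, hours_per_week=40, vacation_weeks=4)
crunch   = hourly_rate(100_000, 10_000, hours_per_week=70, vacation_weeks=4)
```

At these numbers the 70-hour week cuts the effective rate from roughly $57/hr to roughly $33/hr, which makes the comparison between offers concrete.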
TL;DR PostgreSQL is much less likely to destroy your data.
“Will it reliably and correctly store and query my data?” is the first question to ask about a database, IMO.
If not, speed and scalability just tell you how bad your problems will get and how fast.
TL;DR - if you source a script on your page from another domain and it changes owners, they can execute whatever code they want on your domain.
Seems like the best mitigation for this is to use subresource integrity: https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity
That way, if the new owner serves a modified file, your page won’t execute it.
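The integrity value is just a labeled, base64-encoded hash of the exact bytes you expect to be served. A quick Python sketch of how you might compute one (the CDN URL below is a placeholder):

```python
import base64
import hashlib

def sri_hash(content: bytes) -> str:
    """Build a Subresource Integrity value: 'sha384-' plus the
    base64-encoded SHA-384 digest of the file's bytes."""
    digest = hashlib.sha384(content).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

script = b"console.log('hello');"  # the exact bytes you expect to serve
integrity = sri_hash(script)
tag = ('<script src="https://cdn.example.com/x.js" '
       'integrity="%s" crossorigin="anonymous"></script>' % integrity)
```

If the host ever serves different bytes, the hash no longer matches and the browser refuses to execute the script.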
Can you be more specific?
At least the optimization itself would be applicable to many programming languages: Setting a good initial size for a container.
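In Python, for instance, the effect shows up when you grow a list element by element versus allocating it at its final size up front (a toy sketch; the win is much bigger in languages where the container copies its contents on growth):

```python
from timeit import timeit

N = 100_000

def grow():
    xs = []
    for i in range(N):
        xs.append(i)  # the list reallocates periodically as it grows
    return xs

def presized():
    xs = [None] * N  # allocate the final size up front
    for i in range(N):
        xs[i] = i
    return xs

assert grow() == presized()  # same result either way
print("grow:", timeit(grow, number=10),
      "presized:", timeit(presized, number=10))
```

On CPython the presized version typically edges out the growing one by skipping the periodic reallocations; the same idea applies to setting an initial capacity on a Java HashMap or a C++ vector.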
Java is dead in the sense C++ is dead. Once dominant, now one of the languages used by an increasingly old guard. Of course there are still projects in Java, and even likely some people coding applets for old times’ sake.
But you can ignore Java at this point without handicapping your career.
I work for start-ups in the Bay Area and I can tell you that Java is very much alive and well, and used for new things every day. Nobody writes GUI apps in it anymore, but in the back-end it is widely popular.
But you can ignore Java at this point without handicapping your career.
I agree with you, but I can’t think of a language that’s not true of. There are a lot of language ecosystems that don’t overlap much if at all - Java, Ruby, Python, .NET, Rust, Erlang…
I think not having some understanding of the level C works at can be a bit of a handicap, at least from a performance standpoint. Though that’s less about the language itself than about being able to reason about bytes, pointers, and allocations when needed.
That wasn’t true, say, 15 years ago. Back then, if you wanted professional mobility outside certain niches, you had to know Java.
I’m going to respectfully disagree. 15 years ago, you had Java, and you had LAMP (where the “P” could be Perl, PHP, or Python), and you had the MS stack, and you still had a great deal of non-MS C. After all that, you had all the other stuff.
Yes, Java may have been the biggest of those, but relegating “the MS stack” to “certain niches” perhaps forgets how dominant Windows was at the time. Yes, OSX was present, but it had just come out, and hadn’t made any significant inroads with developers yet. Yes, Linux was present, but “this is the year of Linux on the desktop” has been a decades-long running gag for a reason.
MS stack was in practice still C++/MFC at the time, and past its heyday. The dotcom boom dethroned desktop, Windows and C++ and brought Java to prominence. By 2000, everyone and their dog were counting enterprise beans: C++ was still massive on Monster, but Java had a huge lead.
Then Microsoft jumped ship to .NET, and C++ has not recovered ever since. In the mid-90s you were much more likely to land a job doing C++ than plain C; now it’s the opposite.
My karma shows I hurt a lot of feelings with my point, but sorry guys, Java is in visible decline.
Oh, my feelings weren’t hurt, and I don’t disagree that Java is in decline. I merely disagree with the assertion that, 15 years ago, you had to know Java or relegate yourself to niche work. I was in the industry at the time. My recollection is that the dotcom boom brought perl and php to prominence, rather than java.
Remember that java’s big promise at the time was “run anywhere”. Yes, there were applets, and technically servlets, but the former were used mostly for toys, and the latter were barely used at all for a few years. Java was used to write desktop applications as much as anywhere else. And, you probably recall, it wasn’t very good at desktop applications.
I worked in a “dotcom boom” company that used both perl and java (for different projects). It was part of a larger company that used primarily C++ (to write a custom webserver), and ColdFusion. The java work was almost universally considered a failed project due to performance and maintenance problems (it eventually recovered, but it took a long time). The perl side ended up getting more and more of the projects moving forward, particularly the ones with aggressive deadlines.
Now, it may be that, by 15 years ago, perl was already in decline. And, yes, java took some of that market share. But python and ruby took more of it. A couple years later, Django and Rails both appeared, and new adoption of perl dropped drastically.
Meanwhile, java soldiered along and became more and more prominent. But it was never able to shake those dynamic languages out of the web, and it was never able to make real inroads onto the desktop. It never became the lingua franca that it wanted to be.
And now it’s in decline. A decline that’s been slowed by the appearance of other JVM languages, notably scala (and, to a lesser degree, clojure).
Instances of Java in my life have only increased as my career has; I’m quite certain Java is far from dead, and we’re all the worse for it. I’ve even worked for “hip” millennial companies that decided they needed to switch to Java.
Java is still alive and kicking. We have a language that has proven itself good enough, with a rich ecosystem, and different vendors have implemented their own JVMs. We’re all the worse for that because...?
Often the need for a deep copy (in any language) signals bad design: it means you don’t know for sure whether the code you pass an object to will mutate it or its deep sub-parts. That’s overly defensive programming. In practice I rarely encounter even a shallow copy (and when I do, it’s almost always right before mutating the object), and I almost never encounter a need for a deep copy.
I’m not an FP zealot, but I work with Elixir these days, and “deep copy” is an alien concept there. If you add an item to a list, you get a new copy of the list with the new item added. It’s impossible to affect any other bit of code that had the old list.
It’s nice not to have to think about that.
I agree it’s bad practice. My moment of enlightenment was realizing the problem is isomorphic to serializing/deserializing an object graph. For example, if you have machinery to completely serialize the state of an object (or an object graph), just use that for a “deep copy”: serialize once, then deserialize as many times as you need copies.
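A minimal sketch of that trick in Java, using the built-in serialization machinery (it assumes everything in the graph is `Serializable`; the nested-list test data is made up):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class DeepCopy {
    // Deep copy by round-tripping the object graph through serialization:
    // everything reachable from obj is written out and reconstructed fresh.
    @SuppressWarnings("unchecked")
    static <T extends Serializable> T deepCopy(T obj) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        try (ObjectInputStream ois =
                new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            return (T) ois.readObject();
        }
    }
}
```

Mutating a nested element of the copy leaves the original untouched, which is exactly what distinguishes this from a shallow copy. (It's also much slower than a hand-written copy, so it's a convenience, not a performance technique.)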
I don’t agree that it’s bad design. Let’s say you have a method that updates an Entity and, at the end, emits an event containing both the new Entity and the old one. One solution is to fetch the entity from the database, clone it into a $previousEntity variable, use $entity to perform the changes you need, and then emit the event. That doesn’t mean you should have mutable VOs, but in my experience every place I’ve worked has had exceptions for one reason or another, and it can be very easy to start having bugs that are very difficult to trace back to their source ;)