1. 6

    I find that the combination of points 2 (evolving) & 3 (old enough to have an established body of literature) is actually rather a bad thing, as there is an enormous amount of outdated literature. Some of it will teach you to write old-school code that will not be welcome in a modern C++ codebase. The rest of it will devote 80% of its pages to writing elaborate constructs that implement features which are now part of the language (hand-rolled smart pointers and function objects, for example, where modern C++ has std::unique_ptr and lambdas).

    1. 3

      I found that, too. It was especially jarring since part of point 2 is that different C++ versions can behave differently with the same code – even if that situation is rare, that knowledge makes any legacy documentation suspect.

    1. 3

      For a good laugh, take a look at this PR.

      1. 17

        It’s both easier and more polite to ignore someone you think is being weird in a harmless way. Pointing and laughing at a person/community is the start of brigading. Lobsters isn’t big enough to be competent at this kind of evil, but it’s still a bad thing to try.

        1. 6

          https://github.com/tootsuite/mastodon/pull/7391#issuecomment-389261480

          What other project has its lead calmly explaining the difference between horse_ebooks and actual horses to clarify a pull request?

          1. 3

            And yet, he manages to offend someone.

            1. 4

              Can someone explain the controversy here? I legitimately do not understand. Is the individual claiming to be both a computer and a person? Or do they just believe that someday some people will be computers, and desire to future-proof the messages (as alluded to in another comment)?

              1. 7

                This person is claiming they think of themselves as a robot, and is insulted at the insinuation that robots are not people.

                Posts like this remind me of just how strange things can get when you connect most of the people on the planet.

                1. 6

                  So, I tried contacting the author:

                  http://mynameiser.in/post/174391127526/hi-my-name-is-jordi-im-also

                  Looks like she believes she’s a robot in the transhumanist sense. I thought transhumanists thought they would be robots some day, not that they already are robots now.

                  I tried reading through her toots as she suggested, but it was making me feel unhappy, because she herself seems very unhappy. She seems to be going through personal stuff, like getting out of a bad relationship or something.

                  I still don’t understand what is going on and what exactly she means by saying she’s a robot. Whatever the reason, though, mocking her is counterproductive and all around a dick thing to do. Her request in the PR was denied, which I think is reasonable. So “no” was said to something, contrary to what zpojqwfejwfhiunz said elsewhere.

                  1. 6

                    As someone who’s loosely in touch with some of the transhumanist scene, her answer makes no sense and was honestly kind of flippant and rude to you.

                    That said, it sounds like she’s been dealing with a lot of abuse lately from the fact that this Github thread went viral. I’m not surprised, because there are certain people who will jump on any opportunity to mock someone like her in an attempt to score points with people who share their politics. In this case she’s being used as a proxy to discredit the social justice movement, because that’s what she uses to justify her identity.

                    Abuse is never okay and cases like this require some pretty heavy moderation so that they don’t spiral out of control. But they also require a pretty firm hand so that you don’t end up getting pulled into every crazy ideascape that the internet comes up with. If I was the moderator of this GitHub thread, I would have told her, “Whatever it is you’re trying to express when you say ‘I am a robot,’ the Mastodon [BOT] flag is not the right way to do it.” End of discussion, and if anyone comes around to try to harass her, use the moderator powers liberally so as not to veer off-topic.

                    Then you could get into the actual meat of the discussion at hand, which was things like “If I have a bot that reposts my Twitter onto Mastodon, could that really be said to ‘not represent a person’? Maybe another wording would be better.”

                    In the end she’s just a girl who likes to say she’s a robot on the internet. If that bugs you or confuses you, the nicest thing you can do is take it as just that and ignore her.

                    1. 8

                      I don’t think she was rude to me. She’s just busy with other things and has no obligation to respond to every rando who asks her stuff. I’m thankful she answered me at all. It’s a bit of effort, however slight, to formulate a response for anyone.

                      1. 3

                        I mean, I can kind of see where you’re coming from, but I’d still argue that starting with “You should develop your software in accordance with my unusual worldview”, followed by flippantly refusing to actually explain that worldview when politely asked, is at least not nice.

                        Regardless, that might justify a firm hand, but not harassment, because nothing justifies harassment.

                        1. 2

                          I see this point of view too. But I’m also just some rando on the internet; she doesn’t owe me anything. If someone needed to hear her reasons, that would have been the Mastodon devs. They handled it in a different way, and I think they handled it well, overall.

                          1. 1

                            I’m inclined to agree on that last point, though it’s hard to say for sure given all the deleted comments.

                            And I do hope she can work through whatever she’s going through.

                    2. 4

                      I don’t know, personally, anyone who identifies as a robot, but I do know a bunch of people who identify as cyborgs. Some of it’s transhumanist stuff – embedding sensors under the skin, that sort of thing. But much of it is reframing of stuff we don’t think of that way: artificial limbs, pacemakers, etc, but also reliance on smartphones, google glass or similar, and other devices.

                      From that standpoint, robot doesn’t seem a stretch at all.

                      That said, I agree that the feature wasn’t intended to be (and shouldn’t be) a badge. But someone did submit a PR to make the wording more neutral and inclusive, and that was accepted (#7507), and I think that’s a positive thing.

                      1. 2

                        Actually, that rewording even seems clearer to me, regardless of whether someone calls themself a robot or not. “Not a person” sounds a bit ambiguous, because you can totally mechanically turk any bot account at any time, or the account could be a mirror of a real person’s tweets or something.

                      2. 1

                        That’s unfortunate. It’s always difficult to deal with these things. I, too, understood transhumanism to be more of a future thing, but apparently at least some people interpret it differently. Thanks for following up where I was too lazy!

                      3. -6

                        American ‘snowflake’ phenomenon. The offendee believes that the rest of the world must fully and immediately capitulate to whatever pronoun they decided to apply to themselves that week, and anything other than complete and unquestioning deference is blatant whatever-ism.

                        1. 16

                          Person in question is Brazilian, but don’t let easily checked facts get in the way of your narrative.

                          1. -5

                            Thanks for the clarification. Ugh, the phenomenon is spreading. I hope it’s not contagious. Should we shut down Madagascar? :-D

                            1. 3

                              TBH I think it’s just what happens when you connect a lot of people who speak your language to the internet, and the USA had more people connected than elsewhere.

                              1. 0

                                It definitely takes a lot of people to make a world. To paraphrase Garcia, “what a long strange trip it will be”.

                          2. 3

                            She says “she” is a fine pronoun for her.

                      4. 1

                        It’s wonderful. :)

                      5. 3

                        What is happening there? I can’t tell if this is satire or reality.

                        1. 2

                          That’s pretty common with Mastodon; there’s an acrid effluence that tinges the air for hours after it leaves the room. That smell’s name? Never saying no to anyone.

                          1. 12

                            Seems “never saying no to anyone” has also been happening to Lobsters’ invite system :(

                            People here on lobsters used to post links to content they endorse, learn something from, and want to share in a positive way. Whatever your motivation was in submitting this story, it apparently wasn’t that…

                            1. 4

                              The person who shared the “good laugh” has been here twice as long as you have.

                              1. 1

                                I’m absolutely not saying you’re wrong, but I’m pretty confident there’s something to be learned here. I may not necessarily know what the lesson is yet, but this is not the first or the last situation of this kind to present itself in software development writ large.

                        1. 2

                          Oh, man, improved partition removal plus updates actually moving rows between partitions are really big and important things. I’m excited to see postgres really mature into the bigger-data world.

                          1. 18

                            I love postgres (I’m a postgres DBA), and really dislike mysql (due to a long story involving a patch-level release causing server crashes and data loss).

                            That said, there is still a technical reason to choose mysql over postgres. Mysql’s replication story is still significantly better than postgres’. Multi-master, in particular, is something that’s relatively straightforward in mysql, but which requires third-party extensions and much more fiddling in postgres.

                            Now, postgres has been catching up on this front. Notably, the addition of logical replication over the last couple major versions really expands the options available. There’s a possibility that this feature will even be part of postgres 11, coming out this year (it’s on a roadmap). But until it does, it’s a significant feature missing from postgres that other RDBMSes have.
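
                            For the curious, the one-way version of this really is just a couple of statements in PG 10. A minimal sketch in Python via psycopg2 – the hosts and credentials are made up, and it assumes the schema already exists on both ends:

                                import psycopg2

                                # Publisher side (all names here are hypothetical).
                                pub = psycopg2.connect("host=pub.example.com dbname=app user=admin")
                                pub.autocommit = True
                                with pub.cursor() as cur:
                                    cur.execute("CREATE PUBLICATION app_pub FOR ALL TABLES;")

                                # Subscriber side: it pulls changes through a replication slot.
                                sub = psycopg2.connect("host=sub.example.com dbname=app user=admin")
                                sub.autocommit = True  # CREATE SUBSCRIPTION can't run in a transaction
                                with sub.cursor() as cur:
                                    cur.execute(
                                        "CREATE SUBSCRIPTION app_sub "
                                        "CONNECTION 'host=pub.example.com dbname=app user=replicator' "
                                        "PUBLICATION app_pub;"
                                    )

                            That’s still one-directional, though; the multi-master part is exactly what’s missing.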

                            1. 7

                              There’s a possibility that this feature will even be part of postgres 11

                              PG 11 has been in feature freeze since April. I don’t think anything significant for multi-master was committed before that.

                              1. 3

                                Good point. I’d seen the feature freeze deadline, but wasn’t sure if it had actually happened, and what had made it in (I haven’t followed the -hackers mailing list for a while). I was mostly speculating based on the fact that they’d announced a multi-master beta for last fall.

                                I’m not surprised it’s taking a long time – it’s a hard problem – but it means that “clustering” is going to be a weak point for postgres for a while longer.

                              2. 3

                                Once you take all the other potential issues and difficulties with MySQL into account, though, surely Postgres is a better choice on balance, even with a more difficult replication setup?

                                1. 5

                                  It really depends. If you need horizontally-scalable write performance, and it’s important enough to sacrifice other features, then a mysql cluster is still going to do that better than postgres. It’s possible that a nosql solution might fit better than mysql, but overall that’s a decision that I can’t make for you.

                                  I’ll add that there are bits of postgres administration that aren’t intuitive. Specifically, bloat of on-disk table size (and associated slowdowns) under certain loads can really confuse people. If you can’t afford to have a DBA, or at least a dev who’s a DB expert, mysql can be very attractive. I’m not saying that’s a good reason to choose it, but I understand why some people do.
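
                                  On the bloat point: a rough first check (and I’d hedge that it only surfaces one kind of bloat – index bloat needs different queries) is the dead-tuple counters in pg_stat_user_tables. Something like this, with a hypothetical connection string:

                                      import psycopg2

                                      # Tables with lots of dead tuples relative to live ones are the
                                      # usual suspects; from there it's autovacuum tuning, VACUUM, etc.
                                      conn = psycopg2.connect("host=db.example.com dbname=app user=admin")
                                      with conn.cursor() as cur:
                                          cur.execute("""
                                              SELECT relname, n_live_tup, n_dead_tup
                                                FROM pg_stat_user_tables
                                               WHERE n_dead_tup > 10000
                                               ORDER BY n_dead_tup DESC
                                               LIMIT 20;
                                          """)
                                          for relname, live, dead in cur.fetchall():
                                              print(f"{relname}: {live} live, {dead} dead")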

                                  1. 1

                                    What are your thoughts on MySQL vs MariaDB, especially the newer versions?

                                    1. 3

                                      Honestly, I haven’t looked closely at MariaDB lately. The last time I did was just to compare json datatypes – at the time, both mysql and mariadb were just storing json as parsed/verified text blobs without notable additional functionality.

                                      I have to assume it’s better than mysql at things like stability, data safety, and other boring-but-necessary features. That’s mostly because mysql sets such a low bar, though, that it would take effort to make it worse.

                                    2. 1

                                      You clearly know more about databases than me, but I would question the idea that MySQL is a good choice when you lack a DB expert. If anything, that is exactly when you shouldn’t use it. I still carry scars from issues caused by such a lack of expertise at one of my previous employers.

                                1. 2

                                  I’ve been looking for a tutorial like this for years!

                                  1. 8

                                    Scrum may be the illness itself, or more likely an expression of a deeper problem that may be inextricable. Either way, it has pushed more experienced, excellent developers out of the industry than I care to count.

                                    Software is the only industry in the world where people above IQ 120 with 5 years of experience (even well above 120, and well above 5 years) have to justify days and weeks– hell, sometimes even hours– of their own working time. No one capable will tolerate that if she can get something else. Scrum is a great way to end up with the Dead Sea Effect.

                                    I call Scrum the Whisky Goggles of programming. It turns the unemployable 3’s into marginally employable 5’s, but the 6+ see you as a foolish, dangerous drunk and want nothing to do with you.

                                    I also doubt that it will change. It works well enough to get to the next round of funding. Hiring massive armies of mediocre programmers has become “the way it’s done”. Now it’s reminiscent of “Nobody ever got fired for buying IBM.”

                                    Those of us in the top ~5 percent, we had our time. It seems to be over, at least in this industry. Some of us will reinvent ourselves as “data scientists” and hope to escape the hell of burndown story points and product “individuals”; some of us will go back to graduate school and either escape into academia or vie for the few remaining R&D jobs that Scrum can’t touch; many of us will just leave and do something else.

                                    1. 4

                                      I’m going to let out a big secret about Scrum that I probably shouldn’t. I’m going to quote one of your recent posts on Medium, because I think it’s relevant.

                                      The one thing that universally turns up when people investigate workplace violence is… not race, not mental health, not disputes with co-workers or management, but exactly that type of “performance management”, which is the euphemism for this sort of psychotic, evil surveillance capitalism. It literally drives people nuts.

                                      Where are the managers in Scrum? The answer: there aren’t any. Scrum is sold to businesses as a technique to exponentially increase productivity. However, this is a lie – a foot-in-the-door technique. The reason the other 95% of IT workers like it is that Scrum is a worker emancipatory movement. No more managers, no more long hours, no more abuse, no more performance management.

                                      Scrum is not meant to protect the gifted 5% of engineers who work in privileged environments, who never deal with the kind of abuse the other 95% of IT workers have to. It was never designed to make life easier for those who already have an easy life. Scrum isn’t meant for the already privileged 5%.

                                      You mention high IQ like it’s a good thing. In the words of Alan Kay, high IQ is a lead weight; your viewpoint is everything.

                                      1. 6

                                        Software is the only industry in the world where people above IQ 120 with 5 years of experience (even well above 120, and well above 5 years) have to justify days and weeks– hell, sometimes even hours– of their own working time

                                        That’s … not true at all. If you look at jobs done by actual engineers, you’ll see that they routinely have to provide time estimates. In order to make those time estimates, they have to estimate their work to the day and week, even if those detailed timelines aren’t provided to their clients. Even newly-licensed engineers already have 4 or more years of experience, since work requirements are part of the license process. Most people who are engineers have high IQs, because engineering disciplines are complicated technical fields with nontrivial education requirements. And they all must pass an exam on engineering ethics.

                                        And they provide time, labor, and cost estimates to clients.

                                        Which is much like what we do in software development. We put together estimates in as detailed a way as we can, to provide an overall plan. We tell our customers the overall plan, and we track against our detailed plan so we know if we have to tell the customer the plans have changed.

                                        Actual engineers are currently far better as an industry at providing those types of estimates. If we ever want to mature as an industry, we must improve our ability to estimate our work.

                                        Claiming that we’re above the need to estimate because we’re smart and we have experience misunderstands the need. Being smart and having experience should make us more comfortable and confident providing estimates, rather than less comfortable even being asked.

                                      1. 4

                                        The distribution of programming talent is likely normal, but what about their output?

                                        The ‘10X programmer’ is relatively common, maybe 1 standard deviation from the median? And you don’t have to get very far to the left of the curve to find people who are 0.1X or -1.0X programmers.

                                        Still a good article! I think this confusion is the smallest part of what he’s trying to say.

                                        1. 6

                                          That’s an interesting backdoor you tried to open to sneak the 10x programmer back into not being a myth.

                                          1. 6

                                             They exist, though, so a model that excludes them is broken front and center. The more accurate position is that most people aren’t 10x’ers, and don’t even need to be, that I can tell. Team players with consistency are more valuable in the long run; that should be the majority, with some strong technical talent sprinkled in.

                                            1. 3

                                               Is there evidence to support that? As you know, measuring programmer productivity is notoriously difficult, and I haven’t seen any studies confirming the 10x difference. I agree with @SeanTAllen: it’s more like an instance of the hero myth.

                                              EDIT: here are some interesting comments by a guy who researched the literature on the subject: https://medium.com/make-better-software/the-10x-programmer-and-other-myths-61f3b314ad39

                                              1. 5

                                                 Just think back to school or college, where people got the same training. Some seemed natural at the stuff, running circles around others for whatever reason, right? And some people score way higher than others on parts of math, CompSci, or IQ tests while seemingly not even trying, compared to those who put in much effort only to underperform.

                                                 People who are super-high performers from the start exist. If they and the others study equally, the gap might shrink or widen – and if you want strong generalists, it should widen, since they’re better at the foundational skills and thinking style. I don’t know if the 10 applies (probably not). But the concept of gifted folks making easy work of problems most others struggle with is something I’ve seen a ton of in real life.

                                                 The more accurate question would be: why would they not exist in programming when they exist in everything else?

                                                1. 0

                                                   There’s no question that there is a difference in intellectual ability. However, I think that it’s highly questionable that it translates into 10x (or whatever-x) differences in productivity.

                                                  Partly it’s because only a small portion of programming is about raw intellectual power. A lot of it is just grinding through documentation and integration issues.

                                                  Partly it’s because there are complex interactions with other people that constrain a person. Simple example: at one of my jobs people complained a lot about C++ templates because they couldn’t understand them.

                                                  Finally, it’s also because the domain a person applies themselves to places other constraints. Can’t get too clever if you have to stay within the confines of a web framework, for example.

                                                  I guess there are specific contexts where high productivity could be realised: one person creating something from scratch, or a group of highly talented people who work well together. But those would be exceptional situations, while under the vast majority of circumstances it’s counterproductive to expect or hope for 10x productivity from anyone.

                                                  1. 2

                                                     I agree with all of that. I think the multipliers kick in on particular tasks, which may or may not produce a net benefit overall given conflicting requirements. Your case of one person being too clever with some code for others to read illustrates that.

                                                    1. 3

                                                       I think the 10x is often realized by just understanding the requirements better. For example, maybe the two-week solution isn’t really necessary, because the 40 lines you can write in an afternoon are all the requirement really calls for.

                                                    2. 2

                                                       There’s no question that there is a difference in intellectual ability. However, I think that it’s highly questionable that it translates into 10x (or whatever-x) differences in productivity.

                                                       It does not simply depend on how you measure; it depends on what you measure.

                                                       And it may be more than “raw intellectual power”. For me it’s usually experience.

                                                       As a passionate programmer, I’ve faced more problems and more bugs than my colleagues.
                                                       So it often happens that I solve in minutes problems that they have struggled with for hours (or even days).
                                                       This has two side effects:

                                                       • managers tend to assign me the worst issues
                                                       • colleagues tend to ask me when they can’t find a solution

                                                       Both of these force me to face more problems and bugs… and so on.

                                                       Such experience also makes me well versed in the architectural design of large applications: I’m usually able to avoid issues and to predict with high precision the time required for a task.

                                                       However, measuring overall productivity is another thing:

                                                       • I can literally forget what I did yesterday morning (if it was for a different customer than the one I’m focused on now)
                                                       • at times I’m unable to recognize my own code (with funny effects when I insult or laud it)
                                                       • when focused, I do not hear people talking to me
                                                       • I ignore 95% of the mails I receive (literally all those with multiple recipients)
                                                       • being very good at identifying issues during early analysis at times makes some colleagues a bit upset
                                                       • being very good at estimating large projects means that when you compare my estimate with others’, mine is usually higher (at times a lot higher) because I see most costs upfront. This usually leads to long and boring meetings where nobody wants to take the responsibility for adopting the (apparently) more expensive solution, but nobody wants to take the risk of the alternative ones either…
                                                       • debating with me tends to become an enormous waste of time…

                                                       So when it’s a matter of solving problems by programming, I approach the 10x productivity of the myth despite not being particularly intelligent; but overall, it really depends on the environment.

                                                      1. 1

                                                         This is a good exposition of what a 10x-er might be, and it jibes with my thoughts. Some developers can “do the hard stuff” with little or no guidance. Some developers just can’t, no matter how much coaching and guidance are provided.

                                                         For illustration, I base this on one tenure I had as a team lead, where the team worked on some “algorithmically complex” tasks. I had on my team people who were hired on and excelled at the work. I had other developers who struggled. Most got up to an adequate level eventually (6 months or so). One in particular never did. I worked with this person for a year, teaching and guiding, and they just didn’t get it. This particular developer was good at other things, though, like troubleshooting and interfacing with customers in more of a support role. But the ones who flew kept on flying. They owned it, knew it inside and out.

                                                         It’s odd to me that anyone disputes the fact that there are more capable developers out there. Sure, “productivity” is one measure, and not a good proxy for ability; I personally don’t equate 10x with being productive, as that clearly makes no sense. Also, I think Fred Brooks’ The Mythical Man-Month is the authoritative source on this. I never see it cited in these discussions.

                                                  2. 2

                                                    There may not be any 10x developers, but I’m increasingly convinced that there are many 0x (or maybe epsilon-x) developers.

                                                    1. 3

                                                      I used to think that, but I’m no longer sure. I’ve seen multiple instances of what I considered absolutely horrible programmers taking the helm, and I fully expected those businesses to fold in a short period of time as a result - but they didn’t! From my point of view, it’s horrible -10x code, but for the business owner, it’s just fine because the business keeps going and features get added. So how do we even measure success or failure, let alone assign quantifiers like 0x?

                                                      1. 1

                                                         Oh, I don’t mean code quality, I mean productivity. I know some devs who can work on the same simple task for weeks, miss the deadline, and move on to a different task that they also don’t finish.

                                                        Even if the code they wrote was amazing, they don’t ship enough progress to be of much help.

                                                        1. 1

                                                          That’s interesting. I’ve encountered developers who were slow but not ones who would produce nothing at all.

                                                          1. 4

                                                            I’ve encountered it, though it was unrelated to their skill. Depressive episodes, for example, can really block someone. So can burnout, or outside stresses.

                                                            Perhaps there are devs who cannot ship code at all, but I’ve only encountered unshipping devs that were in a bad state.

                                                        2. 1

                                                           You’re defining programming ability by whether a business succeeds, though. There are plenty of other instances where programming is not done for the sake of business.

                                                          1. 1

                                                            That’s true. But my point is that it makes no sense to assign quantifiers to programmer output without actually being able to measure it. In business, you could at least use financials as a proxy measure (obviously not a great one).

                                                      2. 1

                                                        Anecdotally, I’m routinely stunned by how productive maintainers of open source frameworks can be. They’re certainly many times more productive than I am. (Maybe that just means I’m a 0.1x programmer, though!)

                                                        1. 1

                                                          I’m sure that’s the case sometimes. But are they productive because they have more sense of agency? Because they don’t have to deal with office politics? Because they just really enjoy working on it (as opposed to a day job)? There are so many possible reasons. Makes it hard to establish how and what to measure to determine productivity.

                                                    2. 3

                                                      I don’t get why people feel the need to pretend talent is a myth or that 10x programmers are a myth. It’s way more than 10x. I don’t get why so many obviously talented people need to pretend they’re mediocre.

                                                       edit: does anyone do this in any other field? Do people deny Einstein, Mozart, Michelangelo, Shakespeare, or Newton? LeBron James?

                                                      1. 4

                                                         Deny what, exactly? That LeBron James exists? What is LeBron James a 10x of? Is that athlete? Basketball player? What is the scale here?

                                                         A 10x programmer? I’ve never met one. I know people who are very productive within their area of expertise. I’ve never met someone who I can drop into any area and, boom, they are 10x more productive – and if you say “10x programmer”, that’s what you are saying.

                                                         This of course presumes that we can manage to define what the scale is. We can’t, as an industry, even define what productive means. Is it lines of code? Story points completed? Features shipped?

                                                        1. 2

                                                           Context is a huge factor in productivity. It’s not fair to subtract it out.

                                                           I bet you’re a lot more than 10X better than I am at working on Pony… by any metric you want. I haven’t written much C since college; I bet you’re more than 10X better than me in any C project.

                                                           You were coding before I was born, and as far as I can tell are near the top of your field. I’ve been coding most of my life, and I’m good at it, but the difference is there. I know enough to be able to read your code and tell that you’re significantly more skilled than I am. I bet you’re only a factor of 2 or 3 better at general programming than I am. (Here I am boasting.)

                                                           In my areas of expertise, I could win some of that back and probably (but I’m not so sure) outperform you. I’ve only been learning strategies for handling concurrency for 4 years? Every program (certainly every program with a user interface) has to deal with concurrency; your skill in that sub-domain alone could outweigh my familiarity in any environment.

                                                           There are tons of programmers out there who cannot deal with any amount of concurrency at all in their most familiar environment. There are bugs that they will encounter which they cannot possibly fix until they remedy that deficiency, and that’s one piece of a larger puzzle. I know that the right support structure of more experienced engineers (and tooling) can solve this; I don’t think that kind of support is the norm in the industry.

                                                           If we could test our programming aptitudes as we popped out of the womb, all bets would be off. This makes me think that “10X programmer” is ill-defined? Maybe we’re not talking about the same thing at all.

                                                          1. 2

                                                             No, I agree with you. Context is important, as is having a scale. All the conversations I see are “10x exists”, with no accounting for context and no defined scale.

                                                        2. 2

                                                           While I’m not very familiar with composers, I can tell you that basketball players (LeBron included) can and do have measurements. Newton created fundamental laws and integral theories; Shakespeare’s works continue to be read.

                                                           We do acknowledge the groundbreaking work of folks like Dennis Ritchie, Ken Iverson, Alan Kay, and other computing pioneers, but I doubt “Alice 10xer” at a tech startup will have her work influence software engineers hundreds of years later. So, barring that sort of influence, there are not enough metrics or studies to show that one engineer is 10x more productive than another in anything.

                                                      2. 3

                                                        The ‘10X programmer’ is relatively common, maybe 1 standard deviation from the median? And you don’t have to get very far to the left of the curve to find people who are 0.1X or -1.0X programmers.

                                                        So, it’s fairly complicated because people who will be 10X in one context are 1X or even -1X in others. This is why programming has so many tech wars, e.g. about programming languages and methodologies. Everyone’s trying to change the context to one where they are the top performers.

                                                        There are also feedback loops in this game. Become known as a high performer, and you get new-code projects where you can achieve 200 LoC per day. Be seen as a “regular” programmer, and you do thankless maintenance where one ticket takes three days.

                                                        I’ve been a 10X programmer, and I’ve been less-than-10X. I didn’t regress; the context changed out of my favor. Developers scale badly and most multi-developer projects have a trailblazer and N-1 followers. Even if the talent levels are equal, a power-law distribution of contributions (or perceived contributions) will emerge.

                                                        1. 1

                                                         I’m glad you acknowledge that there’s room for a 10X-or-more gap in productivity. It surprises me how many people claim that there is no difference in productivity among developers. (Why bother practicing and reading blog posts? It won’t make you better!)

                                                         I’m more interested in exactly what it takes to turn a median (1X by definition) developer into an exceptional one.

                                                         I don’t buy the trailblazer-and-N-1-followers argument, because I’ve witnessed massive success (by any metric) cleaning up the non-functioning, non-requirements-meeting (but potentially marketable!) untested messes that an unskilled “trailblazer” leaves in their (slowly moving) wake. Do you think it’s all context, or are there other forces at work?

                                                      1. 3

                                                        My understanding is that LLVM supports far more platforms as a target than Rust does. Is that not the case?

                                                        1. 1

                                                          Of course it does, but since you can compile Rust to LLVM, it doesn’t matter.

                                                          1. 0

                                                            Rust compiles to LLVM, so by extension it inherits some of that support. See here for a full list of platforms: https://forge.rust-lang.org/platform-support.html

                                                            1. 1

                                                      … sure, but if you’re compiling to Rust as an intermediate to LLVM as an intermediate to platform binary, it’s not clear what the Rust step is gaining you. Unless your new language is highly semantically compatible with Rust in the first place, but in that case it might be better implemented as a series of Rust macros.

                                                              Edit: After reading the rest of the comments more thoroughly, I realize that I’m just inching towards the same arguments that are made better by others.

                                                          1. 2

                                                    What I’m most interested in knowing is: what is Reddit written in now, and what specific business or technical problems made them switch away from Lisp? Reddit is a very popular website, and if Lisp was not in fact used to get it to where it is today, that says something about how we ought to evaluate Lisp as a language.

                                                            1. 4

                                                              what is Reddit written in now,

                                                              They moved from lisp to python, though I don’t know if it’s still in python.

                                                              and what specific business or technical problems made them switch away from Lisp?

                                                      They posted a lengthy blog post at the time about why they ported away from lisp. Unfortunately, it looks like that post is gone. There’s a bunch of discussions still around, though, along with some other evidence of the lisp community’s response.

                                                              Short version: it was technical. Libraries didn’t exist or weren’t sufficient, lisp implementations didn’t work cross-platform, so development was painful, and they continued to have difficult-to-debug slowdowns and site crashes. Stuff like that.

                                                              Of course, this was over a decade ago, so I strongly suspect the lisp situation has changed.

                                                              1. 3

                                                                Of course, this was over a decade ago, so I strongly suspect the lisp situation has changed.

                                                        It’s currently powering Grammarly (circa 2015), so it sounds like it has.

                                                                One of the common complaints about Lisp that there are no libraries in the ecosystem. As you see, 5 libraries are used just in this example for such things as encoding, compression, getting Unix time, and socket connections.

                                                                1. 1

                                                                  Looks like they’re not actually using it for a webapp, but instead for a back-end process. That’s a different use case than reddit had.

                                                                  Also, I note that they’re still using two different lisp implementations, one for production, and one for local development. That was a big issue for reddit at the time. I wonder how much energy goes into implementation difference issues.

                                                              2. 2

                                                                Lisp still powers Hacker News, afaik.

                                                                1. 1

                                                                  I thought HN was arc, Paul Graham’s take on scheme?

                                                                  1. 1

                                                                    Indeed, that’s my understanding. I should have been more clear, by “Lisp” I was talking about the Lisp family of languages, not Common Lisp specifically.

                                                                2. 1

                                                          Core is still Python; newer components are being written in Node.js. PostgreSQL was historically the main datastore, with Cassandra serving a secondary role, but data is being re-homed to Cassandra due to scale.

                                                                1. 5

                                                      It’s nice of them to release the source code (I remember the kerfuffle when they ported to python), but, wow, the lack of documentation hurts. I remember being in the “lisp is so readable it doesn’t need comments” camp, and I may have been wrong.

                                                                  Honestly, though, a place to start might be enough. IIRC, they were using CMUCL and Hunchentoot. Anybody know/remember the build process for those?

                                                                  1. 5

                                                        I think that camp was wrong because understandability often goes down as complexity and power go up. LISP is super-powerful, with people applying that power in many complex ways – especially in the difference between what you see and what’s going on underneath macros. So, I’d say it needs more documentation, or source control, if you’re aiming for easily approachable or predictable codebases.

                                                                    1. 5

                                                                      It used ASDF to build (see the .asd file).

                                                                      1. 3

                                                            At the risk of being snarky: those are actively maintained projects, and you can simply check their respective websites for how to install and use them.

                                                                        Hunchentoot is in quicklisp and installs easily with (ql:quickload :hunchentoot). The SBCL fork of CMUCL is more widely used than CMUCL, and can be installed with “apt-get install sbcl”, or pre-built binaries for 5 or 6 platforms can be downloaded from their website.

                                                                        1. 1

                                                                          The SBCL fork of CMUCL is more widely used than CMUCL

                                                              Sure, but SBCL forked, like, twenty years ago, and this code is from more like 10-15 years back. That is, I’m sure they were aware of SBCL, but I’m pretty sure they chose CMUCL instead.

                                                              And, yes, if I’m certain they’re using CMUCL and Hunchentoot, and am familiar with both tools, and am familiar with ASDF, and know I need quicklisp, and am also familiar with that, then I suspect I wouldn’t have too much trouble scraping together a build method that might work. However, I haven’t been in the lisp ecosystem for years, and don’t know for certain which lisp and which web framework they were using.

                                                                          Looking at the asd file, it seems they were using TBNL, which predates Hunchentoot (in name). Will it build with Hunchentoot? Not sure – how API-compatible is modern Hunchentoot with that old version of TBNL? For that matter, what version of TBNL was used? Are we certain this code can build at all?

                                                                          A note saying “you need the following versions of the following things to build this: x,y,z, …” would be useful if they don’t have the resources to put together a modern README document.

                                                                          Also more useful than the current lack of documentation would be a note saying: “this has not been built in years, we don’t know the needed library versions, and it’s not clear whether this can be built at all, but we wanted to provide the source anyway”.

                                                                          So, I appreciate the fact that some of this is discoverable, but 1- it’s only discoverable if you’re already an up-to-date lisp hacker, 2- even then it might not be fully discoverable, and 3- standards for documentation have changed over the past couple decades, and this doesn’t even meet the standards of many years ago.

                                                                        2. 2

                                                            Eh… the whole platform used to be open source, but they closed it a while back. They claimed it was simply too difficult to run the entire thing and there was no point in keeping it OSS.

                                                            I dunno… after Reddit started banning tons of communities, removed their warrant canary, and their CEO was caught editing comments, I dismissed the entire platform. I rarely use it now; maybe for really specific communities.

                                                                        1. 26

                                                                          If you want to impress me, set up a system at your company that will reimage a box within 48 hours of someone logging in as root and/or doing something privileged with sudo (or its local equivalent). If you can do that and make it stick, it will keep randos from leaving experiments on boxes which persist for months (or years…) and make things unnecessarily interesting for others down the road.

                                                          Man, yes. At a previous company I set the whole company up on immutable deployments. Part of this was that you could still log in and change stuff, but doing so marked the box as “tainted” and it would be terminated and replaced after 24hrs. This let you log in, fix a breakage and go back to bed … but it made sure that porting the fix back to the config management tool was the #1 task for the morning.

                                                                          A second policy was no machine existed for more than 90 days.

                                                          These two policies instilled in us a hard-line attitude of “if it isn’t managed, it isn’t real” and were resoundingly successful in pushing us to solid deployment mechanisms which worked and survived instances being replaced regularly.
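
                                                          The mechanics were nothing fancy, either. A toy sketch of the reaper logic – the inventory format and names are hypothetical, and the actual termination is left to your cloud provider’s API:

                                                              import time

                                                              TAINT_TTL = 24 * 60 * 60     # a tainted box lives one more day
                                                              MAX_AGE = 90 * 24 * 60 * 60  # hard cap on any instance's age

                                                              def find_doomed(instances, now=None):
                                                                  # Each instance dict has 'launched_at', plus a 'tainted_at'
                                                                  # timestamp set by a login hook if someone changed the box.
                                                                  now = now or time.time()
                                                                  doomed = []
                                                                  for inst in instances:
                                                                      too_old = now - inst["launched_at"] > MAX_AGE
                                                                      tainted_at = inst.get("tainted_at")
                                                                      stale = tainted_at is not None and now - tainted_at > TAINT_TTL
                                                                      if too_old or stale:
                                                                          doomed.append(inst["id"])
                                                                  return doomed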

                                                                          I can’t recommend this approach enough. Thank you Rachel, for writing about this.

                                                                          1. 8

                                                                            A second policy was no machine existed for more than 90 days.

                                                            I’m curious how you managed the stateful machines (assuming you had some). I’m a DBA, and, well, I often find myself pointing out to our sysads that stateful stuff is just harder to manage (and to maintain uptime for) than stateless stuff. Did you just exercise the failover mechanism automatically? How did that work downstream?

                                                                            1. 7

                                                              Great catch! Our MySQL database cluster was excluded from the rule because of the inherent challenges of making that work; however, our caching and ElasticSearch clusters were not. Caching because it is a cache, ElasticSearch because its replication and failure handling are batteries-included. Note this was with a not-enormous amount of data; if our data grew to $lots, we would likely stop giving ES the same treatment.

                                                                              We worked hard to architect our systems in such a way that data was not on random machines, but in very specific places.

                                                                              1. 5

                                                                                Ah, good, okay. That makes more sense.

                                                                                Currently we’re in a private cloud, so nothing’s batteries-included. Plus we’re using a virtual storage system in a way that would make traditional replica/failover structures too expensive. The result is our production DB VMs go for a very long time between reboots, let alone rebuilds.

                                                                                I agree, though, that isolation is a great way to limit that impact. Combine that with some decent data-purpose division (e.g. move the sessions out of the DB into a redis store that can be rebuilt, move the reporting data to a separate DB so we can flip between replicas during reboots, etc), and you can really cut down on the SPOFs.

                                                                            2. 1

                                                                              I’ve been in 2 different orgs where they reimaged the machine as soon as each user logged out!

                                                                              1. 1

                                                                                Aggressive! I wonder if there were escape hatches for emergencies?

                                                                                1. 1

                                                                                  What sort of emergencies are you envisioning?

                                                                            1. 5

                                                                              I can’t remember who said it, but “C is the universal assembly language.” If you want your library usable in as many environments as possible, C is the way to go. Essentially every higher-level language has a built-in FFI with C.
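
                                                              And the ceremony involved is usually tiny. Python’s stdlib ctypes, for example, only needs the C signature declared – a sketch assuming a typical Linux libm:

                                                                  import ctypes

                                                                  # Load the C math library and describe cos()'s C signature.
                                                                  libm = ctypes.CDLL("libm.so.6")  # "libm.dylib" on macOS, etc.
                                                                  libm.cos.argtypes = [ctypes.c_double]
                                                                  libm.cos.restype = ctypes.c_double

                                                                  print(libm.cos(0.0))  # -> 1.0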

                                                                              1. 10

                                                                                That has more to do with Unix becoming universal than with C the language itself. It’s like the idea that “C has no runtime” – it’s more accurate to say C’s runtime is included in the operating system runtime for most/all modern operating systems.

                                                                              1. 5

                                                                                I’d heard of Redox, but not this effort. Anybody know any other rust-based OS efforts out there?

                                                                                1. 5

                                                                                  http://intermezzos.github.io/ is my project, similar to this one. It’s been dormant for a while; hoping to get back to it soon though!

                                                                                  Redox is the only serious attempt that I’m aware of. Hobby/toy kernels are legion, however.

                                                                                  1. 2

                                                                                    Nice. The teaching niche is one of those places where there can never be enough – there are as many different ways to learn something as there are people – so I’m glad you’re populating it more.

                                                                                    1. 2

                                                                                      Thanks! I agree 100%, and that’s the idea. Phil’s tutorial was what got me back into it, and we share the same base, but are going different places in different orders with different code. Hopefully we can get many more!

                                                                                1. 3

                                                                                  to this day I’m surprised that Postgres cannot be upgraded without downtime. I guess there’s maintenance windows, but it feels like so many DBs out there have uptime requirements

                                                                   EDIT: I don’t want to be too whiny about this; Postgres is cool and has a lot of stuff. I guess it’s mostly the webdev in me thinking “well yeah, of course I need 100% uptime” that made me expect DBs to handle this case. But I guess the project predates these sorts of expectations.

                                                                                  1. 1

                                                                                    I don’t disagree… but just to be clear:

                                                                                    Minor versions (i.e. bug fixes) don’t really need any downtime: you just replace the binaries and restart (e.g. from 9.4.6 -> 9.4.7).

                                                                                    Major versions (9.4 -> 9.5) do need a dump/restore of the database, which is annoying. You can avoid this almost completely now with logical replication, which is included with PG 10 (before that version it was available as a module back to PG 9.4, I think).
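
                                                                                    Here’s a rough sketch of what the logical-replication path can look like, with hypothetical hostnames/DSNs and PG 10+ on both ends, using the psycopg2 driver purely for illustration (note the schema has to be copied separately, e.g. with pg_dump --schema-only, since logical replication doesn’t replicate DDL):

                                                                                        import psycopg2

                                                                                        # On the old server: publish every table (PG 10+).
                                                                                        old = psycopg2.connect("host=old-db dbname=app")  # hypothetical DSN
                                                                                        old.autocommit = True
                                                                                        old.cursor().execute("CREATE PUBLICATION upgrade_pub FOR ALL TABLES")

                                                                                        # On the new server (schema already loaded): subscribe to the publication.
                                                                                        # CREATE SUBSCRIPTION can't run inside a transaction block, hence autocommit.
                                                                                        new = psycopg2.connect("host=new-db dbname=app")  # hypothetical DSN
                                                                                        new.autocommit = True
                                                                                        new.cursor().execute(
                                                                                            "CREATE SUBSCRIPTION upgrade_sub "
                                                                                            "CONNECTION 'host=old-db dbname=app' "
                                                                                            "PUBLICATION upgrade_pub"
                                                                                        )
                                                                                        # Once the new server catches up, repoint clients and retire the old one.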

                                                                                    1. 2

                                                                                      Ah thanks for the information, super helpful! Previously, when reading up on upgrading PG I got the impression I couldn’t do this on major versions.

                                                                                      1. 1

                                                                                        See: https://www.2ndquadrant.com/en/resources/pglogical/ (it’s one of the use cases).

                                                                                      2. 2

                                                                                        Major versions (9.4 -> 9.5) do need a dump/restore of the database

                                                                                        pg_upgrade has been available and part of the official codebase since 9.0 (7ish years). It’s still not perfect, but it’s been irreplaceable for me when migrating large (45+TB) databases.

                                                                                        1. 1

                                                                                          True, I had forgotten. I’ve been using PG since the 8.x days. pg_upgrade didn’t work for me from 9.0 -> 9.1 (or thereabouts, definitely near the beginning of pg_upgrade’s existence) and I haven’t tried it since. I should probably try it again and see if it works better for us!

                                                                                        2. 2

                                                                                          There have also been numerous logical replication tools (Slony for example) that allowed upgrades without downtime since at least around 8.0, but probably earlier.

                                                                                      1. 4

                                                                                        I think the question shouldn’t be “are there common purely interpreted languages”, but “are there common pure interpreters for common languages”. I’m sure there have been pure interpreters for Ruby, Python, Perl, JS, all of them! The reason they get replaced is speed, and I’m not really sure why you’d prefer a 100x-1000x slowdown for direct AST interpretation. Maybe for the dynamism?
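
                                                                                        (To make “direct AST interpretation” concrete, here’s a toy sketch, not any particular implementation: a tree-walking evaluator that re-traverses the tree on every evaluation, which is exactly where the slowdown comes from.)

                                                                                            # Toy AST: a node is either a number or a tuple (op, lhs, rhs).
                                                                                            def eval_ast(node):
                                                                                                if isinstance(node, (int, float)):
                                                                                                    return node
                                                                                                op, lhs, rhs = node
                                                                                                if op == "+":
                                                                                                    return eval_ast(lhs) + eval_ast(rhs)
                                                                                                if op == "*":
                                                                                                    return eval_ast(lhs) * eval_ast(rhs)
                                                                                                raise ValueError("unknown operator: %r" % op)

                                                                                            # (1 + 2) * 4; every call walks the tree again, no compilation step.
                                                                                            print(eval_ast(("*", ("+", 1, 2), 4)))  # prints 12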

                                                                                        1. 2

                                                                                          Came here to say something similar. Pure interpreters are a step in (one method of doing) language development, but there’s nothing that says they’re the last step. If a language becomes popular, and performance (runtime, memory, or otherwise) becomes a pain point, you can more or less guarantee a rewrite of the interpreter into a bytecode compiler/executor, a JIT compiler, or a straight compiler.

                                                                                          I’m also curious to know why the author wants to find a purely interpreted language. If it’s AST dynamism, that’s something that Lisp macros take care of handily, while still being able to compile at least somewhat. Perhaps as a learning exercise?
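
                                                                                          (CPython itself illustrates the bytecode step: functions are compiled to bytecode at definition time, and the standard dis module will show you the instructions the VM actually executes.)

                                                                                              import dis

                                                                                              def f(x):
                                                                                                  return x * 2 + 1

                                                                                              # Prints the compiled bytecode (LOAD_FAST, arithmetic ops, RETURN_VALUE, ...);
                                                                                              # the exact opcode names vary between CPython versions.
                                                                                              dis.dis(f)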

                                                                                        1. 9

                                                                                          I know some devs who only code at work and are fine. But if there were two identical candidates (a myth, I know) except one had side projects, guess who I pick?

                                                                                          1. 7

                                                                                            I’m not sure it’d be an easy choice for me, though a lot depends on how you resolve that bit about the “identical candidates”. To really generalize, my interactions with people who have side projects are that they learn a bunch of stuff outside of work that could be useful for work, but often would prefer to be doing those side projects too, so might be less focused and/or prone to bikeshedding. I include myself in the 2nd category, fwiw: I learn a lot of things “on my own time” but I’m not necessarily the world’s best employee if you just want to hire someone to produce code.

                                                                                            If you had people with identical starting technical skill, but one had side projects, my no-other-information guess might even be that the person without side projects would be a more productive employee initially. It’s also probably true that they’d be less likely to keep up to date and/or proactively recommend new things unless there was an explicit framework in place to make that happen on work time. But I’m not sure that’s obviously worse in terms of what a company is looking for out of an employee.

                                                                                            1. 10

                                                                                              If they have identical technical skill and only one has technical side projects, the other is obviously more talented, because they picked up identical technical skills without spending out-of-work time on it.

                                                                                            2. 3

                                                                                              The one that had hobbies that improved their ability to communicate and work in a team? Maybe even to give and receive constructive criticism, and to compromise?

                                                                                              That can be satisfied by coding projects, sure. If they’re, for example, participating in an open source project by actively participating in the mailing list or forum, and managing tickets or incoming patches. A solo side-project is the opposite of this, though. Anything where the candidate is spending their time being the sole person making decisions and in control won’t help them with teamwork. If they’re not going through code and architecture reviews, there’s an excellent chance it won’t help them be better coders, either.

                                                                                              On the other hand, board gaming, team sports, playing D&D, or any number of things will help candidates with the stuff that will make them really productive employees. The kind that isn’t just an additive part of your team, but a potential multiplicative part.

                                                                                              1. 1

                                                                                                If they’re not going through code and architecture reviews, there’s an excellent chance it won’t help them be better coders, either.

                                                                                                I don’t think this is true at all. Sure, it is a whole lot easier to improve when you have an experienced mentor pointing out ways that you can do better. But there are plenty of ways to advance as a programmer without someone else coaching you. Reading quality books and source code written by programmers that are better than yourself is a great way to fill that gap; and arguably something you should be doing even if you have a mentor. At the end of the day programming is no different than any other skill, the key to improving is practicing purposefully, practicing routinely, and taking plenty of time to reflect on that practice. If you’re not willing to do those things you’re not going to be very good even if you have someone telling you how to improve.

                                                                                                1. 3

                                                                                                  Sure. It’s even possible to improve all on your own, without books or mentors, as long as you’re consistently pushing yourself out of your comfort zone, consistently failing, and consistently reflecting on your experiences and what weaknesses are best addressed next.

                                                                                                  But that’s remarkably hard. Solo projects are great at getting you more familiar with a language, or more familiar with some specific libraries, but they’re just not the right tool if you want to improve your craft.

                                                                                                  If you want to learn how to play violin, then sure, you can try buying one, trying to play, and never performing. Reading an introductory text might help a bit. But it’s going to be much faster and better to learn from someone who knows how to play the violin, to perform so you’re confronted with feedback from disinterested parties, and to go back to the drawing board and repeat the process. You can improve at chess by reading books, but if you’re not playing games your progress will be slow. If you’re only playing games against people of similar or lesser skill than you, you’re unlikely to learn much at all.

                                                                                                  Having teammates or other people who are better than you, and who are willing to thoughtfully critique your work and suggest improvements, is the most tried-and-true method of improving your skill at something.

                                                                                                  Failure and feedback are the best tools we have. And they’re usually not provided by solo projects.

                                                                                                  1. 1

                                                                                                    Oh yeah, it’s a million times harder to go at it alone. And I suppose any solo project that would provide a good platform for improving, like writing an open source framework/library or building a complex application that makes it into production and has users, will eventually become collaborative. Because once you have users you’ve got to write some sort of documentation for them and they’re going to be telling you about all the issues they run into and all of the improvements they want made.

                                                                                            1. 22

                                                                                              This article is great except for No 3: learning how hardware works. C will teach you how PDP-11 hardware works with some extensions, but not modern hardware. They have different models. The article then mentions computer architecture and assembly are things they teach students. Those plus online articles with examples on specific topics will teach the hardware. So, they’re already doing the right thing even if maybe saying the wrong thing in No. 3.

                                                                                              Maybe one other modification: there are quite a lot of tools, esp. reimplementations or clones, written in non-C languages. The trend started getting big with Java and .NET, with things like Rust and Go now making more waves. There’s also a tendency for languages to be written in themselves. I bring it up because even the Python example isn’t true if you use a Python written in Python, one of the recent interpreter tutorials in Go, or something like that. You can benefit from understanding the implementation language and/or debugger of whatever you’re using in some situations. That’s not always C, though.

                                                                                              1. 14

                                                                                                Agreed. I’ll add that even C’s status as a lingua franca is largely due to the omnipresence of unix, unix-derived, and posix-influenced operating systems. That is, understanding C is still necessary to, for example, link non-ruby extensions to ruby code. That wouldn’t be the case if VMS had ended up dominant, or lisp machines.

                                                                                                In that way, C is important to study for historical context. Personally, I’d try to find a series of exercises to demonstrate how different current computer architecture is from what C assumes, and use that as a jumping-off point to discuss how relevant C’s semantic model is today, and what tradeoffs were made. That could spin out either to designing a language which maps to today’s hardware more completely and correctly, or to discussions of modern optimizing compilers and how far abstracted a language can become and still compile to efficient code.

                                                                                                A final note: no language “helps you think like a computer”. Our rich history shows that we teach computers how to think, and there’s remarkable flexibility there. Even at the low levels of memory, we’ve seen binary, ternary, binary-coded-decimal, and I’m sure other approaches, all within the first couple decades of computers’ existence. Phrasing it as the original author did implies a limited understanding of what computers can do.

                                                                                                1. 8

                                                                                                  C will teach you how PDP-11 hardware works with some extensions, but not modern hardware. They have different models.

                                                                                                  I keep hearing this meme, but PDP-11 hardware is similar enough to modern hardware in every way that C exposes, except, arguably, for NUMA and inter-processor effects.

                                                                                                  1. 10

                                                                                                    You just countered it yourself with that exception, given the prevalence of multicores and multiprocessors. Then there are cache hierarchies, SIMD, maybe alignment differences (my memory is fuzzy), the effects of security features, and so on.

                                                                                                    They’d be better off just reading about modern computer hardware and ways of using it properly.

                                                                                                    1. 6

                                                                                                      Given that none of these are represented directly in assembly, would you also say that the assembly model is a poor fit for modeling modern hardware?

                                                                                                      I mean, it’s a good argument to make, but the attempts to make assembly model the hardware more closely seem to be vaporware so far.

                                                                                                      1. 6

                                                                                                        Hmm. They’re represented more directly than in C, given there’s no translation to be done to the ISA. Some, like SIMD, atomics, etc., will be actual instructions on specific architectures. So, I’d say learning hardware and ASM is still better than learning C if you want to know what the resulting ASM is doing on that hardware. I’m leaning toward yes.

                                                                                                        There is some discrepancy between assembly and hardware on highly-complex architectures, though. RISCs and microcontrollers will have less.

                                                                                                    2. 1

                                                                                                      Not helped by the C/Unix paradigm switching us from “feature-rich interconnected systems” like in the 1960s to “fast, dumb, and cheap” CPUs of today.

                                                                                                    3. 2

                                                                                                      I really don’t see how C is supposed to teach me how PDP-11 hardware works. C is my primary programming language and I have nearly no knowledge about PDP-11, so I don’t see what you mean. The way I see it is that the C standard is just a contract between language implementors and language users; it has no assumptions about the hardware. The C abstract machine is sufficiently abstract to implement it as a software-level interpreter.

                                                                                                      1. 1

                                                                                                        As in this video of its history, the C language was designed specifically for the hardware it ran on, due to that hardware’s extremely limited resources. It was based heavily on BCPL, which invented “the programmer is in control” and was essentially whatever features of ALGOL could compile on another limited machine, the EDSAC. Even being byte-oriented versus word-oriented was due to the PDP-7 being byte-oriented, versus the EDSAC, which allowed word orientation. After a lot of software was written in it, two things happened:

                                                                                                        (a) Specific hardware implementations tried to be compatible with it in their stack or memory models, so that programs written for C’s abstract machine would go fast. Although possibly good for PDP-11-style hardware, this compatibility meant many missed opportunities for both safety/security and optimization as hardware improved. These things, though, are what you might learn about hardware by studying C.

                                                                                                        (b) Hardware vendors competing with each other on performance, concurrency, energy usage, and security both extended their architectures and made them more heterogeneous than before. The C model didn’t just diverge from these: new languages were invented (esp. in HPC) so programmers could easily use the new features via a mental model closer to what the hardware does. The default, though, was hand-coded assembly called from C or Fortran apps. Yes, HPC often used Fortran, since its model gave better performance than C’s on numerical applications, even on hardware designed for C’s abstract machine. Even though the C model was easy on the hardware, it introduced too much uncertainty about programmers’ intent for compilers to optimize those routines.

                                                                                                        For this reason, it’s better to just study hardware to learn hardware, plus the various languages that are either designed for maximum use of that hardware or that the hardware itself is designed for. The C language is an option for the latter.

                                                                                                        “It has no assumptions about the hardware”

                                                                                                        It assumes the hardware will give people direct control over pointers and memory, in ways that can break programs. Recent work tries to fix the damage that came from keeping the PDP-11 model all this time. There were also languages that handled pointers and memory safely by default unless told otherwise, using overflow or bounds checks. SPARK eliminated those checks for most of its code, with the compiler substituting pointers in where it’s safe to do so. It’s also harder in general to make C programs enforce POLA (the principle of least authority) with hardware or OS mechanisms, versus a language where that’s generated for you, or one with true macros to hide the boilerplate.

                                                                                                        “The C abstract machine is sufficiently abstract to implement it as a software-level interpreter.”

                                                                                                        You can implement any piece of hardware as a software-level interpreter. It’s just slower. Simulation is also a standard part of hardware development. I don’t think whether it can be interpreted matters. Question is: how much does it match what people are doing with hardware vs just studying hardware, assembly for that hardware, or other languages designed for that hardware?

                                                                                                        1. 3

                                                                                                          I admit that the history of C and also history of implementations of C do give some insight into computers and how they’ve evolved into what we have now. I do agree that hardware, operating systems and the language have been all evolving at the same time and have made impact on each other. That’s not what I’m disagreeing with.

                                                                                                          I don’t see a hint of proof that knowledge about the C programming language (as defined by its current standard) gives you any knowledge about any kind of hardware. In other words, I don’t believe you can learn anything practical about hardware just from learning C.

                                                                                                          To extend what I’ve already said, the C abstract machine is sufficiently abstract to implement it as a software interpreter, and it matters, since it proves that C draws clear boundaries between expected behavior and implementation details, which include how a certain piece of hardware might behave. It does impose constraints on all compliant implementations, but that tells you nothing about what “runs under the hood” when you run things on your computer; an implementation might be a typical, bare-bones PC, or a simulated piece of hardware, or a human brain. So the fact that one can simulate hardware is not relevant to the fact that you still can’t draw practical assumptions about its behavior just from knowing C. The C abstract machine is neither hardware nor software.

                                                                                                          Question is: how much does it match what people are doing with hardware vs just studying hardware, assembly for that hardware, or other languages designed for that hardware?

                                                                                                          What people do with hardware is directly related to knowledge about that particular piece of hardware, the language implementation they’re using, and so on. That doesn’t prove that C helps you understand that or any other piece of hardware. For example, people do study assembly generated by their gcc running on Linux to think about what their Intel CPU will do, but that kind of knowledge doesn’t come from knowing C - it comes from observing and analyzing behavior of that particular implementation directly and behavior of that particular piece of hardware indirectly (since modern compilers have to have knowledge about it, to some extent). The most you can do is try and determine whether the generated code is in accordance with the chosen standard.

                                                                                                          1. 1

                                                                                                            In that case, it seems we mostly agree about its connection to learning hardware. Thanks for elaborating.

                                                                                                    1. 2

                                                                                                      I’ve heard a great deal of buzz and praise for this editor. I’ve got a couple decades’ experience with my current editor – is it good enough to warrant considering a switch?

                                                                                                      1. 3

                                                                                                        What do you love about your current editor?

                                                                                                        What do you dislike about it?

                                                                                                        What are the things your editor needs to provide that you aren’t willing to compromise on?

                                                                                                        1. 2

                                                                                                          It probably isn’t, but it’s maybe worth playing around with, just to see how it compares. It’s definitely the best-behaved Electron app I’ve ever seen. It doesn’t compete with the Emacs operating-system configurations, but it does compete with the likes of TextMate, Sublime, and the other smaller code editors. It has vi bindings (via a plugin) that are actually pretty good (and can use neovim under the hood!). I still don’t understand Microsoft’s motivation for writing this thing, but it’s nice that they dedicate a talented team to it.

                                                                                                          It’s very much still a work in progress, but it’s definitely usable.

                                                                                                          1. 3

                                                                                                            Here’s the story of how it was created[1]. It’s a nice, technical interview. However, the most important thing about this editor is that it marked an interesting shift in Microsoft’s culture. It appears to be the single most widely used open-source product originating from MS.

                                                                                                            https://changelog.com/podcast/277

                                                                                                            1. 1

                                                                                                              Thanks for linking that show.

                                                                                                          2. 2

                                                                                                            It’s worth a try. It’s pretty good. I went from vim to vscode mostly due to windows support issues. I often switch between operating systems, so having a portable editor matters.

                                                                                                            1. 1

                                                                                                              It’s a pretty decent editor, worth trying out. I’ve personally given up on it because it’s just too slow :| The only scenario in which I tolerate slowness is a heavyweight IDE (e.g., the IntelliJ family). For simple editing I’d rather check out Sublime (it’s not gratis, but it’s pretty fast).

                                                                                                              1. 1

                                                                                                                It doesn’t have to be a hard switch. I, for example, switch between vim and VS Code depending on the language and task. And if there is some Java or Kotlin to code, then I will use IntelliJ IDEA, simply because it feels like the best tool for the job. I see the text editors I use as tools in my toolbelt: you won’t drive in a screw with a hammer, will you?

                                                                                                                1. 1

                                                                                                                  I do a similar thing. I’ve found Emacs unbearable for Java (the best solution I’ve seen is eclim, which literally runs Eclipse in the background), so I use IntelliJ for that.

                                                                                                                  For python, emacs isn’t quite as bad as it is with java, but I’ve found pycharm to be much better.

                                                                                                                  Emacs really wins out at pretty much anything else, especially C/C++ and Lisps.

                                                                                                                  1. 1

                                                                                                                    VS Code has a very nice python module (i.e. good autocomplete and debugger), the author of which has been hired by MS to work on it full time. Not quite PyCharm-level yet but worth checking out if you’re using Code for other stuff.

                                                                                                                1. 3

                                                                                                                  Also in response to the article, two different people wrote color-identifiers-mode and rainbow-identifiers-mode for emacs.

                                                                                                                1. 2

                                                                                                                  There was a period of time when I was all about OCaml. I appreciate the purity of Caml and Standard ML more, though, for whatever reason, just from an aesthetic standpoint (the object-orientedness of OCaml just seems shoehorned in to me).

                                                                                                                  The sad truth, though, is that in my limited time I have to focus on the stuff I need for work. The only languages I’m truly fluent in anymore are C, Python, SQL, Bourne shell, and…I guess that’s it. I can get around in Lua if I need to, but I haven’t written more than a dozen lines of code in another language in at least five years.

                                                                                                                  (That’s not to say I don’t love C, Python, SQL, and Bourne shell, because I do.)

                                                                                                                  I’ve been messing around with Prolog (the first language I ever had a crush on) again just for fun, but I’m worried I’m going to have to put it down because of the aforementioned time issue. Maybe I can start writing some projects at work in ML. :)

                                                                                                                  1. 7

                                                                                                                    SML is probably my favorite language. It’s compact enough that you can keep the whole language (and Basis libraries) in your head fairly easily (compared to, say, Haskell, which is a sprawling language). I find strict execution much easier to reason about than lazy, but the functional-by-default nature remains very appealing.

                                                                                                                    Basically, it’s in a good sweet spot of languages for me.

                                                                                                                    But, it’s also a dead language. There is a community, but it’s either largely disengaged (busy writing other languages for work), or students who have high engagement but short lifespans. There are a few libraries out there, and some are good but rarely/never updated, and some are not good and rarely/never updated.

                                                                                                                    I still think it’s a great language to learn, because (as lmmm says) being fluent in SML will make you a better programmer elsewhere. Just know that there aren’t many active resources out there to help you actually write projects and whatnot.

                                                                                                                    1. 2

                                                                                                                      Everything that you said, plus one thing: Standard ML, unlike Haskell or OCaml, realistically allows you to prove things about programs — actual programs, not informally described algorithms that programs allegedly implement. Moreover, this doesn’t need any fancy tools like automatic theorem provers or proof assistants — all you need is simple proof techniques that you learn in an undergraduate course in discrete mathematics and/or data structures and algorithms.

                                                                                                                      1. 3

                                                                                                                        Absolutely. I think the niche for languages with a formal specification is fairly small, but it is irreplaceable in that niche.

                                                                                                                        1. 1

                                                                                                                          Just out of curiosity, do you have any reading recommendations on formal proofs for ML programs?

                                                                                                                          1. 3

                                                                                                                            Let me be upfront: when I said “prove” in my previous comment, I didn’t mean “fully formally prove”. The sheer amount of tedious but unenlightening detail contained in a fully formal proof makes this approach prohibitively expensive without mechanical aid. Formal logic does not (and probably cannot) make a distinction between “key ideas” and “routine detail”, which is essential for writing proofs that actually help human beings understand.

                                                                                                                            With that being said, I found Bob Harper’s notes very helpful to get started, especially Section IV, “Programming Techniques”. It is also important to read The Definition of Standard ML at some point to get an idea of the scope of the language’s design, because that tells you what you can or can’t prove about SML programs. For example, the Definition doesn’t mention concurrency except in an appendix with historical commentary. Consequently, to prove things about SML programs that use concurrency, you need a formalization of the specifics of the SML implementation you happen to be using (which, to the best of my knowledge, no existing SML implementation provides).
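
                                                                                                                            To give a feel for the kind of proof I mean, here’s the classic example (my own illustration, not something specific from Harper’s notes): showing that length (xs @ ys) = length xs + length ys, by structural induction on xs, using nothing but the defining equations of @ and length.

                                                                                                                                Base case, xs = []:
                                                                                                                                    length ([] @ ys) = length ys                  (definition of @)
                                                                                                                                                     = 0 + length ys
                                                                                                                                                     = length [] + length ys      (definition of length)

                                                                                                                                Inductive step, xs = x :: xs':
                                                                                                                                    length ((x :: xs') @ ys)
                                                                                                                                        = length (x :: (xs' @ ys))                (definition of @)
                                                                                                                                        = 1 + length (xs' @ ys)                   (definition of length)
                                                                                                                                        = 1 + (length xs' + length ys)            (induction hypothesis)
                                                                                                                                        = length (x :: xs') + length ys           (definition of length)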

                                                                                                                      2. 3

                                                                                                                        OCaml is yet another mainstream-aiming language full of dirty compromises and even outright design mistakes:

                                                                                                                        • The types of strict lists, trees, etc. are not really inductive, due to OCaml’s permissiveness w.r.t. what can go on the right-hand side of a let rec definition. (For example, let rec ones = 1 :: ones is accepted, so a value of type int list need not be a finite list.)
                                                                                                                        • It has an annoying Common Lisp-like distinction between “shallow” (==) and “deep” (=) equality.
                                                                                                                        • Moreover, either kind of equality can be used to violate type abstraction.
                                                                                                                        • Mutation is hardwired into several different language constructs (records, objects), rather than provided as a single abstract data type as it well should be.
                                                                                                                        • Applicative functors with impure bodies are leaky abstractions.
                                                                                                                        1. 3

                                                                                                                          Many of the complaints about OCaml here are justified in a way; I use it in my day job, so I’ve run into a number of these issues myself. It is a complex language, especially the module language.

                                                                                                                          the object-orientedness of OCaml just seems shoehorned in to me

                                                                                                                          I think that’s a commonly repeated myth; OCaml OOP is not really like Java’s. Objects are structural, which gives them a quite interesting spin compared to traditional nominal systems; classes are more like templates for objects; and the object system is, in my opinion, not more shoehorned in than polymorphic variants (unless you consider those shoehorned as well).

                                                                                                                          1. 4

                                                                                                                            …OCaml…(I use it in my day job)

                                                                                                                            So how’s working at Jane Street? :)

                                                                                                                            Objects are structural which gives it a quite interesting spin compared to traditional nominal systems…

                                                                                                                            Oh no, I get that. It’s a matter of having object-oriented constructs at all. It’s like C++ which is procedural and object-oriented, and generic, and functional, and and and. I like my languages single-paradigm, dang it! (I know it’s a silly objection, but I’m sometimes too much of a purist.)

                                                                                                                          2. 1

                                                                                                                            I work full-time in Scala, and I credit Paulson with teaching many of the foundations that make me effective in that language. Indeed even when working in Python, my code was greatly improved by my ML experience.

                                                                                                                            1. 1

                                                                                                                              How is Scala? I feel like there would be a significant impedance mismatch between the Java standard libraries, with their heavy object-orientation, and Scala with its (from what I understand) functional style.

                                                                                                                              I think it would also bug me that the vast majority of the documentation for my language’s libraries would be written for another language (that is, I need to know how to use something in Scala, but the documentation is all Java).

                                                                                                                              1. 2

                                                                                                                                How is Scala?

                                                                                                                                It’s really nice. More expressive than Python, safer than anything else one could get a job writing.

                                                                                                                                I feel like there would be a significant impedance mismatch between the Java standard libraries, with their heavy object-orientation, and Scala with its (from what I understand) functional style.

                                                                                                                                There’s a mismatch but there are libraries at every point along the path, so it gives you a way to get gradually from A to B while remaining productive.

                                                                                                                                I think it would also bug me that the vast majority of the documentation for my language’s libraries would be written for another language (that is, I need to know how to use something in Scala, but the documentation is all Java).

                                                                                                                                Nowadays there are pure-Scala libraries for most things, you only occasionally have to fall back to the Java “FFI”. It made for a clever way to bootstrap the language, but is mostly unnecessary now.

                                                                                                                                1. 1

                                                                                                                                  Very informative, thank you.