I’d like to provide a more sympathetic outside perspective.
There are a few common complaints about Elm and the Elm community:
With regards to blocking discussion, I think the logic is something like this:
I would prefer that the discussions weren’t removed or locked, but on the other hand, it’s got to be grating to deal with the same entitled, uninformed or complaining comments all the time. I’ve read most of these discussions, and other than people venting, nothing is ever achieved in them. My reflexive reaction is to be uncomfortable (like a lot of other people) but then, there is also a certain clarity when people just say that they will not engage in a discussion.
With regards to insufficient communication, I think the main thing to understand is that Elm is an experiment in doing things differently, and it’s causing a clash with conventional understanding. Elm is about getting off the upgrade treadmill. So, for example, when a new release like Elm 0.19 comes out, it happens without public alpha and beta phases, and it’s not actually the point where you go and immediately migrate your production code to it! It’s only the point to start experimenting with it, the point where library and tool authors can upgrade, and so on. (There was quite a bit of activity prior to release anyway, it just wasn’t advertised publicly.)
Finally, the most contentious example of a “feature” getting removed is the so-called native modules (which basically means the ability to have impure functions written in JS in your Elm code base). As far as I can tell (having followed Elm since 0.16), native modules were always an internal implementation detail and their use was never encouraged. Nevertheless, some people started using them as a shortcut anyway. However, they were a barrier to enabling function-level dead code elimination, which is the main feature of the 0.19 release, so the loophole was finally closed. Sure, it’s inconvenient for people who used them, but does anyone complain when, say, Apple removes an internal API?
Ultimately, Elm is just an open source project and the core maintainers don’t really owe anybody anything - no contracts are entered into and no funds are exchanged. They can do whatever they want.
Of course, there is a question of the long term effects this approach is going to have on the community. Will it alienate too many people and cause Elm to wither? Will Elm remain a niche language for a narrow class of applications? That remains to be seen.
but on the other hand, it’s got to be grating to deal with the same entitled, uninformed or complaining comments all the time.
Over the years, I have come to believe this is a vital part of building a community. Using draconian tactics to stomp out annoying comments is using power unwisely and worse yet – cripples your community in multiple ways.
The first thing to remember is that when a comment (entitled, uninformed or otherwise) comes up repeatedly – that is a failure of the community to provide a resource to answer/counter/assist with that comment. That resource can be a meme, a link, an image, a FAQ, a full-on detailed spec document, whatever. This kind of thing is part of how a community gets a personality. I think a lot of the reason there are so many dead Discourse servers for projects is overly stringent policing. You should have a place for people to goof off, and you have to let the community self-police and become a real community. Not entirely, obviously, but on relevant topics.
This constant repetition of questions/comments is healthy and normal; it is the entrance of new people into the community. More importantly, it gives people who are just slightly deeper in the community someone to help, someone to police, someone to create resources for, even to a degree someone to mock (reminding them they aren’t THAT green anymore) – a way to be useful! This is a way to encourage growth: each “generation” of people helps the one that comes after them, and it is VITAL for building up a healthy community. In a healthy community the elders will only wade in occasionally to set the tone, and will focus on the more high-minded, reusable solutions that move the project forward. Let the minor stuff be handled by the minor players – let them shine!
Beyond being vital to building the community – it is a signal of where newcomers are hurting. Now if documentation fixes the problem, or a meme… terrific! But if it doesn’t, and if it persists… that is a pain point to look at – that is a metric – that is worth knowing.
Yeah, each one of these people gives you a chance to improve how well you communicate and to strengthen your message. But shutting down those voices runs the risk of surrounding yourself with ‘yes people’ who don’t challenge your preconceptions. Now, it’s entirely up to the Elm people how they handle this, but I think they are going to find it harder to go mainstream with this style of community.
Note that I’m perfectly fine with blocking and sidelining people who violate a CoC, or are posting hurtful, nonconstructive comments. You do have to tread a fine line in your moderation though. Being overly zealous in ‘controlling the message’ can backfire in unpredictable ways.
Anyway, I continue to follow Elm because I think the designer has some excellent ideas and approaches, even if I do disagree with some of the ways the community is managed.
even if I do disagree with some of the ways the community is managed.
I don’t think the two jobs (managing the community and managing the project) should necessarily be done by the same person. I actually think it probably shouldn’t. Each job is phenomenally challenging on its own – trying to do both is too much.
But, they do it on his behalf? This policy of locking and shutting down discussions comes from somewhere. That person directly or indirectly is the person who “manages” the community, the person who sets the policies/tone around such things.
I personally have no idea, I am not active in the Elm community.
I’ll add the perspective of someone who loved Elm and will never touch it again. We’re rewriting in PureScript right now :) I’m happy I learned Elm, it was a nice way of doing things while it lasted.
In Elm you may eventually hit a case where you can’t easily wrap your functionality in ports, the alternative to native modules. We did, many times. The response on the forum and other places is often to shut down your message, to give you a partial version of that functionality that isn’t quite what you need, to tell you to wait until that functionality is ready in Elm (a schedule that might be years!), or until recently to point you at native modules. This isn’t very nice. It’s actually very curious how nice the Elm community is unless you’re talking about this feature, in which case it feels pretty hostile. But that’s how open source rolls.
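For anyone who hasn’t touched ports: they are one-way, asynchronous message channels between Elm and JavaScript. Here’s a rough sketch in plain JavaScript of the request/response dance they impose – the port names (`requestItem`, `receiveItem`) and the storage lookup are invented for illustration, and the Elm runtime’s plumbing is simulated with a tiny stand-in:

```javascript
// Minimal stand-in for the Elm runtime's port objects: a port is just
// a list of subscribers that a send() call fans out to.
function makePort() {
  const handlers = [];
  return {
    subscribe(fn) { handlers.push(fn); },
    send(value) { handlers.forEach((fn) => fn(value)); },
  };
}

const ports = { requestItem: makePort(), receiveItem: makePort() };

// JS side: answer each incoming request by sending a message back.
const fakeStorage = { theme: "dark" };
ports.requestItem.subscribe((key) => {
  ports.receiveItem.send(fakeStorage[key] ?? null);
});

// "Elm side" (simulated): the value arrives later as a message,
// never as a return value -- this is the wiring overhead that a plain
// function call (or a native module) never needed.
let received = null;
ports.receiveItem.subscribe((value) => { received = value; });
ports.requestItem.send("theme");
console.log(received); // "dark"
```

In real Elm the ports are declared in a `port module` and hooked up via `app.ports.…subscribe`/`send`; the point is just that even a trivially synchronous capability has to be split into an outgoing request and an incoming reply, which is where the friction comes from.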
Look at the response to the message linked in the story: “We recently used a custom element to replace a native module dealing with inputs in production at NoRedInk. I can’t link to it because it’s in a proprietary code base but I’ll be writing and speaking about it over the next couple months.”
This is great! But I can’t wait months in the hope that someone will talk about a solution to a problem I have today. Never mind releasing one.
Many people did not see native modules as a shortcut or a secret internal API. They were an escape valve. You would hit something that was impossible without efforts so large they would make you give up on Elm as not viable. Then you would overcome the issue using native modules, which many people in the community made clear was the only alternative. Now, after you’ve invested that effort, you’re told that there’s actually no way to work around any of these issues without “doing them the right way”, which turns out to be so complicated that companies keep the solutions proprietary. :(
I feel like many people are negative about this change because it was part of how Elm was sold to people. “We’re not there yet, but here, if we’re falling short in any way you can rely on this thing. So keep using Elm.”
That being said, it feels like people are treating this like an apocalypse, probably because they got emotionally invested in something they like and they feel like it’s being changed in a way that excludes them.
You’re right though. Maybe in the long term this will help the language. Maybe it will not. Some people will enjoy the change because it does lead to a cleaner ecosystem and it will push people to develop libraries to round out missing functionality. In the short term, I have to get things done. The two perspectives often aren’t compatible.
I’m personally more worried about what will happen with the next major change where Elm decides to jettison part of its community. I don’t want to be around for that.
If people encouraged you to use native modules, then that was unfortunate.
I’m not sure I understand the issue with custom elements. Sure, they’re a bit complicated and half-baked, but it certainly doesn’t require a research lab to use them (in fact, I’ve just implemented one now).
I would agree, however, that the Elm developers have a bit of a hardline approach to backward compatibility. Perhaps there is a misunderstanding around the state of Elm - ie whether it’s still an experiment that can break compatibility or a stable system that shouldn’t.
I’m not sure how I feel about backward compatibility. As a user, it’s very convenient. As a developer, it’s so easy to drown in the resulting complexity.
I would prefer that the discussions weren’t removed or locked, but on the other hand, it’s got to be grating to deal with the same entitled, uninformed or complaining comments all the time. I’ve read most of these discussions, and other than people venting, nothing is ever achieved in them. My reflexive reaction is to be uncomfortable (like a lot of other people) but then, there is also a certain clarity when people just say that they will not engage in a discussion.
I’ll go one further and say I’m quite glad those discussions get locked. Once the core team has made a decision, there’s no point in having angry developers fill the communication channels the community uses with unproductive venting. I like the decisions the core team is making, and if those threads didn’t get locked, I’d feel semi-obligated to respond and say that I’m in favor of the decision, or I’d feel guilty not supporting the core devs because I have other obligations. I’m glad I don’t have to wade through that stuff. FWIW, it seems like the community is really good at saying “We’re not going to re-hash this decision a million times, but if you create a thread about a specific problem you’re trying to solve, we’ll help you find an approach that works” and they follow through on that.
I don’t have a lot of sympathy for folks who are unhappy with the removal of the ability to compile packages that include direct JS bindings to the Elm runtime. For as long as I’ve been using Elm, the messaging around that has consistently been that it’s not supported, it’s just an accidental side effect of how things are built, and you shouldn’t do it or you’re going to have a bad time. Now it’s broken and they’re having a bad time. This should not be a surprise. I also think it’s a good decision to actively prohibit it. If people started using that approach widely, it would cause a lot of headaches for the community and hamstring the core team’s ability to evolve the language.
I’m quite glad those discussions get locked
and
I like the decisions the core team is making
Do you believe your perspective would change if you didn’t agree with the developers’ decisions? Obviously I have a different perspective, but I am curious whether you think you would still hold this view if you were on the other side.
Additionally, just because the core team has “made a decision” doesn’t mean it wasn’t a mistake, nor that it is permanent. Software projects make mistakes all the time, and sometimes the only way to really realize the mistake is to hear the howls of your users.
I’m pretty confident I wouldn’t change my position on this if I wasn’t in agreement with the core team’s choices. I might switch to PureScript or ReasonML if I thought the trade-offs were worth it, but I can’t see myself continuing to complain/vent after the decision has been made. I think appropriate user input is “I have this specific case, here’s what the code looks like, here’s the specific challenge with any suggested alternative.” If the core team decides to go another way after seeing those use cases, it’s clear we don’t have the same perspective on the trade-offs for those decisions. I can live with that. I don’t expect everybody to share my opinion on every single technical decision.
As an example, I use Clojure extensively at work, and I very much disagree with Rich Hickey’s opinions about type systems, but it’s pretty clear he’s thought through his position and random folks on the internet screaming differently isn’t going to change it, it’ll just make his job more difficult. I can’t imagine ever wanting to do that to someone.
sometimes the only way to really realize the mistake is to hear the howls of your users
It’s been my experience that the folks who can provide helpful feedback about mistaken technical decisions rarely howl. They can usually speak pretty clearly about how decisions impact their work and are able to move on when it’s clear there’s a fundamental difference in goals.
It’s been my experience that the folks who can provide helpful feedback about mistaken technical decisions rarely howl.
We fundamentally disagree on this point (and the value of the noisy new users), and I don’t think either of us is going to convince the other. So, I think this is a classic case of agree to disagree.
I think what bothers me the most about the core team’s approach to features is not that they keep removing them, but that for some they do not provide a valid alternative.
They’ll take away the current implementation of native modules, but coming up with a replacement is too hard, so even though the core libraries can use native code, us peasants will have to do without.
They won’t add a mechanism for higher rank polymorphism because coming up with a good way to do it is hard, so even though the base library has a few magic typeclasses for its convenience, us peasants will have to make do with mountains of almost duplicated code and maybe some code generation tool.
So where does that leave Elm right now? Should it be considered a production-ready tool just by virtue of not having very frequent releases? Or should it be regarded as an incomplete toy language, because of all the breaking changes between releases, all the things that haven’t been figured out yet, and how the response to requests for ways to do things that are necessary in real code is either “you don’t need that”, which I can live with most of the time, or “deal with it for the moment”, which is unacceptable.
I think Elm should make it clearer that it’s still an unfinished project.
They’ll take away the current implementation of native modules, but coming up with a replacement is too hard
They won’t add a mechanism for higher rank polymorphism because coming up with a good way to do it is hard
I don’t think this is a fair characterization of the core team’s reasons for not supporting those features. I’ve read/watched/listened to a lot of the posts/videos/podcasts where Evan and other folks discuss these issues, and I don’t think I’ve ever heard anyone say “We can’t do it because it’s too difficult.” There’s almost always a pretty clear position about the trade-offs and motivations behind those decisions. You might not agree with those motivations, or weigh the trade-offs the same way, but it’s disingenuous to characterize them as “it’s too hard”.
I exaggerate in my comment, but what I understood from the discussions around rank n polymorphism I’ve followed is basically that Evan doesn’t think any of the existing solutions fit Elm.
I understand that language design, especially involving more complex features like this, is a hard problem, and I’m sure Evan and the core team have thought long and hard about this and have good reasons for not having a good solution yet. But the problem remains that hard things are hard, and in the meantime the compiler can take an escape hatch while the users cannot.
Should it be considered a production-ready tool just by virtue of not having very frequent releases? Or should it be regarded as an incomplete toy language
I always struggle with this line of questioning because “incomplete and broken” describes pretty much all of the web platform in the sense that whenever you do non-trivial things, you’re going to run into framework limitations, bugs, browser incompatibilities and so on.
All you can do is evaluate particular technologies in the context of your specific projects. For certain classes of problems, Elm works well and is indeed better than other options. For others, you’ll have to implement workarounds with various degrees of effort. But again, I can say the same thing for any language and framework.
Is it good that it’s so easy to bump up against bugs and limitations? No. But at least Elm is no worse than anything else.
Taking a tangent, the main problem is that Elm is being built on top of the horrifically complex and broken foundation that is the web platform. It’s mostly amazing to me that anything works at all.
To me the problem is that Elm is not conceptually complete. I listed those issues specifically because they’re both things that the compiler and the core libraries can do internally, but the users of the language cannot.
But at least Elm is no worse than anything else.
No, Elm is a language, and not being able to do things in a language with so few metaprogramming capabilities is a pretty big deal compared to a missing feature in a library or a framework, which can easily be added in your own code or worked around.
But how is this different from any other ecosystem? The compiler always has more freedom internally. There are always internal functions that platform APIs can use but your library cannot. Following your logic, we should condemn the Apple core APIs and Windows APIs too.
No, what I meant is that the core libraries use their “blessed” status to solve those problems only for themselves, thus recognizing that those problems effectively exist, but the users aren’t given any way to deal with them.
Ports are very limiting and require much more work to set up than a normal library, and I haven’t used custom elements so I can’t speak for those.
There’s also no workaround for the lack of ad-hoc polymorphism. One of the complaints I hear the most about Elm is that writing json encoders and decoders is tedious and that they quickly become monstrously big and hard to maintain; often the json deserialization modules end up being the biggest modules in an Elm project.
This is clearly a feature the language needs (and already uses with some compiler magic, in the form of comparable, appendable, and so on).
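To make the decoder complaint concrete, here is a hedged sketch in plain JavaScript (not Elm) of the combinator style involved; the names mirror the shape of Elm’s Json.Decode API, but everything here is hand-rolled for illustration. Without ad-hoc polymorphism or deriving, every record type needs its own explicitly assembled decoder like this:

```javascript
// Hand-rolled decoder combinators, mimicking Elm's Json.Decode shape.
const string = (v) => {
  if (typeof v !== "string") throw new TypeError("expected string");
  return v;
};
const number = (v) => {
  if (typeof v !== "number") throw new TypeError("expected number");
  return v;
};
// field: run a decoder against one named property of an object.
const field = (name, decode) => (obj) => decode(obj[name]);
// map2: combine two decoders with a constructor function.
const map2 = (f, d1, d2) => (v) => f(d1(v), d2(v));

// One explicit decoder per record type -- this is the part that grows
// linearly with every field and every type in your data model.
const userDecoder = map2(
  (name, age) => ({ name, age }),
  field("name", string),
  field("age", number)
);

const user = userDecoder(JSON.parse('{"name":"Ada","age":36}'));
console.log(user); // { name: "Ada", age: 36 }
```

In a language with typeclass derivation (or runtime reflection), `userDecoder` could be generated from the type; in Elm each field line is written and maintained by hand, which is why the JSON modules end up being the biggest ones in a project.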
Is it good that it’s so easy to bump up against bugs and limitations? No. But at least Elm is no worse than anything else.
Having worked with ClojureScript on the front-end for the past 3 years, I strongly disagree with this statement. My team has built a number of large applications using Reagent and whenever new versions of ClojureScript or Reagent come out all we’ve had to do was bump up the versions. We haven’t had to rewrite any code to accommodate the language or Reagent updates. My experience is that it’s perfectly possible to build robust and stable tools on top of the web platform despite its shortcomings.
I think the ease of upgrades is a different discussion. There is a tool called elm-upgrade which provides automated code modifications where possible. That’s pretty nice; I haven’t seen a lot of languages with similar assistance.
My point was, you cannot escape the problems of the web platform when building web applications. Does ClojureScript fully insulate you from the web platform while providing all of its functionality? Do you never run into cross-browser issues? Do you never have to interoperate with JavaScript libraries? Genuinely asking - I don’t know anything about ClojureScript.
My experience is that the vast majority of issues I had with the web platform went away when my team started using ClojureScript. We run into cross-browser issues now and then, but it’s not all that common since React and Google Closure do a good job handling cross-browser compatibility. Typically, most of the issues that we run into are CSS related.
We interoperate with JS libraries where it makes sense; however, the interop is generally kept at the edges and wrapped into libraries providing idiomatic, data-driven APIs. For example, we have a widgets library that provides all kinds of controls like date pickers, charts, etc. The API for the library looks similar to our internal widgets API.
Sounds like a great development experience!
Let me clarify my thinking a bit. For a certain class of problems, Elm is like that as well. But it certainly has limitations - not a huge number of libraries etc.
However, I think that pretty much everything web related is like that - limitations are everywhere, and they’re much tighter than I’d like. For example, every time I needed to add a date picker, it was complicated, no matter the language/framework. But perhaps your widgets library has finally solved it - that would be cool!
So I researched Elm and got a feel for its limitations, and then I could apply it (or not) appropriately.
I would agree, however, that the Elm developers have a bit of a hardline approach to backward compatibility. Perhaps there is a misunderstanding around the state of Elm - ie whether it’s still an experiment that can break compatibility or a stable system that shouldn’t.
I’m not sure how I feel about backward compatibility. As a user, it’s very convenient. As a developer, it’s so easy to drown in the resulting complexity.
Yeah, I agree that the main question is around the state of Elm. If the message is that Elm isn’t finished, and that you shouldn’t invest in it unless you’re prepared to spend time keeping up, that’s perfectly fine. However, if people are being sold on a production-ready language that just works, there appears to be a bit of a disconnect.
It’s obviously important to get things right up front, and if something turns out not to work well it’s better to change it before people get attached to it. On the other hand, if you’re a user of a platform then stability is really important. You’re trying to deliver a solution to your customers, and any breaking changes can become a serious cost to your business.
I also think it is important to be pragmatic when it comes to API design. The language should guide you to do things the intended way, but it also needs to accommodate you when you have to do something different. Interop is incredibly important for a young language that’s leveraging a large existing ecosystem, and removing the ability for people to use native modules in their own projects without an alternative is a bit bewildering to me.
I have the opposite experience. Team at day job has some large CLJS projects (also 2-3 years old) on Reagent and Re-Frame. We’re stuck on older versions because we can’t update without breaking things, and by nature of the language it’s hard to change things with much confidence that we aren’t also inadvertently breaking things.
These projects are also far more needlessly complex than their Elm equivalents, and also take far longer to compile so development is a real chore.
Could you explain what specifically breaks things in your project, or what makes it more complex than the Elm equivalent? The Reagent API has had no regressions that I’m aware of, and re-frame had a single breaking change where the original reg-sub was renamed to reg-sub-raw in v0.7, as I recall. I’m also baffled by your point regarding compiling. The way you develop ClojureScript is by having Figwheel or shadow-cljs running in the background, hotloading code as you change it. The changes are reflected instantly as you make them. Pretty much the only time you need to recompile the whole project is when you change dependencies. The projects we have at work are around 50K lines of ClojureScript on average, and we’ve not experienced the problems you’re describing.
I was hoping to read other points of view on this matter, thanks for taking the time to write down yours!
The response from the moderators of the Elm subreddit to this post is perhaps a good example of some of the things described by the author. The characterization of this post as abusive is something that sticks out.
One interesting thing I immediately noticed about that is that it isn’t a single moderator (despite the subreddit having several) – it is a collective, the “elm_mods”. This feels like a shield allowing the mods to be harsher than they might be as individuals, because no individual is ever responsible for the posts from “elm_mods”.
I adopt a simpler approach: I’m Ads-Adverse.
The more you try to sell me something, the less I’m going to buy it.
This came from realizing that useful products do not really need much marketing.
I also learned to spot the manipulations of ads and joke about them, making people realize they do not need the products either.
Finally I’m teaching my daughters to do the same, and it’s incredible how good they are at this!
As for IoT my approach is even simpler: do not buy anything whose software I cannot recompile from sources and reinstall.
This came from realizing that useful products do not really need much marketing.
But they do need some. You have to be aware a product exists. Short of shifting power to an intermediary between you and the product – what do you consider acceptable?
It’s interesting that you say that. I try to consciously avoid paying attention to ads or seeing them at all. I don’t watch broadcast television, for example.
But ads undeniably get in there anyway, and think about it: think of a fast food place. Think of a brand of toothpaste. Ads aren’t just about yelling at you to go buy a particular thing, they’re also about brand awareness. There are the really obvious ads like the Coke ads where they have people drinking coke and looking like they’re happy, but they’re quite transparent. What I really hate is the pervasive advertising of brands just so that you, when you think of a product, think of their brand. And that’s hard to stop, even if you’re consciously aware of it.
The trick indeed is not to forget the Ads.
It’s to avoid the products you remember.
It’s to effectively damage their brand by ridiculing their campaigns.
And to teach people (particularly children) to do the same (and trust me, they are great! :-D)
I’m sometimes astonished by just how fantastic kids are at that sort of thing. They just don’t give a shit, if you’ll excuse my French.
Advertising has always been about manipulating people into buying stuff they likely don’t need. It’s a bit coercive. In the days of mass media, advertisers used mass psychology to sell product to a mass audience. The media landscape has changed. With the arrival of personalized media, advertisers are going to try to build a psychological profile of the target (that means you and me) and use individual psychology to try and sell stuff that the target most likely doesn’t need. Same game, different techniques. The difference between this and advertising in the age of mass media is the sheer invasiveness and ubiquity that advertising now takes.
It has also been about announcing the availability of products and making innovations known outside of, e.g., journals. Stuff people may need.
I’m not defending modern advertising, or all old advertising, but the ads of old were a lot more understandable in how and what kinds of reactions they were trying to evoke. You could reason your way around an ad for cigarettes, but you can’t reason your way around invisible yet targeted ad networks.
So this communist-Cuba approach to advertising is understandable, but calling it just another technique is a bit harsh and underestimates the audience.
I’d even go so far as to say the inefficiency of classifieds and the like is a self-regulatory system, which should not be removed lest more and more people come to believe they need a new microwave oven every year.
“but calling it just another technique is a bit harsh and underestimates the audience.”
I don’t know. They’d have done it in the past if they were allowed. They’ve usually been about whatever gets them the most dollars now or later. Tech and people’s habits finally let them do what they always dreamed of doing.
Sure, I suppose, maybe.
Low-tech ads have not had highway billboard tracking or any of that stuff. So if Charles Babbage had constructed the ad networks of today for newspapers in the 1800s, the case could still be argued for a dumber way of handling the better minority of advertising.
It is good that tech has also given us better ways of spreading information and reviews about products, just if we could keep the ad networks at bay.
It is good that tech has also given us better ways of spreading information and reviews about products, just if we could keep the ad networks at bay.
This creates a new problem. Let’s say we suddenly lived in an advertising-free world. So, how would you find out about products? Various intermediaries. These intermediaries are easy to pay off, as they are few in number and you STILL have to get your message to them. So instead of advertising to the masses, you advertise by putting your product announcement in the trunk of a new Tesla and sending a Tesla to each of the meaningful reviewers… cheaper and higher impact.
If anyone is interested in a more detailed history of the topic, I can recommend Curtis’ The Century of the Self. Even though it’s quite informative, it’s easy to follow along.
I think practically all “Why You Should…” articles would be improved if they became “When You Should…” articles with corresponding change of perspective.
An even better formulation would be “Here is the source code for an app where I didn’t use a framework. It has users, and here are my observations on building and deploying it”.
In other words, “skin in the game” (see Taleb). I basically ignore everyone’s “advice” and instead look at what they do, not what they say. I didn’t see this author relate his or her own experience.
The problem with “when you should” is that the author is not in the same situation as his audience. There are so many different programming situations you can be in, with different constraints, and path dependence. Just tell people what you did and they can decide whether it applies to them. I think I basically follow that with http://www.oilshell.org/ – I am telling people what I did and not attempting to give advice.
(BTW I am sympathetic to no framework – I use my own little XHR wrapper and raw JS, and my own minimal wrapper over WSGI and Python. But yes it takes forever to get things done!)
His earlier books are also good. He explains the same ideas in many different ways, but I find that the ideas need a while to sink in, so that’s useful.
He talks about people thinking/saying one thing, but then acting like they believe its opposite. I find that to be painfully true, and it also applies to his books. You could agree with him in theory, but unless you change your behavior then you might not have gotten the point :-)
Less abstractly, the worst manager I ever had violated the “skin in the game” rule. He tried to dictate the technology used in a small project I was doing, based on conversations with his peers. That technology was unstable and inappropriate for the task.
He didn’t have to write the code, so he didn’t care. I was the one who had to write the code, so I’m the one with skin in the game, so I should make the technology choices. I did what he asked and left the team, but what he asked is not what the person taking over wanted I’m sure.
In software, I think you can explain a lot of things by “who has to maintain the code” (who has skin in the game). I think it explains why the best companies maintain long term software engineering staff, instead of farming it out. If you try to contract out your work, those people may do a shitty job because they might only be there for a short period. (Maybe think of the healthcare.gov debacle – none of the engineers really had skin in the game.)
It also explains why open source code can often be higher quality, and why it lasts 30+ years in many cases. If the original designer plans on maintaining his or her code for many years, then that code will probably be maintainable by others too.
It also explains why “software architect” is a bad idea and never worked. (That is, a person who designs software but doesn’t implement it.)
I’m sure these principles existed under different names before, and are somewhat common sense. But they do seem to be violated over and over, so I like to have a phrase to call people on their BS. :-)
Yeah, the phrase works as a good lens and reminder. Interestingly, as most parents will attest, “do as I say, not as I do” is generally unsuccessful with kids. They are more likely to emulate than listen.
I definitely agree with this change. It’d get more people thinking architecturally, something that’s sorely needed.
I’ve yet to see a web framework I truly enjoy using. Most of them don’t even try to tame the incidental complexity of the web, preferring to heap on even more. I think this is because the type of people that make web frameworks often love the web to the point where they’re blind to the incidental complexity.
These frameworks seem to take special delight in taking over every aspect of your application, because ‘convention.’ Apparently, one of the greatest evils of software development is that there is no standard directory for models unless we institute The Right Way. Meanwhile, massive coupling makes testing difficult, which is why years of “Fast tests using this one weird trick” presentations continue to be given.
The best libraries are the ones you can lock away somewhere and forget about.
Enjoyability seems to me to be a bad criterion to judge a tool on. I may enjoy one hammer more than another, but I still need to use specific ones for specific tasks and the ones I enjoy less are no less functional and suitable.
I don’t think the remainder of your post depends on your remark about ‘enjoying’ a web framework, which I feel is extra evidence for that not mattering.
Enjoyability seems to me to be a bad criterion to judge a tool on.
Also, “enjoyability” depends on the timeline you measure it over: day-1 enjoyability, day-100 enjoyability, or day-1000 enjoyability. Stuff like unit testing and fuzzing might not be very enjoyable on day 1, might be far more enjoyable on day 100, and might put you in a state of absolute bliss by day 1000.
Of course web frameworks are not optimal. However, I’ll take the leaky abstraction here and there any time over the mess I have seen in non-framework code. I did Python starting with Python 2.3, which was released in the early 2000s. Back then I didn’t do much Python web development, yet every now and then I wrote something or looked at the options for how to do things. This was the time of mod_python and still CGI. Nowadays we have Django, Pyramid, and, if you feel like having a bit more freedom, Flask. I must say I wouldn’t want to go back.
Potentially, if you have very special requirements that actively go against the typical patterns, no framework is an option, but otherwise it isn’t. At least I wouldn’t like to take over maintenance of such a codebase.
That’s actually quite a nice indicator: “If someone would use that advice, would I like to take over maintenance of that code base?”.
Just kakoune.
• No interwebz in text editor (because why? it’s stupid)
• Fast as hell (it has its own regexp engine instead of std::regex or boost::regex)
• Not resource-heavy (still able to render at whatever FPS your screen needs when loaded with a 4GB XML file)
• Easy to sandbox (can be built as static binary, has a client-server model so you can control it from anywhere)
• Respects user privacy (it doesn’t disclose any data to 3rd parties, and the developer is not a twat but a very friendly and helpful person)
• Totally not malware (has a user-friendly Clippy to help you all the time)
I have slowly migrated from vim to kakoune. The design document spells out a lot of what I like about it, which I will repeat a bit here.
Beyond what is great about kakoune, there is what I have grown to dislike about vim. Neovim added a terminal to it, which I thought was a great fit for it, and I had hoped vim would stick to being “just an editor” – but it didn’t – it chased neovim despite mocking such things as a terminal in an editor right in its help file. /rant
Anyway, kakoune isn’t perfect, and it isn’t even 1.0. But the author of it has very firm goals for what he wants it to do – and as importantly what he doesn’t want it to do.
So things I love (again, a lot of this is also in the design document):
It isn’t all great (yet) from my perspective:
Limitations: it doesn’t manage windows, tmux or i3 or (other thing) does; it doesn’t try to be multithreaded; it isn’t a file manager; it isn’t a terminal; it doesn’t have its own scripting language; it doesn’t support binary plugins; it avoids being “clever” based on context confusing the user. It makes no attempt to be all things to all people, it is a fantastic editor… this ties into composability.
I like this, except that I want my text editor to give me some way to manage buffers. Managing splits in Vim is infinitely nicer than managing splits in Tmux.
If I remember right, this is the main reason I dropped Kakoune. So if this has improved (as in, there’s some implementation somehow that works closer to Vim), then I can probably give it a serious try again.
It has lots of ways to manage buffers, it just offloads the windowing/pane stuff to others, you can use :tmux-new-horizontal or :tmux-new-vertical, and :tmux-repl-vertical, and there is similar stuff for x11 windows, i3 windows (via a small plugin) etc.
Bind to the same keys you would bind in Vim and barely tell the difference (except now you can have a proper tmux pane in the bottom right and the other three be kakoune). I haven’t missed anything from vim in terms of window management.
I think the problem was that I wanted the tmux splits and the editor splits separated, but still in the same window. Say in Vim I have 3 horizontal splits and then in Tmux there’s two horizontal splits, one with Vim and its splits and the other with a terminal.
Tmux is really not good at managing the splits, or at least not when I got into it, so if it’s all in tmux then it’s horrible (in my experience).
So in the case of tmux, it’s more a limitation of the medium for me.
I would have to know your specific complaints about tmux’s pane/window handling to be able to respond. But on the upside you aren’t limited to tmux!
Personally, once I got over the learning curve, I really like tmux’s handling – specifically stuff like choose-tree, which is amazing if you have many tmux windows and panes, window and pane rotation, etc. It has a learning curve like Vim or Kakoune – but, similar to them, the curve has non-trivial payoffs.
V.v.V! Awesome.
When I first started using tmux I really didn’t dive in and learn it properly, and it made me hate it a bit. But then I started to dig deeper and found things like choose-tree, and then that choose-tree had a search mode inside it, and I could bind a key to put me right in search mode… then I started making bindings to fire up ranger in a way that made sense, yada… I was hooked.
Along these lines, I also found micro.
I’ve been using Acme for the past few months and it works fine. It’s pretty bare, but it works. No completion, no syntax highlighting, no nothing. Still enjoy it. Also there’s like no updates, especially not automated. If you need modern features, I recommend Emacs. Spacemacs is pretty neat.
The nice thing about Acme is that its command set is so minimal and quasi-irreducible (except for the window management stuff, which could be handled by another program).
And “modernity” is a very relative term, so all I’ll say is that it encourages you to use Unix as your IDE and gives you a simple interface for doing so. So as long as your unix-like system is secure and up to date, you’ll always be using top-of-the-line tools.
Well. I’m really a big fan of the plumber. I’m definitely looking to use that more in my general workflow. I’m already a big fan of accumulating scripts in $HOME/bin, and that idiom has a nice synergy with Acme too. I also don’t miss syntax highlighting as much as I thought I would. Or completion.
What mouse are you using with it? A lot of the Acme loyalists get good old three button mice.
I use the contour mouse.
I have a mouse that has a mouse wheel, which doubles as a middle-clicky-thing. My office one is a bit sensitive for my taste, and also does a weird web-page-navigation thingie when you press the wheel to the left or right, which I profoundly hate (and renders middle-clicking a bit harder).
My home mouse is much better, but it’s also a run-of-the-mill wheely mouse. Less sensitive to scroll action, no left-or-right action. Works perfectly fine for my purposes. I’ve learned over time that Acme is also decently easy to manoeuvre with a trackpad, since you can simulate middle-clicks with Ctrl (even in chording, at least with the Left->middle chord (which is useful, because, like, copy-paste)). Doesn’t work with middle->left though.
Wow… maybe tone down the shade of red in the progress bar?
The color signals “WARNING ALL ACCOUNTS COMPROMISED!”, even though it’s consistent with the color of boiled lobster…
Could use the one referenced here -> https://www.thesun.co.uk/fabulous/5647842/lobster-emoji-issue/ maybe?
I have mostly omitted videos that were mentioned in other comments.
Nontechnical:
Douglas Adams - Parrots, the Universe, and Everything
Technical:
Ryan Dahl - first Node.js presentation
Guy Steele - Growing a Language
Sandi Metz - The Magic Tricks of Testing
Joshua Bloch - How to Design a Good API and Why It Matters
CppCon 2014 - Chandler Carruth - Efficiency with Algorithms, Performance with Data Structures
CppCon 2016 - Chandler Carruth - High Performance Data Structures 201 : Hybrid Data Structures
CppCon 2014 - Mike Acton - Data-Oriented Design
Code Clinic 2015 - Mike Acton - How to Write Code the Compiler Can Optimize
Cliff Click - A Crash Course in Modern Hardware
Douglas Crockford - Programming Style and Your Brain
Douglas Crockford - Monads and Gonads
J.B. Rainsberger - Integrated Tests Are a Scam
Andrei Alexandrescu - Fastware
Andrei Alexandrescu - Writing Quick Code in C++, Quickly
Andrei Alexandrescu - There’s Treasure Everywhere
NWCPP - Herb Sutter - Machine Architecture - Things Your Programming Language Never Told You
You got most of what I was going to post!
Mike Acton’s talk about Data Oriented Design has so much great stuff on how to approach development generally (despite being a CppCon talk). It is probably the single talk that had the biggest impact on me as a developer, wish he gave it a decade earlier.
Silent mode at all times and finer-grained control offered by the OS
What is finer grained on iOS? It was my (mis)understanding that this is one of the places Android was still fairly far ahead with multiple tiers of notification levels tweak-able by app or person.
Watching this migration happen in real time has been terrific. It seems like it was mostly low drama (here and on IRC) and the “new mods” are focusing on what matters. Focusing on links/comments/people (I might have put “people” as #1) is absolutely where the community needs to be.
Sincere thanks to both @jcs and all the people who stepped up. I am not looking for big changes, just continued existence. The cleanup of private messages and deleted emails goes above and beyond.
Hi jcs,
In an attempt to preserve a community which has been a large part of our lives for a better part of the last few years, @angersock @pushcx @355e3b @alynpost and a few other of the IRC folks feel that we can take over running the website. @alynpost will be able to provide the hosting in Santa Clara, CA under pgrmr’s infrastructure. @pushcx will assume the role of head administrator and take over the domain name along with the Twitter account. @355e3b and @aleph- will take over the care and feeding of the Rails codebase.
We will not be making any moderation changes at this time—continuity is the important thing.
Our transition plan is as follows:
This is solely to ensure continued hosting and maintenance of the website, and a continuation of the community. Long-term, if the existing moderators wish to step down, @pushcx will be responsible for picking new candidates.
We would also like to thank you for all of your years of work put into this.
― #lobsters IRC regulars (aka the clawlateral committee)
And I assume @tedu will be in charge of the TLS certificates?
That sounds like a great plan, thanks for putting that together. I’ll feel better knowing the site will be managed by a group instead of falling all on one person.
Glad to see your approval. :)
/u/pushcx should be the central point of contact for the migration deets. We’ll keep the community updated!
Great! We’re really happy to step up and take good care of a community we love.
And, for the community: the first update is that I just started an email discussion with me, jcs, and alynpost to handle the technical details of the migration. I’ve migrated barnacl.es a few times, so I’m familiar with the procedure. My guess for a timeline is two weeks, but that’ll be adjusted if needed. I’ll post a comment in this thread when we’ve picked a date or there’s otherwise news.
I got back from talking to the people planning out the transition (aleph, push, socky, goodger, alyn, 355, irenes) on Mumble and IRC - they’ve all been wonderful people putting in their best to ensure the community will experience a smooth transition and avoid any turmoil.
Awesome, glad to have regulars and good people taking things over.
I would strongly recommend, and as a lobste.rs regular personally request that as a group you take a bit of time to define some basic agreement about decision making and ownership, so that it is clear between you all, and also to the community.
This is not a problem when there’s one guy in charge - it’s simple and clear and whether you agree with them or not you have consistency and stability (thanks @jcs !)
When there’s more than one, you need extremely strong value alignment and high levels of trust. If you guys have not known each other for 5+ years and can meet in the same bar to share a beer, you need to talk about and get down some basics. Who makes decisions, how, when; who is in control of the domain / hosting / features / community management.
Personally, I like the ‘benevolent dictator’ situation. It reduces ambiguity and facilitates short sharp clear decisions. Greater than 2 people needs work to define that recognises that you will eventually have a conflict, that some of you will come and go, and that there is no way you can all have perfect understanding of what each other wants for this community and what your values are.
Not doing this is a valid choice too; equal to commitment to cede to whoever has ‘root’ and control of the hosting and then domains if a conflict happens, and requiring proactively thinking about forking / commuity splits.
Is that what you’re thinking too @pushcx ?
That’s the current plan I’m executing on, yes. I want to continue this excellent community. Lobsters is in a good place: we have a healthy, active userbase, the code is stable, bug-free, and has little need for new features, and I’m on sabbatical so I have plenty of time and attention to devote to a smooth transition.
After the migration is complete I think it’s worth having a new meta thread about if we want to shift to a new community governance model. I’m comfortable being BD for years if not indefinitely, but there’s enough folks talking about community models that I want to have a dedicated discussion to explore examples and consider the option.
One of the guiding principles we talked about a lot during the clawlateral committee meeting was that we wanted to stray as little as possible from the existing governance structure for the time being–the site has done well in its current incarnation, and @pushcx is, we believe, a good steward to carry on the precedent set by @jcs.
The plan explicitly has redundancy in roles (think failover) for all important things you mentioned. We also tried to follow a principle of least-trust and a little bit of separation of powers for the failover folks, so that continuity of service is easy but forking and hijacking is hard.
[Comment removed by author]
So what moderation changes will you make later?
The first rule of intelligent tinkering is to keep all the pieces. When we say we will not be making any moderation changes at this time, we mean that we have no moderation changes to make. This group volunteered to operate lobste.rs because we like the way the website has been run. We will moderate with the same principles the site has always operated on. The moderation log is available for public inspection. Changes to the site, just like the one announced here, will be discussed in their own meta thread.
Thank you all. I work a lot, don’t know Rails, and don’t really have anything constructive to contribute, but this is far and away the best signal to noise community I’m involved with and I really appreciate it.
If throwing money at the problem will help the new maintainers along please consider setting something up and I’ll chip in.
Does this mean we can finally get an @angersock plushie?
You guys were my first thought when I saw this post lol. Thanks for your continued commitment to the community ~
Thanks @angersock, @pushcx, @355e3b, @alynpost!
I’d hate to see lobsters die!
I love how fast this plan was put together and I feel it will be in good hands. I was scared seeing this post and am excited to see the community I love will keep going and be in good hands!
[Comment removed by author]
You’re saying that ST was great 4-5 years ago, but apart from the langserver, which one of your points didn’t apply back then as much as it does now? You say that “today there are better editors”, but surely vim is much older than 4-5 years and basically didn’t change.
[Comment removed by author]
The primary reason I stick with Sublime Text is that Atom and VSCode have unacceptably worse performance for very mundane editing tasks.
I’ve tried to switch to both vim and Spacemacs (I’d love to use an open source editor), but it’s non-trivial to configure them to replicate functionality that I’ve become attached to in Sublime.
I thought VSCode was supposed to be very quick. Haven’t experimented with it much myself, what mundane editing tasks make it grind to a halt? I am well aware Atom has performance issues.
Neither Atom nor VSCode grinds to a halt for me, but I can just tell the difference in how quickly text renders and how quickly input is handled.
I’m not usually one of those people who obsesses about app performance, but editors are an exception because I spend large chunks of my life using them.
I’ve tried to switch to both vim and Spacemacs (I’d love to use an open source editor), but it’s non-trivial to configure them to replicate functionality that I’ve become attached to in Sublime
This is the reason why I stay with vim: I’ve been unable to replicate vim functionality in other editors.
Yeah, fortunately NeoVintageous for Sublime does everything I need for vim-style movement and editing.
I think the really ground-breaking feature that ST introduced was multi-cursor editing. Now most editors have some version of that. Once you get used to it, it’s very convenient, and the cognitive overhead is low.
As for the mini-map, I suppose it’s a matter of taste, but I found it very helpful for scanning quickly through big files looking for structure. Visual pattern recognition is something human brains are ‘effortlessly’ good at, so why not put it to use? Of course, I was using bright syntax highlighting, which makes code patterns much more visible in miniature. Less benefit for the highlight-averse.
I’ve been using ST3 beta for a few years as my primary editor. I tried using Atom and (more recently) VS Code, but didn’t like them as much: the performance gap was quite noticeable at start-up and for oversized data files. The plug-in ecosystems might make the difference for some folks, but all I really used was git-gutter and some pretty standard linters. For spare-time fun projects I still enjoy Light Table, but it’s more of a novelty. I’m gradually moving away from the Mac and want a light-weight open-source editor that will run on any OS.
So now, as part of my effort to simplify and get better at unix tools, I’m using vis. I’m enjoying the climb up the learning curve, but I think that if I stick with it long enough, I’ll probably end up writing a mouse-mode plugin. And maybe git-gutter. Interactive structural regexps and multi-cursor editing seem like a winning combination, though.
You might enjoy exploring kakoune as well. http://kakoune.org | https://github.com/mawww/kakoune
I’ve never used Sublime Text, but I’ve used multiple-cursors in vis and Kakoune, and it beats the heck out of Vim’s macro feature, just because of the interactivity.
With Vim, I’d record a macro and bang on the “replay” button a bunch of times only to find that in three of seventeen cases it did the wrong thing and made a mess, so I’d have to undo and (blindly) try again, or go back and fix those three cases manually.
With multiple cursors, I can do the first few setup steps, then bang on the “cycle through cursors” button to check everything’s in sync. If there’s any outliers, I can find them before I make changes and keep them in mind as I edit, instead of having my compiler (or whatever) spit out syntax errors afterward.
Also, multiple cursors are the most natural user interface for [url=http://doc.cat-v.org/bell_labs/structural_regexps/]structural regular expressions[/url], and being able to slice-and-dice a CSV (or any non-recursive syntax) by defining regexes for fields and delimiters is incredibly powerful.
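A rough sketch of that structural-regexp idea (this is an illustration of the concept from Pike’s paper, not vis or Kakoune’s actual implementation): instead of operating line by line, a regex carves the text into selections, and each selection can then be edited independently, which is exactly what one-cursor-per-match gives you.

```typescript
// "x" in structural-regexp terms: every match of `re` in `text`
// becomes its own selection.
function xSelect(text: string, re: RegExp): string[] {
  // Force the global flag so matchAll walks the whole text.
  return Array.from(text.matchAll(new RegExp(re.source, "g")), (m) => m[0]);
}

// Slicing a CSV row: the field regex is just "anything that isn't
// the delimiter". An editor would now give you one cursor per field;
// here we simply transform each selection independently.
const row = "widget,4,in stock";
const fields = xSelect(row, /[^,]+/);
const shouted = fields.map((f) => f.toUpperCase());
```

The same two-step works for any non-recursive syntax: one regex to pick out the records, another to narrow each record to the piece you care about.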
[url=http://doc.cat-v.org/bell_labs/structural_regexps/]structural regular expressions[/url]
This might be the first attempt at BBCode I’ve seen on Lobsters. Thanks for reminding me how much I hate it.
I agree with you. I use Vim, and was thinking about switching until I realized that a search and repeat (or a macro when it’s more complex) works just as well. Multiple cursors is a cute trick, but never seemed as useful as it first appeared.
I thought multiple cursors were awesome. Then I switched to Emacs, thanks to Spacemacs, which introduced me to iedit [0]. I think it is superior to multiple cursors. I am slowly learning Emacs through Spacemacs; I’m still far away from being any type of guru.
[0] https://github.com/syl20bnr/spacemacs/blob/master/doc/DOCUMENTATION.org#replacing-text-with-iedit
I’ve started using vim for work, and although I’ve become quite fast, I find myself missing ST’s multiple cursors.
I might try switching to a hackable editor like Yi. I’ve really enjoyed using xmonad recently for that reason.
I long ago gave up on trying to maintain stuff myself, but the impulse is still there. I just recently experimented with dumping IrcCloud and using WeeChat (via Relay) and Pushover for notification (which I found via this very site), but I’m likely changing back shortly; WeeChat Relay clients on Android leave me longing for something better.
Communication.
Office-ish:
Source:
Payment:
Mail:
Automation:
Website Features:
Supportive:
Random:
Woah, I use a lot of 3rd party stuff!
Maybe your single page app is different, but the ones that I know break most of my browser’s features, such as the back and forward buttons, page refresh, bookmarking, sending a link, or opening a link in a new window or tab.
Eh. I’ve been building SPAs for four years now, and not even my first hacky prototypes suffered from these issues. They’ve been solved for a long time.
I only rarely encounter SPAs that have these problems (usually some business app originally built more than ten years ago), but frequently see ones that have been competently built to handle those cases.
In general I agree, but Twitter for some reason likes to load the wrong tweets when I follow a shared link. I know they are wrong because the username in the URL is clearly not the tweet’s author.
Luckily I don’t have to do that often, and this behaviour comes and goes. Still, while the listed problems may be overstated, I think they still exist.
Yeah — pretty much all the SPA tooling that exists includes a URL routing library/component/thingy. I have literally never seen a public web app where the URL stuff is broken.
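To make concrete what those routing libraries do, here is a minimal sketch of client-side routing reduced to a pure function. The route table and view names are made up for illustration, not from any particular library; in a browser, a navigation handler would call `history.pushState(null, "", path)` and re-render, and a `"popstate"` listener would re-run the match on back/forward, which is why the browser buttons keep working in a well-built SPA.

```typescript
// Hypothetical route table: each pattern maps a URL path to a view name.
type Route = { pattern: RegExp; view: string };

const routes: Route[] = [
  { pattern: /^\/$/, view: "timeline" },
  { pattern: /^\/posts\/\d+$/, view: "post" },
];

// Pure routing core: given the current location's path, pick a view.
// A real SPA calls this on initial load, on pushState navigation, and
// inside a "popstate" listener so back/forward restore the right view.
function matchRoute(path: string): string {
  const hit = routes.find((r) => r.pattern.test(path));
  return hit ? hit.view : "not-found";
}
```

Because the match is a pure function of the URL, refreshing, bookmarking, or opening a link in a new tab all land on the same view.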
As an example: Twitter. You click on an image, but it is still too small to view, so you right-click (“View image”) to see it at the original size. Then when you use the back button to go back, it jumps to the start of your timeline instead of the place you started from. Very annoying!
I still don’t come into contact with too many SPAs (that I am aware of at least) in most of my browsing / web usage.
I can never tell if this is the result of different sites or different expectations. Do people who write SPAs expect other sites to work like theirs without realizing that other users haven’t internalized the same model?
I don’t think people are making up these complaints. I’d have to do a lot more research than I’m inclined to do to discover issues that might go wrong but never actually do. When I say an errant click causes a view transition I can’t reverse with the back button, it’s because that’s happened to me.
I don’t think they are making up their complaints, but I suspect that they often don’t notice SPAs unless they are broken in obvious ways. And a lot of the things that get attributed to SPAs, I see more frequently on apps that are partially traditional server apps with lots of jQuery soup added to make things more dynamic.
This stuff is no better than Phrenology, a post-hoc rationalisation of pre-existing prejudice. It’s sad that the self-styled “smart guys” are falling into the oldest and dumbest of bias-traps.
There’s a bit of a difference between Phrenology and the sorts of scientific studies carried out on this topic today. Further, everybody claiming there isn’t a difference is either a) ignorant of the state of current research or b) pushing a false equivalence to further their own agendas for what they believe is correct.
Scientists have voiced that the science is accurate, others have put up much more thoroughly sourced articles. The science isn’t in question, really, by people that actually know the science.
This leaves us with the policy conclusions. The conclusions of:
Without spiraling off into “but but but muh soggy knees”, any reader should be able to look at those conclusions and say “Those are pretty reasonable things to consider, we can have a productive discussion about them even if we disagree”.
The fact that we can’t do so, because people immediately zoom off into talking about institutional bias and pseudoscience and namecalling and so forth, should be alarming to you. It is alarming to many of us, but most of us are keeping quiet having seen both the attacks on that person and the tacit support in our industry for those attacks.
You’re going to get upvotes, of course, because the rot is clearly set in even in this community but that doesn’t make your statement factually accurate–not that that matters anymore in public discourse. ¯\_(ツ)_/¯
The science isn’t in question, really, by people that actually know the science.
You’ve missed the point entirely. Even if there are consistently repeatable physiological differences (and spoiler warning, the science on this is absolutely not settled), why is that a good excuse for manifesto guy to denigrate his co-workers? Why is it Good Actually™ for him to use a 0.002% difference in spatial reasoning in a study from 1971 in a contest of domination over his peers?
Let’s put it a different way: there are major physiological differences between you and I, and anyone else in this thread. Does that mean some of us shouldn’t be in this industry?
And that’s the connection to Phrenology, the misapplication of science in service of naked prejudice.
There are also female engineers within google who are massively more skilled than manifesto guy. should they be made to feel unwelcome in their own profession because of the bizarre psyco-sexual prejudices of a junior engineer?
why is that a good excuse for manifesto guy to denigrate his co-workers?
It wouldn’t be one, but that isn’t what he wrote. Please cite, with direct quotes, where he does so. Here, have a copy with the hyperlinks and figures intact.
Does that mean some of us shouldn’t be in this industry?
Of course not. Then again, that wasn’t the point the author made in his memo. Please cite where he makes that point.
[…] should they be made to feel unwelcome in their own profession because of the bizarre psyco-sexual [sic] prejudices of a junior engineer?
I don’t know…if they choose to interpret that memo as unwelcoming (you know, instead of noting all the parts that say things like “hey, maybe we can reward pair programming and work to make the environment less stressful”) there’s not much to be done. Again, please cite the bits that you find unwelcoming and factually incorrect.
His thesis is that google is too diverse. Or trying to be too diverse. Right? How can a company be too diverse unless some of the diverse people don’t belong?
Pair programming sounds great, but how does it work if there aren’t any women to pair with? If you divide up into pairs with a guy who codes and a woman who talks, you still need an equal number of women.
“Strawman” is one of the most overused arguments online, but this is a classical case. Absolutely nobody claims there are no differences between men and women.
Except of course, lots of people do claim there is no difference between men and women(’s brains):
One could easily be forgiven, on a cursory scan of these articles, for assuming that is EXACTLY what is being claimed – not some strawman set up here, but a legitimate, out-in-the-world idea.
Of course they do not. I read the first link you posted - did you read it? Here is the first paragraph.
The study, published in the journal PNAS, argues that if there were really such a thing as male and female brains, there wouldn’t be much overlap in the characteristics of the two—people would show either only male or only female characteristics. However, after examining the brains of 1400 people aged 13 to 85 years old in terms of their composition of gray matter, white matter, and connections, the researchers found that very few people were clustered on the extreme ends of the spectrum of features typically associated with males and females. Rather, there was a lot of overlap. While some features were more common in female brains and others in males, most people have a mix of the two.
People who don’t know any science or statistics are often confused by the difference between distributions of traits and absolute classifications. Is that your issue here?
My issue is you claimed the poster used a “strawman” (an intentionally misrepresented proposition that is set up because it is easier to defeat than an opponent’s real argument). I don’t believe it is one, I have been in the room / had it sincerely argued to me.
That doesn’t mean I think it is at all accurate, it just means that it is the sincerely held belief of some, not a strawman. The fact that there are literally hundreds of articles about it and fierce debate around it I think validates this perspective.
Really, I have yet to read a single claim that men and women are identical, and nobody has been able to link to one. Probably someone believes it (there is a believer in everything), but this argument is not about whether men and women are identical; it is about policies to increase representation of women in engineering and management. Such policies are not based on the theory that there are no differences—the name “diversity” indicates a belief in differences. It is really annoying to see repeated citations of, or arm-waving toward, results that show there are differences between men and women, as if such totally uncontroversial information had any bearing on the efficacy or value of diversity programs.
Where in the post you are replying to does the word “strawman” appear? ctrl+f fails me.
Unless you are…you know…arguing a point I hadn’t posited there. What was the term for that again?
Also, note that the “difference” I’m referring to, in context, was about phrenology and contemporary research:
There’s a bit of a difference between Phrenology and the sorts of scientific studies carried out on this topic today. Further, everybody claiming there isn’t a difference is either a) ignorant of the state of current research or b) pushing a false equivalence to further their own agendas for what they believe is correct.
No, I was pointing out that your argument uses a strawman: the imaginary people who argue that men and women are identical.
“The science isn’t in question, really, by people that actually know the science”
And yet, the blog post by SSC you cite begins with an attempt to minimize the conclusions of a peer reviewed article by Hyde.
IMHO, it absolutely matters when in the hiring process it happens. It should come after screening, it should be for people on the short list, and it should be paid. Our project was designed by in-house people in about 4 hours. We send it home with candidates as the second-to-last step of the process. When they arrive with the code, we give them a cashier’s check for $500, and then we do a group code review. Worst case, they made $500 and felt a bit foolish in a code review. Best case, they have $500 and a new job. Additionally, after the code review we give them an exact time for the hire/no-hire decision call.
For the curious, it turns out that this is just below the IRS’s $600 threshold which would require you to file a 1099 form for the payee. (They are still supposed to report the income, though.)
Just makes me wonder how many people use Mercurial at their company?
Edit: by “company” I mean at work.
Which company? Octobus? According to their website, there are only 2 people, and Pierre-Yves David (“marmoute”) is a well-known core Mercurial contributor. The company itself is about providing commercial support for Mercurial, along with other stuff like Python contracting.
I think Mercurial definitely has a niche in the corporate space. It’s easier to train new people on than Git, scales better for monorepo setups, is more easily extensible via Python, and allows richer customization.
I am curious about this – while Mercurial definitely has less initial surface area and a far more consistent way of interacting, it also tends to have lots of customizations that add a lot of complexity right back in – and they get mixed and matched in ways that are often unique to each Mercurial setup.
Git, while far uglier, also has more training resources, both professional and free. Additionally, while Git is far less consistent in terms of interaction, to a large degree once you know it, you know it. You are unlikely to go to a site where Git has lots of customizations making it behave differently than the “git” you used at your last organization.
Well you pretty much summed it up :) Mercurial is nicer/easier to use, but Git has more resources out there. I think at that point one being better than the other for a particular person or team will then depend less on the pros/cons of each tool, and more on the person/team’s mindset/culture/available support/etc.
I’d add that Git having more resources, while helpful, is as much a proof of its success as a symptom of one of its main problems. Having to look up help pages and other tutorial pages on a regular basis becomes tedious quickly, and they still need to fix the core problem (they can’t quite fix the broken CLI at this point, but I did note several helpful messages being added to the output in the last few versions, so there’s progress).
Finally, yeah, Mercurial has a problem with the amount of customization it forces on users because of its very strict backwards-compatibility guarantees (resulting in a good portion of new features being turned off by default). This tends to be mitigated by the fact that teams will generally provide a central `.hgrc` that sets up good defaults for their codebase. Also, Mercurial extensions almost never change Mercurial’s behaviour (`evolve` is an outlier there but is still considered “experimental”) – they just add to it, so I’ve never come across (so far) any Mercurial resource that was potentially invalidated by an extension (feel free to point to counter-examples!).

I suspect my issue might be more in my head (and my unique experience) than in reality. I have contracted with lots of Git shops – and a fair number of Mercurial ones. Most of the Git shops worked very similarly; they differed in master dev versus feature-branch dev, monorepo versus multi-repo – but they all felt similar, and I could make very minor changes to my workflow to integrate with them, which is great for contracting.
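A team-wide `.hgrc` along those lines might look something like this. This is a hypothetical sketch: the option names are real (see `hg help config`), but which defaults a team actually picks, and the `nudge` alias, are assumptions for illustration.

```ini
# team-wide .hgrc -- a hypothetical sketch; option names per `hg help config`

[extensions]
# enable bundled-but-disabled extensions the team has agreed on
rebase =
shelve =

[diff]
# git-style diffs handle renames and binary files better
git = True

[alias]
# a shared shorthand the team's docs can refer to: push only the current head
nudge = push --rev .
```

Because such a file only adds extensions and aliases rather than redefining core commands, upstream Mercurial documentation generally stays valid for the team, which is the point the comment above is making.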
Each Mercurial shop has been a wild adventure in unique workflows and brand-new extensions I have never seen or used. One used TWO different sub-repo extensions; another used THREE configuration extensions! On top of that, most of them had annoying/wonky authentication mechanisms (some hand-rolled). The reason I use those examples (which are only a fraction of what I have seen) is that they are all basically non-optional. I needed to use them to be able to work on the project… and of course MQ versus non-MQ. Never used evolve (yet).
During the “will Mercurial or Git win?” days, I was firmly on the Mercurial side because I did work on Windows and Git was non-functional on it early on. But now, when I hear a client is a Mercurial shop, I dread it. But I realize that is probably just my unique experience.
Huh, well it’s very probable I’m just not aware of all the wild things people do out there with Mercurial. I frankly had no idea there were sub-repo extensions (outside of the core subrepo feature), and I don’t know why anybody would do custom authentication when SSH works everywhere (although I understand people might want to set up Active Directory for Windows-only environments instead, but that’s it). What do you mean by “configuration extensions”? As for MQ, I don’t think it matters for the central repo, no? It should only matter for local workflows?
According to https://www.mercurial-scm.org/wiki/UsingExtensions – there are at least 6 sub-repo extensions. And, yes, ActiveDirectory logins, other SSO variations and then on top of those multiple ACL layers.
As for MQ – absolutely, you can avoid it with other tools that can produce the same sort of history: rebase, graft, strip, etc. The issue is that if all the “how we work” docs are written in MQ style, it is a bit of mental gymnastics to convert over.
Ah I see. And yeah I never really scrolled down past the non-core extensions :) (The only non-core extensions I have are a couple I wrote myself…)
you… you are part of the problem! runs scared hehe
Haha, but that’s fine, I don’t think anybody besides myself is using them :)
Might it instead be the other way around: that customization-seeking companies are more likely to choose Mercurial? This could be either because adventurousness promotes both non-Git and customization, or because Mercurial has the better architecture when you need to customize. IIRC the latter is true for both Mozilla and Facebook. Anyway, at my second job we used vanilla Mercurial, and we did fine. It was basically the same as any Git workflow, for that matter.
Absolutely. Additionally, Mercurial is just more accessible in terms of customization. On top of that more than a handful of these shops had heavy Python contingents internally.
Haha, yes, knowing the language certainly makes it easier to stray off the common path and into the woods of in-shop customization :-D
I use Mercurial at work. My company uses Git, but I use Mercurial and clone, push, and pull transparently thanks to hg-git. I’ve noticed I am generally more aware than my Git-using colleagues of recent changes to the repo, because I’ve got a `pre-pull` hook set up to run `hg incoming` (with a tweak to avoid double network talk).
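A minimal sketch of such a hook in `~/.hgrc`, assuming a POSIX shell; the “avoid double network talk” tweak the commenter mentions is deliberately left out, and the `.show-incoming` suffix is just an illustrative name (multiple hooks per event are named this way, per `hg help config.hooks`):

```ini
# ~/.hgrc -- minimal sketch; omits the double-fetch optimization
[hooks]
# Runs before every `hg pull` and lists the changesets about to arrive.
# `hg incoming` exits non-zero when there is nothing incoming, and a failing
# pre-* hook aborts the command, so `; true` forces success either way.
pre-pull.show-incoming = hg incoming --newest-first; true
```

The caveat in the comments matters: without the trailing `; true`, a pull against an up-to-date remote would be aborted by the hook’s exit status.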