For me at least, the question doesn’t have a very clear-cut answer. At first blush, my answer would be “yes, I love to write code”, but then I think of all the code that I don’t enjoy writing. I rarely enjoy the code I write for work; it’s often rote, uninteresting glue, solving problems that I’m rarely convinced truly need to be solved, in languages that make solving them far more painful than they need to be. If such a large portion of the code I write is code I don’t enjoy writing, maybe I don’t enjoy writing code so much as particular parts of problem solving with code. I enjoy building and designing abstractions: APIs, languages, or tools that other developers use to build their software. I’m sure many other people who would say they like coding are like me; there’s probably some domain or type of problem revolving around code that they enjoy. But I’m dubious that for most people the act of writing code, in and of itself with no surrounding context, is actually joyous (at least not for long).
To that end, I don’t think you are really much worse off than the majority of us who love to code. We may all be dissatisfied with our jobs, and do them because they pay well, the work is indoors, and all of the other nice things that come with being a software developer right now. The fact that in our free time we might pursue different hobbies that involve writing very different types of code, for very different reasons, than what people will pay us for is largely moot.
I wouldn’t call myself a programmer, but I spend my days writing code for biomechanics research. Much like you described, I don’t enjoy the writing of code, but I enjoy the results (and pretty graphs) and the challenge and problem solving aspects of it. Equally I’m not too concerned about the outcomes of my research, I enjoy the development of methods more than anything, so maybe I do like programming. I guess it depends on the definition of writing code…
While I agree that the article is probably true, the biggest problem with Electron, and a lot of modern software development, is that “Developer happiness” and “Developer Efficiency” are both arguments for electron, but “user happiness” and “user efficiency” aren’t.
Electron developers are incentivized to develop applications that make users happy in the small: they want something that looks nice, has lots of features, and is engaging. The problem is that in their myopic pursuit of this one and only goal, too many apps (and Electron is a vanguard of this trend, but by no means the only culpable technology) forget that a user wants to do things other than constantly interact with that one single application for their entire computing existence.
That’s where Electron as a model breaks down. Electron apps are performant enough, and don’t use too much memory, when used by themselves on a desktop or a powerful docked laptop. But I shouldn’t have to kill Slack and Zoom every time I unplug my laptop from a power source because I know they’ll cut my battery life in half. I shouldn’t have to ration which Slack teams I join lest I find other important processes swapping or getting OOM-killed.
Even without those concerns, Electron apps selfishly break the consistency of visual design and metaphors used in a desktop experience, calling attention to themselves with unidiomatic designs.
We do need easier and better ways of developing cross-platform desktop applications. Qt seems to be the furthest along in this regard, but for reasons not entirely clear to me it’s never seemed to enter the wider developer consciousness - perhaps because of the licensing model, or perhaps because far fewer people talk about it than actually use it and so it’s never been the “new hotness”.
The author specifically calls out what the problem with Qt is:
Native cross-platform solutions like Qt tend to consider themselves more a library, less a platform, and have little to offer when it comes to creating auto-updating software, installers, and App Store packages.
Don’t be so dismissive of people’s choices with the “new hotness” criticism.
I think you misunderstand what I’m saying. My claim isn’t that Qt would solve every problem that people are looking to Electron to solve if only it were more popular. My claim is merely that of the cross-platform native toolkits, Qt seems to be both the furthest along in terms of capability, and also one of the less recognized tools in that space (compared to Wx, GTK, Mono, Unity; heck, I’ve seen more about Tk and FLTK than Qt lately). I suspect that Qt could grow and support more of what people want if it got more attention, but for whatever reason, of the cross-platform native toolkits, it seems to be less discussed.
Just to be clear, this is the workflow I have currently if I’m targeting Electron. Can you show me something comparable with Qt?
This is an overly simplistic argument that misses the point. Desktop app development has not changed significantly in the past five years, and without Electron we would simply not have many of the Electron-powered cross-platform apps that are popular and used by many today. You can’t talk about “not optimizing for user happiness” when the alternative is these apps just not existing.
I don’t like the Slack app, it’s bloated and slow. I wouldn’t call myself a JavaScript developer, and I think a lot of stuff in that world is too ruled by fashion. But this posturing and whining by people who are “too cool for Electron” is just downright silly.
Make a better alternative. It’s not like making an Electron app is morally worse than making a desktop app. When you say “we need to make desktop app development better” you can’t impose an obligation on anyone but yourself.
without Electron we would simply not have many of the Electron-powered cross-platform apps that are popular and used by many today.
I don’t really remember having a problem finding desktop applications before Electron. There seems to be relatively little evidence for this statement.
Please do not straw man. If you read what you quoted, you will see I did not say no desktop apps existed before Electron. That’s absurd. You also conveniently ignored the part of my sentence where I say “cross-platform”.
Obviously we can’t turn back the clock and rewrite history, so what evidence would suffice for you? Maybe it would be the developers of cross-platform apps like Slack, Atom, and VS Code writing about how Electron was a boon for them. Or it could be the fact that the primary cross-platform text editors we had before Electron were Vim and Emacs. Be reasonable (and, more importantly, civil).
I think Vim and Emacs, the traditional tools of UNIX folks, propped up as examples of what Slack or VS Code replaced, is also a fallacy you’re using to justify a need for Electron. Maybe better comparisons would be XChat/HexChat/Pidgin for chat, UltraEdit or SlickEdit for editors, and NetBeans or IntelliJ IDEA for IDEs. So, did those products suck compared to Electron apps because of the cross-platform technology they used, or because of other factors? Or do they suck at all?
Nah, if anything, they show these other projects could’ve been built without Electron. Whether they should have been or not depends on developers’ skills, constraints, preferences, etc., on top of markets. Maybe Electron brings justifiable advantages there. Electron isn’t making more sophisticated apps than cross-platform native from what I’ve seen, though.
I think you and the other poster are not making it very clear what your criterion for evidence is. You’ve set up a non-falsifiable claim that simply depends on too many counterfactuals.
In the timeline we live in, there exist many successful apps written in Electron. I don’t like many of them, as I’ve stated. I certainly would prefer native apps in many cases.
All we need to do is consider the fact that these apps are written in Electron and that their authors have explicitly stated that they chose Electron over desktop app frameworks. If you also believe that these apps are at all useful then this implies that Electron has made it easier for developers to make useful cross-platform apps. I’m really not sure why we are debating about whether a implies b and b implies c means a implies c.
You point out the examples of IntelliJ and XChat. I think these are great applications. But you are arguing against a point no one is making.
“Electron is just fashion, Slack and VS Code aren’t really useful to me so there aren’t any useful Electron apps” is not a productive belief and not a reasonable one. I don’t like Slack and I don’t particularly like VS Code. But denying that they are evidence that Electron is letting developers create cross-platform apps that might not have existed otherwise and that are useful to many people requires a lot of mental gymnastics.
“You point out the examples of IntelliJ and XChat. I think these are great applications. But you are arguing against a point no one is making.”
You argued something about Electron vs. cross-platform native by giving examples of modern, widely used apps for Electron but ancient or simplistic ones for native. I thought that set up cross-platform native to fail. So I brought up the kind of modern, widely used native apps you should have compared against. The comparison then appeared to be meaningless, given Electron conveyed no obvious benefits over those cross-platform, native apps. One of the native apps even supported more platforms, as far as I know.
“All we need to do is consider the fact that these apps are written in Electron and that their authors have explicitly stated that they chose Electron over desktop app frameworks. If you also believe that these apps are at all useful then this implies that Electron has made it easier for developers to make useful cross-platform apps. “
It actually doesn’t, unless you similarly believe we should be writing business apps in COBOL on mainframes or Visual Basic 6, or keeping the logic in Excel spreadsheets, because the developers or analysts doing it said those were the easiest, most effective options. I doubt you’ve been pushing those to replace business applications in (favorite language here). You see, I believe that people using Electron to build these apps means it can be done. I also think something grounded in web tech would be easier to pick up for people from a web background with no training in other kinds of programming, like cross-platform native. That’s as much evidence as there is behind it, both as a general principle and for Electron specifically. The logic chain ends right here, though:
“then this implies that Electron has made it easier for developers to make useful cross-platform apps.”
It does not imply that in the general case. What it implies is that the group believed it was true. That’s it. All the fads in IT that the industry regretted later tell me that what people believe is good and what objectively is good are two different things with sadly little overlap. I’d have to assess things like what their background was, whether they were biased in favor of or against certain languages, whether they were following writers who told them to use Electron or avoid cross-platform native, whether they personally or via the business were given constraints that excluded better solutions, and so on. For example, conversations I’ve had and watched with people using Electron have shown me that most of them didn’t actually know much about the cross-platform native solutions. The information about what would be easy or difficult had not even gotten to them. So it would’ve been impossible for them to objectively assess whether those solutions were better or worse than Electron. It was simply based on what was familiar to that set of developers, which is an objective strength. Another set of developers might not have found it familiar, though.
So, Electron is objectively good for people who already know web development and are looking for a solution with good tooling for cross-platform apps to use right now, without learning anything else in programming. That’s a much narrower claim than it being better or easier in general for cross-platform development, though. We need more data. Personally, I’d like to see experiments conducted with people using Electron vs. specific cross-platform native tooling to see which is more productive and with what weaknesses. Then, address the weaknesses of each if possible. Since Electron is already popular, I’m also strongly in favor of people with the right skills digging into it to make it more efficient, secure, etc. by default. That would definitely benefit lots of users of the Electron apps that developers will keep cranking out.
Hey, I appreciate you trying to have a civilized discussion here and in your other comments, but at this point I think we are just talking past each other. I still don’t see how you can disagree with the simple logical inference I made in my previous comment, and despite spending some effort I don’t see how it at all ties into your hypothetical about COBOL. It’s not even a hypothetical or a morality or efficacy argument, just transitivity, so I’m at a loss as to how to continue.
At this point I am agreeing with everything you are saying except on those things I’ve already said, and I’m not even sure if you disagree with me on those areas, as you seem to think you do. I’m sorry I couldn’t convince you on those specifics, which I think are very important (and on which other commenters have strongly disagreed with me), but I’ve already spent more time than I’d have preferred to defending a technology I don’t even like.
On the other hand, I honestly didn’t mind reading your comments, they definitely brought up some worthwhile and interesting points. Hope you have a good weekend.
Yeah, we probably should tie this one up. I thank you for noticing the effort I put into being civil about it and asking others to do the same in other comments. Like in other threads, I am collecting all the points in Electron’s favor along with the negatives in case I spot anyone wanting to work on improvements to anything we’re discussing. I got to learn some new stuff.
And I wish you a good weekend, too, Sir. :)
Please do not straw man. If you read what you quoted, you will see I did not say no desktop apps existed before Electron
And if you read what I said, I did not claim that you believed there were no desktop apps before Electron. If you’re going to complain about straw men, please do not engage in them yourself.
My claim was that there was no shortage of native applications, regardless of the existence of Electron. This includes cross-platform ones like XChat, AbiWord, most KDE programs, and many, many others. They didn’t always feel entirely native on all platforms, but the one thing Electron seems to have done in order to make cross-platform easy is give up on fitting in with all the quirks of the native platform anyway – so that’s a moot point.
Your claim, I suppose, /is/ tautologically true – without Electron, there would be no cross-platform Electron-based apps. However, if you roll the clock back to before Electron existed and look at history, there were plenty of people writing native apps for many platforms. Electron, historically, was not necessary for that.
It does let web developers develop web applications that launch like native apps, and access the file system outside of the browser, without learning new skills. For quickly getting a program out the door, that’s a benefit.
No one is saying there was a “shortage” of desktop applications; I’m not sure how one could even ascribe that belief to someone else without thinking they were completely off their rocker. No one is even claiming that without Electron none of these apps would exist (read my comment carefully). My claim is also not the weird tautology you propose, and again I’m not sure why you would ascribe it to someone else if you didn’t think they were insane or dumb. This is a tactic even worse than straw manning, so I’m really not sure why you are so eager to double down on this.
Maybe abstracting this will help you understand. Suppose we live in a world where method A doesn’t exist. One day method A does exist, and although it has lots of problems, some people use method A to achieve things B that are useful to other people, and they publicly state that they deliberately chose method A over older methods.
Now. Assuming other people are rational and that they are not lying [1], we can conclude that method A helped people achieve things B in the sense that it would have been more difficult had method A not existed. Otherwise these people are not being rational, for they chose a more difficult method for no reason, or they are lying, and they chose method A for some secret reason.
This much is simple logic. I really am not interested in discussing this if you are going to argue about that, because seriously I already suspect you are being argumentative and posturing for no rational reason.
So, if method A made it easier for these people to achieve things B, then, all else being equal, given that people can perform a finite amount of work, and again assuming they are rational, we can conclude that if method A had not existed, some of the things B would not exist – unless the difference in effort really was below the threshold at which it would have caused any group of people to decide to do something else [2].
This is again seriously simple logic.
I get it that it’s cool to say that modern web development is bloated. For the tenth time, I agree that Electron apps are bloated. As I’ve stated, I don’t even like Slack, although it’s ridiculous that I have to say that. But don’t try to pass off posturing as actual argument.
[1]: If you don’t want to assume that at least some of the people who made popular Electron apps are acting intelligently in their own best interests, you really need to take a long hard look at yourself. I enjoy making fun of fashion-driven development too, but to take it to such an extreme would be frankly disturbing.
[2]: If you think the delta is really so small, then why did the people who created these Electron apps not do so before Electron existed? Perhaps the world changed significantly in the meantime, and there was no need for these applications before, and some need coincidentally arrived precisely at the same time as Electron. If you had made this argument, I would be a lot more happy to discuss this. But you didn’t, and frankly, this is too coincidental to be a convincing explanation.
then why did the people who created these Electron apps not do so before Electron existed?
…wut.
Apps with equivalent functionality did exist. The “Electron-equivalent” apps were a dime a dozen, but built on different technologies. People creating these kinds of applications clearly did exist. Electron apps did not exist before Electron, for what I hope are obvious reasons.
And, if you’re trying to ask why web developers who were familiar with a web toolkit running inside a browser, and unfamiliar with desktop toolkits didn’t start writing things that looked like desktop applications until they could write them inside a web browser… It’s easier to do something when you don’t have to learn new things.
There is one other thing that Electron did that makes it easier to develop cross-platform apps, though. It dropped the idea of adhering fully to native look and feel. Subtle things like, for example, the way inspector panels on OS X follow your selection while properties dialogs on Windows do not – getting all of that right takes effort.
At this point, I don’t really see a point in continuing, since you seem to consistently be misunderstanding and/or misinterpreting everything that’s been said in this entire thread, in replies to both me and others. I’m not particularly interested in talking to someone who is more interested in accusing me of posturing than in discussing.
Thank you for your time.
I am perplexed how you claim to be the misunderstood one when I have literally been clarifying and re-clarifying my original comment only to see you shift the goalposts closer and closer to what I’ve been saying all along. Did you even read my last comment? Your entire comment is literally elaborating on one of my points, and your disagreement is literally what I spent my entire comment discussing.
I’m glad you thanked me for my time, because then at least one of us gained something from this conversation. I honestly don’t know what your motives could be.
I find it strange that you somehow read
I don’t really remember having a problem finding desktop applications before Electron
as implying that you’d said
no desktop apps existed before Electron
@orib was simply saying that there was no shortage of desktop apps before Electron. That’s much different.
…That’s absurd… Obviously we can’t turn back the clock and rewrite history… …Be reasonable (and more importantly, civil.)
You should take your own advice. @orib’s comment read as completely anodyne to me.
I find it strange that you’re leaving out parts of my comment, again. Not sure why you had to derail this thread.
Please, please stop continuing to derail this conversation. I am now replying to your contentless post which itself was a continuation of your other contentless post which was a reply to my reply to orib’s post, which at least had some claims that could be true and could be argued against.
I’m not sure what your intentions are here, but it’s very clear to me now that you’re not arguing from a position of good faith. I regret having engaged with you and having thus lowered the level of discourse.
Please, please stop continuing to derail this conversation… I regret having engaged with you and having thus lowered the level of discourse.
Yeah, I wouldn’t want to derail this very important conversation in which @jyc saves the Electron ecosystem with his next-level discourse.
My intention was to call you out for being rude and uncivil and the words you’ve written since then only bolster my case.
What is even your motive? Your latest comment really shows you think this whole thing is some sort of sophistic parlor game. I have spent too much time trying to point out that there may even exist some contribution from a technology I don’t even like. I honestly hope you find something better to do with your time than start bad faith arguments with internet strangers for fun.
I’m not sure it’s necessarily true that the existence of these apps is better than the alternative. For a technical audience, sure. I can choose to, grudgingly, use some bloated application that I know is going to affect my performance, and I’m technical enough to know the tradeoffs and how to mitigate the costs (close all Electron apps when I’m running on battery, or when doing something that will benefit from more available memory). The problem is that for a non-technical audience that doesn’t understand these costs, or how to manage their resources, the net result is a degraded computing experience, and it affects the entire computing ecosystem. Resource-hog applications are essentially replaying the tragedy of the commons on every single device they run on, and even as the year-over-year gains in performance slow, the underlying problem seems to be getting worse.
And when I say “we” should do better, I’m acknowledging that the onus to fix this mess is going to be in large part on those of us who have started to realize there’s a problem. I’m not sure we’ll succeed as javascript continues to eat the world, but I’ll take at least partial ownership over the lack of any viable contenders from the native application world.
I’m not sure it’s necessarily true that the existence of these apps is better than the alternative.
I think this and your references to a “tragedy of the commons” and degrading computing experiences are overblowing the situation a bit. You may not like Slack or VS Code or any Electron app at all, but clearly many non-technical and technical people do like these apps and find them very useful.
I agree 100% that developers should be more cautious about using user’s resources. But statements like the above seem to me to be much more like posturing than productive criticism.
Electron apps are making people’s lives strictly worse by using up their RAM—seriously? I don’t like Electron hogging my RAM as much as you, but to argue that it has actually made people’s lives worse than if it didn’t exist is overdramatic. (If you have separate concerns about always-on chat apps, I probably share many of them, but that’s a separate discussion).
but clearly many non-technical and technical people do like these apps and find them very useful.
If you heard the number of CS folks I’ve heard complain about Slack clients destroying their productivity on their computers by lagging and breaking things, you’d probably view this differently.
If you also heard the number of CS folks I’ve heard suggest you buy a better laptop and throwing you a metaphorical nickel after you complain about Slack, you’d probably view it as futile to complain about sluggish Web apps again.
Dude, seriously, the posturing is not cool or funny at this point. I myself complain about Slack being bloated, and IIRC I even complained about this in my other post. In every group I’ve been in that used Slack, I’ve heard complaints about it from both technical and non-technical people.
I’ll leave it as an exercise for you to consider how this is not at all a contradiction with what you quoted. My God, the only thing I am more annoyed by at this point than Electron hipsterism is the anti-Electron hipsterism.
Not posturing–this is a legitimate problem.
Dismissing the very real pain points of people using software that they’re forced into using because Slack is killing alternatives is really obnoxious.
People aren’t complaining just to be hipsters.
Dude, at this point I suspect you and others in this thread are trying to imagine me as some personification of Electron/Slack so that you can vent all your unrelated complaints about them to me. For the last time, I don’t even like Electron and Slack that much. What is obnoxious is the fact that you are just ignoring the content of my comments and using them as a springboard for your complaints about Slack which I literally share.
You seriously call this account @friendlysock?
Your latest comment doesn’t add anything at all. Many users, perhaps even a majority of users, find Slack and other Electron software useful. I don’t and you don’t. I don’t like Slack’s business practices and you don’t either. Seriously, read the damn text of my comment and think about how you are barking up the entirely wrong tree.
“and without Electron we would simply not have many of the Electron-powered cross-platform apps that are popular and used by many today. “
What that’s actually saying is that people who envision and build cross-platform apps for their own satisfaction, fame, or fortune would stop doing that if Electron didn’t exist. I think the evidence we have is that they’d build one or more of: a non-portable app (maybe your claim), a cross-platform app built natively, or a web app. That’s what most were doing before Electron when they had the motivations above. Usually web, too, instead of non-portable.
We didn’t need Electron for these apps. Quite a few would even be portable, either immediately or later with more use/funds. The developers just wanted to use it for whatever reasons, which might vary considerably among them. Clearly, it’s something many from a web background find approachable, though. That plus development-time savings is my theory.
I agree that many people might have ended up building desktop apps instead that could have been made even better over time. I also agree with your theory about why writing Electron apps is popular. Finally, I agree that Electron is not “needed”.
I’m going to preemptively request that we keep “hur dur, JavaScript developers, rational?” comments out of this—let’s be adults: assuming the developers of these apps are rational, clearly they thought Electron was the best choice for them. Anyone “sufficiently motivated” would be willing to write apps in assembler; that doesn’t mean we should be lamenting the existence of bloated compilers.
Is saying developers should think about writing apps to use less resources productive? Yes. Is saying Electron tends to create bloated apps productive? Definitely. Is saying Electron makes the world a strictly worse place productive or even rational? Not at all.
“I’m going to preemptively request that we keep “hur dur, JavaScript developers, rational?” comments out of this—let’s be adults”
Maybe that was meant for a different commenter. I haven’t done any JS bashing in this thread that I’m aware of. I even said Electron is good for them due to familiarity.
“ Is saying Electron makes the world a strictly worse place productive or even rational? Not at all.”
Maybe that claim was also meant for a different commenter. I’d not argue it at all since those using Electron built some good software with it.
I’ve strictly countered false positives in favor of Electron in this thread rather than saying it’s all bad. Others are countering false negatives about it. Filtering the wheat from the chaff gets us down to the real arguments for or against it. I identified one, familiarity, in another comment. Two others brought up some tooling benefits such as easier support for a web UI and performance profiling. These are things one can make an objective comparison with.
forget that a user wants to do things other than constantly interact with that one single application for their entire computing existence.
Quoted for truth.
Always assume that your software is sitting between your user and what they actually want to do. Write interactions accordingly.
We don’t pay for software because we like doing the things it does, we pay so we don’t have to keep doing those things.
perhaps because of the licensing model
I also think so. It’s fine for open source applications, but the licensing situation for proprietary applications is tricky. Everyone who says you can use Qt under LGPL and just have to dynamically link to Qt, also says “but I’m not a lawyer so please consult one”. As a solo developer working on building something that may or may not sell at some point, it’s not an ideal situation to be in.
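For what it’s worth, the “dynamically link to Qt” part is mechanically trivial in a modern build system; the uncertainty is purely legal. Here’s a minimal sketch (illustrative names, and emphatically not legal advice) of a CMake setup that uses Qt’s imported targets, which resolve to the shared Qt libraries by default on most platforms:

```cmake
# Hypothetical CMakeLists.txt for a small Qt Widgets app.
# Linking against Qt6::Widgets uses the shared Qt libraries on a
# typical Qt install; the commonly cited LGPL-risky case is building
# and linking Qt statically yourself.
cmake_minimum_required(VERSION 3.16)
project(myapp LANGUAGES CXX)

find_package(Qt6 REQUIRED COMPONENTS Widgets)

add_executable(myapp main.cpp)
target_link_libraries(myapp PRIVATE Qt6::Widgets)
```

But as you say, the easy mechanics don’t resolve the “consult a lawyer” ambiguity, and that ambiguity is the actual deterrent for a solo developer.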
I think the big caveat to this is that for a great many of the applications I see that have electron-based desktop apps, they are frontends for SAAS applications. They could make money off a GPL application just as easily as a proprietary one, especially since a lot of these services publish most of the APIs anyway.
Granted, I’d love to see a world where software moved away from unnecessary rent-seeking and back to actually selling deliverable applications, but as long as we’re in a SAAS-first world the decision to release a decent GPL-ed frontend doesn’t seem like it should be that hard.
The situation is more nuanced than that. Electron provides developers with a better workflow and a lower barrier to entry, which results in applications and features that simply wouldn’t exist otherwise. The apps built with Electron might not be as nice as native ones, but they often solve real problems, as indicated by the vast number of people using them. This is especially important if you’re running Linux, where apps like Slack likely wouldn’t even exist in the first place, and you’d be stuck trying to run them via Wine and hoping for the best.
While Qt is probably one of the better alternatives, it breaks down if you need to have a web UI. I’d also argue that the workflow you get with Electron is far superior.
I really don’t see any viable alternatives to Electron at the moment, and it’s likely here to stay for the foreseeable future. It would be far more productive to focus on how Electron could be improved in terms of performance and resource usage than to keep complaining about it.
I never claimed that it doesn’t make life easier for some developers, or even that every electron app would have been written with some other cross-platform toolkit. Clearly for anyone who uses Javascript as their primary (or, in many cases, only) language, and works with web technology day in and day out, something like electron is going to be the nearest to hand and the fastest thing for them to get started with.
The problem I see is that what’s near to hand for developers, and good for the individual applications, ends up polluting the ecosystem by proliferating grossly, irresponsibly inefficient applications. The problem of inefficiency, and the subsequent negative effect it has on the entire computing ecosystem, is compounded by the fact that most users aren’t savvy enough to understand the implications of the developers’ technology choices, or even capable of looking at the impact that a given application is having on their system. Additionally, software as an industry is woefully prone to adopting local-maxima solutions: even if something better did come along, we’re hitting an inflection point of critical mass where Electron will continue to gain popularity. Competitors might stand a chance if developers seemed to value efficiency and respect the resources of their users’ devices, but if they did we wouldn’t be in this situation in the first place.
Saying that developers use Electron simply because they don’t value efficiency is absurd. Developers only have so much time in a day. Maintaining the kinds of applications built with Electron using alternatives is simply beyond the resources available to most development teams.
Again, as I already pointed out, the way to address the problem is to look for ways to improve Electron as opposed to complaining that it exists in the first place. If Electron runtime improves, all the applications built on top of it automatically get better. It’s really easy to complain that something is bloated and inefficient, it’s a lot harder to do something productive about it.
but I shouldn’t have to be killing slack and zoom every time I unplug my laptop
Yes, you shouldn’t. But that is not Electron’s fault.
I’ve worked on pgManage, and even though it is based on Electron for the front-end, we managed to get it to work just fine and use very little CPU/memory*. Granted, that’s not a chat application, but I also run Riot.im all day every day and it shows 0% CPU and 114M of memory (about twice as much as pgManage).
Slack is the worst offender that I know of, but it’s because the people who developed it were obviously used to “memory safe” programming. We had memory issues in the beginning with the GC not knowing what to do when we were doing perfectly reasonable things. But we put the effort in and made it better.
We have a strong background in fast C programs, and we applied that knowledge to the JS portion of pgManage and cut down the idle memory usage to 58M. For this reason, I’m convinced that C must never die.
* https://github.com/pgManage/pgManage/blob/master/Facts_About_Electron_Performance.md (Note: the version numbers referred to in this article are for Postage, which was later re-branded pgManage)
*Edit for spelling*
“But that is not Electron’s fault.”
It happens by default with a lot of Electron apps. It doesn’t so much with native ones. That might mean it’s a side effect of Electron’s design. Of course, I’d like to see more data on different use-cases, in case it happens for some things but not others. In your case, did you have to really work hard at keeping the memory down?
Edit: The Github link has some good info. Thanks.
It happens by default with a lot of Electron apps.
I see where you’re coming from, and you’re right, but if more JS devs had C experience (or any other non-memory-managed language), we would all be better for it. The GC spoils you, and it doesn’t always work.
It doesn’t so much with native ones.
Yes, but I think that greatly depends on the language, and how good the GC is.
That might mean it’s a side effect of Electron’s design.
Maybe, but if pgManage can do it (a small project with 5 people working on it), then I see absolutely no reason why Slack would have any difficulty doing it.
In your case, did you have to really work hard at keeping the memory down?
Yes and no. Yes it took time (a few days at most), but no because Electron, and Chrome, have great profiling tools and we were able to find most issues fairly quickly (think Valgrind). IIRC the biggest problem we had at the time was that event listeners weren’t being removed before an element was destroyed (or something like that).
One thing I’ll note: look at the IPC ratio of Electron apps versus other native apps. You’ll notice a lot of TLB misses and other such problems, meaning that the Electron apps are mostly sitting there forcing the CPU to behave in ways it really isn’t good at optimizing.
In the end, the Electron apps just end up using a lot of power spinning the CPU around compared to the rest. This is technically also true of web browsers.
You may use perf on Linux or tiptop to read the CPU counters (for general IPC eyeballing I’d use tiptop): http://tiptop.gforge.inria.fr
These are exactly the kind of articles we need more of in Haskell. Short, neither overly bogged down in theory nor afraid of talking about actual language features, and focused on helping people understand how to bridge the gap into building real Haskell applications.
As a somewhat-new-to-Haskell person, I’m of the opinion that this article could use either a bit more explanation or links to explanations for the terms it uses. For example: I understand that a monad is used to track state in a functional way and I have written a couple, but some of the language in the first couple of paragraphs in Layer 2 makes me feel like I am not reading English.
Although I think the author does point out some very valid weaknesses in CoCs, both theoretically and in how they are currently implemented, I think the author overlooks, or chooses not to address, a few important points in favor of CoCs.
The largest factor to me is that, regardless of any particular problems with CoCs, the choice to not have a CoC has, in many cases, become a sort of dogwhistle for events that are explicitly inclusive of harassers, and have no intention of running a broadly welcoming event. In essence, the choice to not have a CoC these days is tantamount to an open invitation to the worst of the people in our industry. That’s not to say that good people don’t also attend these events, or even that the organizers are knowingly trying to create that sort of environment, but ignorance of the message you’re sending doesn’t necessarily change the message itself.
I also think the author is getting rather bogged down in the nitty-gritty specifics of the language of the CoC. As a community organizer and moderator, I do find having specific CoC terms useful from time to time, but by and large I think the goal of a CoC is less about rules and more about values. A CoC as a statement of community values can serve two important purposes:
First, a CoC can help a community manage and be intentional about its growth. Small communities may not see any need for a CoC, because in small groups there is often enough social pressure to prevent toxic jerks from dominating the community, but as a community grows and the social graph becomes less fully connected, the opportunities for toxic and abusive corners of the community to appear multiply, and without intentional management an entire community can devolve. The CoC in this case can provide a shared vision for the values of the community and help to slow the festering of some of these more toxic community elements.
Second, a CoC lets members outside of the community know what the shared values of a community are, and helps people to make an informed decision about whether their values align with the community, and if the community is worth engaging with.
Thanks for a thoughtful response to the submission. :)
A few thoughts:
In essence, the choice to not have a CoC these days is tantamount to an open invitation to the worst of the people in our industry.
I would suggest that that status, and the dogwhistle effect you mentioned earlier, is perhaps due just as much to repeated memes about coded messaging and hostility as any of the actual behavior in those communities. At some point, repeated public shaming starts to create pathological behavior on its own.
I also think the author is getting rather bogged down in the nitty gritty specifics of the language of the CoC.
The issue is that, if you expect fair enforcement of rules, you need to use language as clearly as possible. I understand your viewpoint (of CoC as values and not as rules documents) doesn’t take that approach, but for a great many folks a Code of Conduct is taken at face value to be a “Hey, if you do [X], then [Y] happens, so please consider [Z] instead.”
a CoC lets members outside of the community know what the shared values of a community are
The problem with treating CoCs as signalling documents is that it undermines their efficacy as behavior guidelines (because you have to include language and statements whose purpose is aligned more with expressing values than with expressing permissible behavior). Separating the “rules” documents (“hey folks, if you harass somebody, you will be ejected”) from the “values” documents (“we believe that everybody should be secure all the time”) lets a community be more explicit in both areas.
Secondarily, I’d argue that the great benefit of technology is that it works regardless of the value and belief systems of the folks using it. Both a Marxist and a libertarian can sit down at, say, a numerical methods conference on optimization and expect to find something useful for planning out resource expenditure. Neonazis and Antifa both benefit from a good lightning talk on OPSEC. Both Republican and Democrat workers benefit from sharing knowledge about how to use AWS properly.
Using a CoC to try and shoo away folks who have ideological differences runs directly counter to the free exchange of ideas, and serves to limit the utility of those communities by ostracizing others. If we lived in a world where, say, you could point to somebody and say that “This person is a Nazi” with 99.999% accuracy, maybe I’d feel differently–but abuse of terms and the public square by overzealous if well-meaning people (some of whom are even users on this site!) has caused me to severely doubt the reliability of our mechanisms.
Thanks for your thoughts as well. A few follow-on thoughts to your notes:
I would suggest that that status, and the dogwhistle effect you mentioned earlier, is perhaps due just as much to repeated memes about coded messaging and hostility as any of the actual behavior in those communities. At some point, repeated public shaming starts to create pathological behavior on its own.
I think there’s probably some truth to this, especially the problems with public shaming. There is an unfortunate tendency to shame people in a way that I think makes them double down on problematic behaviors. That aside, I don’t think we can ignore the effects of something, dog whistling in this case, regardless of its original cause.
The issue is that, if you expect fair enforcement of rules, you need to use language as clearly as possible. I understand your viewpoint (of CoC as values and not as rules documents) doesn’t take that approach, but for a great many folks a Code of Conduct is taken at face value to be a “Hey, if you do [X], then [Y] happens, so please consider [Z] instead.”
Writing clear rules intended to be taken literally that cover all forms of potentially allowed and disallowed behavior is essentially a fool’s errand. Ultimately there must be some human arbiters of behavior who will have to interpret the intention of the rules, look at the behavior, and decide if a violation occurred. You’re basically talking about a miniature domain-specific legal problem here, and if you look at the complexity of most modern legal systems it’s clear that you can’t really have a clearly written exact set of rules that will apply without human interpretation.
I do agree with your suggestion that having a separate rules and values document can help. In the communities that I help moderate we do exactly that- we have a set of rules that are more specific, and have specific consequences, along with a broader values document that outlines the types of behavior we want to see, and how people should behave. The rules document still requires some level of human judgement.
Secondarily, I’d argue that the great benefit of technology is that it works regardless of the value and belief systems of the folks using it. Both a Marxist and a libertarian can sit down at, say, a numerical methods conference on optimization and expect to find something useful for planning out resource expenditure. Neonazis and Antifa both benefit from a good lightning talk on OPSEC. Both Republican and Democrat workers benefit from sharing knowledge about how to use AWS properly. Using a CoC to try and shoo away folks who have ideological differences runs directly counter to the free exchange of ideas, and serves to limit the utility of those communities by ostracizing others. If we lived in a world where, say, you could point to somebody and say that “This person is a Nazi” with 99.999% accuracy, maybe I’d feel differently–but abuse of terms and the public square by overzealous if well-meaning people (some of whom are even users on this site!) has caused me to severely doubt the reliability of our mechanisms.
This is one set of values, but I think it’s not the only one. There’s a place for venues that are focused on allowing all people regardless of ideology or background, but it’s perfectly reasonable to want to run or participate in other types of communities.
In communities that I moderate, I am unapologetic about excluding some ideologies. Ultimately, there are some ideologies that simply cannot coexist, and focusing on equalizing them just favors the more aggressive of the ideologies. I’m okay with excluding Nazis, because the alternative is to exclude people who are targeted by Nazis. This is one of the ways a CoC can help signal values: by being explicit about the values of the community, it is quite quickly clear on which side of this particular chasm an organization’s values fall.
In communities that I moderate, I am unapologetic about excluding some ideologies. Ultimately, there are some ideologies that simply cannot coexist, and focusing on equalizing them just favors the more aggressive of the ideologies. I’m okay with excluding Nazis, because the alternative is to exclude people who are targeted by Nazis. This is one of the ways a CoC can help signal values: by being explicit about the values of the community, it is quite quickly clear on which side of this particular chasm an organization’s values fall.
Exactly. There’s always the “why can’t you be tolerant of my (intolerant) views???” mock innocence, or the “don’t be so easily offended, it was just a joke” mock confusion.
There’s a place for venues that are focused on allowing all people regardless of ideology or background, but it’s perfectly reasonable to want to run or participate in other types of communities.
Yet, regrettably, the attempt to run those venues gets a great deal of slander and libel about dogwhistling, as you yourself point out earlier. So, clearly, there isn’t a place for them, if they don’t wish to be tarred by folks who feel they aren’t sufficiently repressing some outgroup.
I’m okay with excluding Nazis, because the alternative is to exclude people who are targeted by Nazis.
You could exclude neither, and let them sort it out themselves elsewhere. Indeed, seeing each other in a context that doesn’t constantly reinforce their ideology might serve to build bridges and mellow both sides–and while your example is a bit stacked, one could apply the same argument to fundamentalists and secular folks, to Israelis and Palestinians, to feminists and men’s rights activists, and so forth.
Assuming that people from different backgrounds will never get along is a very pessimistic view of humanity.
EDIT: Anyways, I’m happy to continue this via PM or email if you’d like to go back and forth more…I don’t mean to clutter up the main thread too much. :)
You could exclude neither, and let them sort it out themselves elsewhere.
One important lesson of the (waving my hands here) social media information age is that this strategy is not viable, because it always results in a “win” for the trolls. Communities are both empowered and obliged to stamp out this form of sociopathy with prejudice, because failing to do so means ceding the public square to the extremists.
Free speech and free expression are wonderful goals in the absence of context, but they aren’t trump cards that outweigh all other factors, they’re variables in a complex equation that, when solved, should (among other things) minimize human suffering.
Exactly. If our Code of Conduct bans violence, but doesn’t exclude, say, explicit white supremacist clothing, the end result is that black people aren’t going to feel comfortable showing up to the con if there’s a bunch of skinheads with swastikas all over the place.
“But if the skinheads do something to the black patrons, they’ll get kicked out!”
Sure, but there’s a concept of making people feel comfortable at an event open to the public. The white supremacists are welcome (in theory) to come to the con, but they need to keep it to themselves.
The CoC is almost a courtesy to the skinheads in that example. The owner of the venue (or the lessee) is almost always allowed to make people leave. At least in New York, if you’re told to leave and then don’t, it becomes criminal trespass. Codes of Conduct don’t matter in any practical sense when you get to that point.
I think instead what they’re useful for is what you say elsewhere in this thread, which is setting a tone: is your con t-shirt and jeans, or jacket and tie? Is it for some political goal or for advancing professional development?
That complex equation comes down to value judgments. You’re not likely to know the ultimate effects of your actions. For instance: affirmative action is not colorblind, but it might lead to genuinely colorblind outcomes some generations from now.
If you’re using deontic ethics instead and your sense of duty requires you to defend freedom of speech, that doesn’t necessarily yield a result worse in terms of human suffering. Utilitarianism’s core problem is that although you can look at the immediate outcome, you don’t know the ultimate yield.
I think these ideas are somewhat compatible. At some point, the question becomes “freedom for whom” – if you can’t get people to show up to your con because of extremism, how much speech did you facilitate? I think there’s something more to championing freedom of speech than not prohibiting things.
You could exclude neither, and let them sort it out themselves elsewhere. Indeed, seeing each other in a context that doesn’t constantly reinforce their ideology might serve to build bridges and mellow both sides–and while your example is a bit stacked, one could apply the same argument to fundamentalists and secular folks, to Israelis and Palestinians, to feminists and men’s rights activists, and so forth.
I know this is a little late to the conversation, but your examples are full of grossly false equivalences. I’m pointing this out not to attack you, because I think you just haven’t really thought it through or are unaware of the context for the statements you’re making, but because spreading them is bad for society.
Start with “fundamentalists and secular folks”. Fundamentalists are radical theocrats, and in the United States, are identified by believing things like homosexuality is sinful, women must submit to their husbands, etc., and in general being radically intolerant of other peoples’ private business. “Secular folks” are “everyone else”, in terms of values.
Feminists believe that women are as human and as entitled to agency and dignity as men are; MRAs believe that women are inferior to men and should be enslaved.
There is no meeting halfway with them. Their values are bad, and any social currency they might gain by publicly participating in high-prestige, “neutral” contexts, like tech conferences, will be used to further their heinous agendas. Ignoring this is how Nazis take over; it creates safe spaces for them, and once they’re in, the space is unsafe for everyone else.
Feminists believe that women are as human and as entitled to agency and dignity as men are; MRAs believe that women are inferior to men and should be enslaved.
There are some folks that identify with MRAs that believe that, and they’re scum. There are also some feminists that cannot share a room or conversation with a man because they view men as needing to be eliminated (for example, Solanas). Ignoring the shades of belief and judging groups by the most offensive members is in fact what puts all discourse in peril.
This is all quite off-topic for Lobsters. If you want to argue, hit me up on DM. :)
It’s slightly off-topic for lobste.rs, but not for this thread, and I don’t want to minimize the point that you cannot meet Nazis halfway.
So, again, your equivalence between MRAs, any MRAs at all, even the most milquetoast “I think society needs to be nicer to men” whiner, and even the most extreme misandrist feminist activist is false, because there is no large-scale issue with cultural and institutional misandry, but there is cultural and institutional misogyny. One viewpoint is laughably absurd and harmless (“men should be eliminated”; it’s not a credible threat to men), the other is simply, “Keep things as they are or make them even shittier to women,” which is extremely credible as a threat to the well-being of women. See also below, re: the President brags about sexually assaulting women.
Going back to, and again I need to emphasize that we’re talking about literal Nazis, given that the most powerful single official of the most powerful nation on the planet is an unapologetic white supremacist and rapist, it’s insane to say, “Let’s just set politics aside and welcome anyone.” The presence of Nazis is a threat to public safety and well-being, whether or not they’re in uniform or are being “polite”. Failure to deal with them as the manifest threat they are, given the friendly political environment for them, is spineless abdication of moral duty. There is no “both sides” argument to be made.
I’m telling you this not to accuse you of cowardice, but to help you understand what you’re actually arguing and who would benefit from it, so that you may stop being part of the problem, and start being part of the solution.
One viewpoint is laughably absurd and harmless (“men should be eliminated”; it’s not a credible threat to men)
Well, except for the fact that the author shot two men and attempted to shoot a third, and was on record for being “dead serious” about her manifesto.
the other is simply, “Keep things as they are or make them even shittier to women,” which is extremely credible as a threat to the well-being of women.
There are certainly some folks claiming membership that push for misogyny, but the actual stuff asked about is things like genital mutilation, how domestic abuse of men is handled (when it is recognized at all) and what support networks they have, how divorce and custody is handled, and so forth. You grossly misstate reality here. That’s forgivable, because people tend to be fuzzy with terms these days, but still.
we’re talking about literal Nazis
Somebody hide the Sudetenland! Quick, warn Poland! Buy stock in Volkswagen (and IBM)! That’s what a literal Nazi is about. If you want to talk about neo-Nazis, white supremacists, or even the (poorly-grouped) alt-right, I’m happy to criticize positions they have (most of which range from garbage to odious). Using incorrect terminology makes it hard to talk about a thing productively.
Why does this matter? We can’t defend or even relate to literal Nazis following orders liquidating a ghetto. Some poor white trash who had his job outsourced to Shenzhen though? Somebody who has strong opinions about how blacks are attacking police (despite growing up in a rural town with no African-Americans at all, and a police force which consists of like a county sheriff and a couple of deputies)? Those folks we can reach and educate, if we stop lumping them in with perpetrators of one of history’s biggest genocides.
given that the most powerful single official of the most powerful nation on the planet is an unapologetic white supremacist and rapist
That power is why he’s able to maintain such a solid Department of State, why Congress is doing whatever he wants, why he has met such acclaim and success in his dealings, and why he has been able to dismiss all of the court cases and suits brought against him. Alternately, he’s a boogeyman inflated into vast proportions by people looking to be scared about something.
There is no “both sides” argument to be made.
There is, I’ve made it, you don’t buy it because you’re invested in demonizing and dehumanizing the side you don’t like, life goes on, history will be on the side of tolerance and the dialing back of polarization–or we’ll be shooting at each other and fighting over cans of food in a generation.
This line of discussion is not on-topic for lobsters, and is quite divorced from even the original question of codes-of-conduct.
Welp, you have clearly stated your desire to do nothing in the face of evil and refuse to even name it, so, you’re correct, we will never meet on this.
When you say, “All are welcome,” what you are really saying, and what is heard loud and clear by both aggressors and victims, is, “This is a safe space for Nazis.” Or rapists. Or slavers. Or killers. You get the picture. So do they.
Writing clear rules intended to be taken literally that cover all forms of potentially allowed and disallowed behavior is essentially a fool’s errand. Ultimately there must be some human arbiters of behavior who will have to interpret the intention of the rules, look at the behavior, and decide if a violation occurred. You’re basically talking about a miniature domain-specific legal problem here, and if you look at the complexity of most modern legal systems it’s clear that you can’t really have a clearly written exact set of rules that will apply without human interpretation.
I completely agree, and this is my biggest problem with the whole “code of conduct” paradigm: it creates a promise of clear, formal rules that can’t possibly be delivered on. Talking in terms of values and moderation policies is a more useful framing that puts the human subjectivity front-and-centre and guides us towards thinking about questions (Who’s going to moderate? What process will they follow? Who are they accountable to?) that are really quite central to dealing with conduct issues in communities, but are swept under the carpet by thinking in terms of a “code” that a project can simply adopt.
I’m ok with a higher false-positive rate for an “Is this person a Nazi” test if it means fewer false negatives. The beauty of technology is not that it’s valueless, but that it’s an expression of human values. Technology is anything people make, and which things people make is a huge signal of what they value. While some technologies are useful tools regardless of values (I can use the butt of a gun to hammer a nail), we can make a pretty good statement about what a society values based on its technology.
While I don’t inherently disagree, I find that a lot of CoCs that get pushed out are rather restrictive. I find it’s better to interpret them as guidelines, not rules; rules lead to toxic individuals getting wiggle room through loopholes.
I’ve also been to events in Germany that don’t have any CoC at all and I don’t hear many complaints from other events around here either. If you’re being a jerk you get thrown out, end of story.
The CoC provisions on offensive speech are usually interpreted broadly, benefiting certain groups over others. In other words, it works the opposite of the general rule: these provisions give enforcers lots of leverage over large groups of people. The wiggle room is theirs.
Personally I think any ruleset is only good if it’s applied equally to everyone, same for guidelines.
If a CoC is provisioned and enforced it must be done with the same rigour and accuracy as one would apply law in a proper court.
Personally I think any ruleset is only good if it’s applied equally to everyone, same for guidelines.
That runs into the whole “why can’t you tolerate my intolerance?” problem though. If they say that “hate speech is not allowed” and you interpret a gay married couple discussing their honeymoon as hateful towards Christians (note that not everyone feels this way, just using an example), then who wins? The decision is up to the organizers of the con, but in general most these days are going to side with the married couple (as they should, IMNSHO).
If a CoC is provisioned and enforced it must be done with the same rigour and accuracy as one would apply law in a proper court.
Absolutely not. They’re not laws. They’re rules that a private entity made up for participation in a private function. The correct interpretation of the CoC is exactly whatever the organizers of the con want it to be and nothing else. Generally, they’re going to interpret it in whatever way furthers the goal of the con (e.g. more attendees, higher quality talks) or the values of the organizers (more diversity in gender or ethnicity or whatever).
I would interpret “no hate speech” as strictly as Section 1 §130 StGB of German law;
“1. […] against any national, race, religious or ethnic group, against parts of the population or a single person based on predetermined groups or incite parts of the population to violence or despotism or 2. the dignity of another human being, based on a predetermined group, parts of the population or membership in a predetermined group or a specific part of the population insults, maliciously attacks or frames […]” (excuse my crude translation)
Section 2 covers any transmission of anything mentioned in Section 1.
I think that about covers it in terms of “hate speech”. In the specified case, the couple wins since they’re part of a predetermined group of the population.
They’re not laws. They’re rules that a private entity made up for participation in a private function
I think they should be handled like laws. Rigour, precision, efficiency and accuracy are important. The organizers of a con should therefore word their rules such that any violation will be absolutely clear in either word or spirit of the rules without a doubt. If anyone breaks these rules and spreads hate speech then there will be no doubt by anyone involved they crossed the line. There will be no need to extensively discuss it or any wasting of time on people who want to wiggle around the rules.
I would love if some organizer did precisely this.
I cannot see how anybody could interpret that as hate speech. In an attempt to overcome my own biases, can you flip that example on its head somehow so I can relate to it?
Trust me, people can and do. The whole “I’m fine with gay people but do they have to throw it in my face??” because they have a picture of their significant other on their desk or something, whereas the person in question wouldn’t bat an eye at a heterosexual person having a picture of their spouse on their desk.
I’m having trouble coming up with an opposite example, which is my fault.
Oh, I know that there are people who would find that offensive. But the bar for hate speech is higher than merely being offensive.
Opposite? How about being annoyed that something says husband and wife. Or taking offense at something like a father and daughter event because nobody in your family is technically a father.
Fair enough. I was trying to come up with an example from a right-wing perspective (“opposite” in that regard), but the thought process is alien to me so it’s hard.
(This is just for the sake of the argument, we’re already off the track so I’ll roll with it) One might interpret Christian couples taking PR actions against abortion as hateful against its supporters.
http://www.cbc.ca/news/politics/summer-jobs-abortion-images-ccbr-1.4523255
[Justin Trudeau] called flyers depicting bloodied, aborted fetuses used by the Calgary-based Canadian Centre for Bio-ethical Reform (CCBR) “hateful.”
Joyce Arthur, executive director of the Abortion Rights Coalition of Canada, said she believes those images … should be outlawed as hate propaganda.
[emphasis mine]
They’re not laws. They’re rules that a private entity made up for participation in a private function. The correct interpretation of the CoC is exactly whatever the organizers of the con want it to be and nothing else. Generally, they’re going to interpret it in whatever way furthers the goal of the con (e.g. more attendees, higher quality talks) or the values of the organizers (more diversity in gender or ethnicity or whatever).
A CoC is legalistic by its very nature. I’m fine with an organisation adopting formal rules that are interpreted as rigorously as actual law; I’m fine with an organisation using the subjective judgement of its human moderators. But adopting an ambiguously-worded “code” that is in practice subject to interpretation is the worst of both worlds: it reduces moderators’ flexibility, but doesn’t offer participants enough clarity to be useful.
In essence, the choice to not have a CoC these days is tantamount to an open invitation to the worst of the people in our industry. That’s not to say that good people don’t also attend these events, or even that the organizers are knowingly trying to create that sort of environment, but ignorance of the message you’re sending doesn’t necessarily change the message itself.
Equally, the message I get from the choice to have a CoC, as someone generally perceived as white and male, is that I’ll be held to a double standard and if the wrong person takes a dislike to me then I’ll be thrown out, regardless of my actions. That’s probably not a fair reflection of the organisers’ intentions, but it is the message.
Apart from that concern, which I totally agree with, I also try to stay clear of projects that boast a CoC because it shows me that their priorities lie in politics, rather than in technical matters. It’s a waste of my time to spend any effort on endeavors like that.
It frequently doesn’t though.
Almost any large project with a code of conduct has it precisely because they want to focus on the technology more than the politics, and without a code of conduct, or with too loose a code of conduct, they end up being controlled by the loudest jerk.
See the replies to my comment up thread for people who are advocating for CoCs for nakedly political reasons that have nothing to do with technology.
It’s my experience that the only people who whine about this are better left excluded, because somehow, white dudes are still abundantly present and everyone has a nice time.
White dudes? Yes. Working-class people, or even just any kind of conservatives? Often not. People have a nice time yes, but people tend to have a nice time in homogeneous spaces - everyone having a nice time is, if anything, even more common at events attended solely by white dudes. So equally I could say the only people who whine (your term) about diversity/inclusivity/… are better left excluded.
Some thoughts, from someone running a con that has chosen each year to not have a CoC, but is in the process of developing an alternative model:
The largest factor to me is that, regardless of any particular problems with CoCs, the choice to not have a CoC has, in many cases, become a sort of dogwhistle for events that are explicitly inclusive of harassers, and have no intention of running a broadly welcoming event.
It’s not a dogwhistle for events that are explicitly inclusive of harassers. A lack of something is not the same as explicitly including the opposite. The author explicitly covers the lack of effectiveness in many conference CoCs under gray zones and enforcements.
As someone who organises a conference, attends lots of events around the world and spends a bit of time sharing stories with organisers, I have yet to see a conference with the resources to properly ensure that all participants know and understand the CoC and how to use and enforce it. Such an event may exist; I haven’t seen it. I’ve experienced harassment and stalking at events myself, and watched the response be fumbled wherever I’ve reported it.
That’s not to say that CoCs are useless, some events may find them useful, but for the majority of events I’ve attended they have caused more problems than they solve in themselves.
Second, a CoC lets people outside of the community know what the shared values of a community are, and helps people to make an informed decision about whether their values align with the community, and whether the community is worth engaging with.
Nearly every CoC I encounter is a variant of or asserts to be inspired by the same geek feminism-based CoC. I have argued against it repeatedly on the grounds that anyone using it has not properly considered the purpose, scope and enforcement of such a document, nor the complications it presents. I have had event organisers flat out admit that they’re using it because people who don’t come to their event will choose not to come to their event if they don’t.
Frankly, if someone feels that the presence of a CoC is the determining factor in whether they attend an event, maybe an event without a CoC isn’t the event for them. A copy-paste geek feminism sample CoC is a dog whistle to say, “We’re virtue signalling our CoC but don’t really care enough to do it properly”.
Organisers should focus on their existing community and welcoming new arrivals at the event rather than people who won’t turn up if there isn’t a universally ignored and unenforced document put up everywhere to make existing people feel that little bit shittier.
Nearly every CoC I encounter is a variant of or asserts to be inspired by the same geek feminism-based CoC
If you are a geek and not a feminist, what are you? I’m a male, geek, and a feminist.
I’m surprised anyone working in high technology would choose to not be a feminist and prefer to live in the last century. Fortunately I don’t meet many of those people. They seem to only exist on the internet.
I reject the label feminist for many of the reasons outlined here: https://necpluribusimpar.net/the-trouble-with-feminism/ (and frankly I wouldn’t call myself a geek either, especially not if people are going to use it as an excuse to say that I am obliged to hold one or another political opinion. It’s never been a label I particularly cared for in any case).
I don’t have a problem with reasonable Codes of Conduct in principle, but in practice, as stevelord states, they are specifically feminist advocacy, and I think that many vocal strains of modern feminism are hostile to values I think are important and want to see reflected in the culture around technological work. A succinct way of putting it is: I would be fine with any Code of Conduct that mentioned James Damore by name as someone whose speech would be unambiguously permissible in a project or convention - and if a Code of Conduct was designed by people who want James Damore’s words to be grounds for expulsion (as they were for him in the technological community of Google engineering employees), I don’t want that Code of Conduct in force in any space I care about.
I would argue that this is an example of doing exactly what CoCs are intended to do. If you think a project should have people like Damore driving away people who don’t want to be made out to be novelties or second-class contributors, then frankly I don’t want you in my community.
You’re free to give your “diverse utopia” a shot on your own turf, but the moment you try to co-opt or subvert an existing community or project imagined, initiated, and implemented by (as your side frequently points out) utterly un-“diverse” (i.e. white male) contributors you are throwing the first punch.
This kind of subversion has already occurred - repeatedly - so the bed has been made and all you can do now is lie in it. I, and I’d bet most people in tech, did not expect our field of work to be made into a political battlefield, but hey, solving problems is what we do. We’ll solve this one too.
It’s interesting that you yourself point out how many existing projects and communities have adopted more inclusive policies. The fact is that culture is shifting toward inclusion, and even non-idealistic communities are realizing that broad and inclusive policies attract more and better contributions, and the benefits more than outweigh the technical contributions that would have been made by hateful and toxic community members. It’s not like those of us who value and appreciate CoCs and otherwise inclusive policies have any particular power to dictate the rules and structure of existing projects. Communities are broadly recognizing the value of CoCs and adopting them because the people there want to make their communities better.
If I can just point you at an example of CoCs causing significant damage to communities, I’d point you at FreeBSD’s huggate scandal.
That’s what everyone needs to avoid. CoCs mustn’t be entered into lightly. They have to be properly considered, debated and set up to enhance rather than detract from a community.
You really believe James Damore’s anti-intellectualism is a benefit to technological work? His contributions had nothing to do with technological work and seemed to create a huge distraction away from technological work. I would love to see an argument from you detailing how James Damore’s speech was constructive to technological work.
If you are confused about why I called James Damore’s speech anti-intellectual, I would hint here that empiricism is no substitute for thinking.
Thanks for your comment. Just one thing. I didn’t state that they’re specifically feminist advocacy; it’s the blanket adoption of the geek feminism wiki CoC template I was railing against.
To be clear:
Part of running a decent conference is accepting that there will be people there with different views to you. Your job as an organizer is to create a fun and friendly event, not arbitrarily provoke people (I do enough of that in my spare time :)).
Sorry, perhaps I wasn’t clear. By geek feminism-based CoC I specifically mean this template and its wholesale adoption verbatim or almost verbatim.
That one has exactly the kind of politically-motivated and dominating stuff I aim to block in CoC proposals:
“‘Reverse’ -isms, including ‘reverse racism,’ ‘reverse sexism,’ and ‘cisphobia’ (because these things don’t exist)”
My emphasis added. It starts by denying that victims can exist among white, male, or straight people operating in environments where minority members dominate the power structure. This is dictated by the proponents’ political beliefs, which are controversial even among the minority members they claim to be protecting. The next move the ideology makes is not allowing those people a say in things, or allowing statements/actions toward them that would be offensive/banned if done to other groups. The next is ridicule or ejection as a response to dissent.
It all starts with accepting the sophist definitions and rules of a tiny few intended to dominate their opponents in larger groups, which they enshrine into a CoC they’ll tell groups is just about civility and stopping bad behavior. No it isn’t: it’s ideological subversion of groups’ norms to enforce the pushers’ beliefs. They’ll put down minorities resisting those beliefs as quickly as anyone else, too.
Your counter has no evidence. So nobody should believe it. That simple.
Unfortunately, I still have to go into a workplace where many groups are dominated by politics benefiting one type of minority (blacks) at everyone else’s expense. That’s in social groups, work assigned, and promotions. Sounds like the definition of structural racism to me. If ever applied to non-whites, the two sentences above would be all the proof they needed that they’re victims of structural racism. You’d agree with them. Logically, it’s the same function or algorithm for determining structural bias, just with different inputs or outputs. As the sophistry goes, those in the linked article, and now you if you’re backing them, redefine their own term for just these conversations so it can’t apply to a white person.
With the magic of political bias and agendas, the same definition can be two contradicting things, so that one group is villainized whether delivering or receiving damage in interactions with other groups. Such sophistry is not just illogical: it’s inhumane given the damage it supports to decent people in the target group. So, I’d fight a CoC or agenda that starts with a declaration that non-whites in positions of power would never abuse their power against whites. Likewise, women never abusing men. Both are insane statements in light of both recorded history and minority members’ own incessant claims about how other minorities mistreat them at work, school, etc.
The logical response is banning and addressing every instance of group X using their power to discriminate against group Y, with X and Y varying case by case, place by place, issue by issue. That protects the most people with the most fairness. It also takes hardly any additional effort in the event that white or male discrimination is as rare (“nonexistent”) as my opponents believe. Most work would probably still benefit their preferred groups as well, given that’s where most of the discrimination is right now.
Note: I should also point out to anyone reading along that even a sub-Reddit on feminism had a list showing they recognized male-specific biases and discrimination. It’s done from their viewpoint but has points that corroborate my claims. Clearly, it’s only some feminists I’m battling with these claims rather than all feminists.
Unfortunately, I still have to go into a workplace where many groups are dominated by politics benefiting one type of minority (blacks) at everyone else’s expense. That’s in social groups, work assigned, and promotions. Sounds like the definition of structural racism to me. If ever applied to non-whites, the two sentences above would be all the proof they needed that they’re victims of structural racism. You’d agree with them. Logically, it’s the same function or algorithm for determining structural bias, just with different inputs or outputs. As the sophistry goes, those in the linked article, and now you if you’re backing them, redefine their own term for just these conversations so it can’t apply to a white person.
Sorry for the late reply.
So, we don’t really know each other. I have some ideas about you: male, 20s, information security professional. Probably not in California. From what you’ve previously posted, it sounds like you had it not so easy growing up.
Here’s where I go out on a limb a little bit: you believe that the most important thing about doing your job is professional competence, like ability to debug an issue or design a feature to be safe and easy to use. You also hate that there are exploitable systems in place in society; exploitable systems are bad.
Now, it sounds to me like your boss thought other things, like smelling nice or getting along with other teams, were also important, and when some of your friends, who you thought were very good at their jobs, were fired or laid off, they were replaced by people your boss, who is not white, knew or approved of.
Am I way off base?
Anyway, I’m saying all this to demonstrate that I have some idea of how things are, based on my decades of professional interaction with infosec teams, and working in IT, and being online.
So, you say above, “My black boss favors other black people over white people, sounds like the definition of structural racism to me.” But your error is so fundamental that explaining how wrong you are is a huge task. It requires you to understand:
To this point, consider how you feel about the decision to exclude the fascist Urbit dude (Moldbug) from a conference, where presumably, he was going to talk about his idea for a feudal internet and recruit people to support him and its development. You’re mad because you think the details of his software, developed as a reification of his values, is not a political issue, merely a technical one, and you don’t think any status given to him for speaking at that conference will carry over to his Moldbug persona, and no one who he thinks should be subjugated will stay away as a result of his presence. I leave the absurdity of that belief to stand on its face.
To the second point, for example, and to bring up something you said previously in a different thread, there is no study of the unfair discrimination done by female bosses against men because there are so few female bosses.
And so, that brings us to the final thing you need to understand before you could understand why I said your statements were baseless and false:
You have a black boss who favors people who are like her, and this offends you. To say, “this is structural racism in action,” though, is to ignore the fact that 90% of the bosses are white men who also favor people like them, and that the current real cultural and political landscape favors people like them in terms of access to education, and jobs, and wealth opportunities, and protection by police, and ability to relocate to some other place where the people there will probably be friendly to them. And most of them are hostile to the idea of changing that to make it more equitable.
And I don’t think you’ll ever understand that, or at least, not here and now. So when you say something unfounded and ignorant like, “reverse racism is just as bad as real racism”, which is false and baseless because it ignores nearly every relevant factor in favor of “any exploitation of an exploitable system is offensive and bad”, the most legitimate response is a one-liner like the one I gave.
“So, we don’t really know each other. I have some ideas about you: male, 20s, information security professional. Probably not in California. From what you’ve previously posted, it sounds like you had it not so easy growing up. Here’s where I go out on a limb a little bit: you believe that the most important thing about doing your job is professional competence”
I appreciate you attempting to understand where I’m coming from. Unfortunately, it gave a great example of the kind of projection I’m talking about, which certain types of politics depend on to prop up myths or suppress alternative views. The profile doesn’t even match some of my comments on Lobsters about my job, how people get promotions, or what to expect in businesses. It is common among people that push a specific type of politics or CoCs. Almost every one of them that profiles me says the same thing you did. So, let’s get a better picture.
I grew up in areas where the dominant groups were very different from me: a black school with pervasive racism against whites; rural areas with rednecks who look down on “nerds” and tech; a mixed, suburban school that was great in comparison, with all groups anti-nerd in a nicer way and a nerd/outcast crowd that was cool (yay!); and businesses and other organizations with different makeups. I’m in the 30-40 range. I’m not currently in tech or information security as a main job: I went into the operational side of a company that does a mix of high-volume sales and service activities. My job mixes both: sometimes moving product, other times handling customers. Our customer base is as diverse as they get, with me interacting with, serving, taking abuse from, or being praised by at least 22,000 people face-to-face on record, with a high satisfaction rate among those surveyed. With many, especially in groups, I’m required to listen to or make conversation with them to make for a pleasant experience. I also observe and listen to what they say to each other, just being curious about how they act and what they think. On Facebook, I also created a diverse crowd to see all the things people could teach me about popular topics that I’d otherwise miss due to a filter bubble.
My long time effectively being a white minority in minority-dominated environments, and an effective slave to a mix of people, showed me they act essentially the same, over a sample size of twenty to thirty thousand people in many circumstances, with tens to hundreds of thousands of interactions between them and my coworkers and me. Most exploit our company’s level of service to get what they can out of us. Most sound polite, some neutral, and a few ugly, with almost all apathetic to the burdens or damage they cause. That last subset will use their power in ways that seriously disrupt the company or cause employees harm. Some have used race/gender cards against white males, but the root cause, exploiting power with misinformation or threats, is something all groups do to us. The cards are rarely necessary given our vulnerability. The bosses, who come and go a lot, are mostly either folks wanting a safe bet at a blue-chip company with upward mobility, or opportunists wanting a ticket-punching opportunity to move laterally into better pay at another company. At the upper levels, it’s almost always politics over performance, with the team-supervisor level being a mix of performance and politics, leaning toward performance if it’s about at least keeping the job.
As part of my work, I constantly ask my customers of all groups questions about their jobs, lives, and even politics, with no judgment or argument: I just tell them I’m curious, I like hearing others’ opinions, and thank them for whatever they tell me. Depending on how I assess them, I’ll either politely decline further engagement or carefully ask questions, making sure I don’t step past their boundaries. With down-to-earth, non-judgmental, or just fun-loving customers I’m more open, or I do the comedic approach I do with coworkers for their enjoyment. Those few being non-threatening to my career means I can self-censor less and be myself more. My work style is goofball/satirist/wiseguy who has everyone’s back or gives headaches to those the team decides need it.
That leads me to the next thing. I’d be willing to bet that neither you nor most people advocating some of these views, CoCs, etc. have been under the power of large numbers of minorities, or have interviewed hundreds to thousands of people in a diverse area for their views without leading questions that reinforce your own beliefs. The comments you see me make on here are often compatible with many of those I’ve talked to, despite some people’s attempts to censor them, saying that’s about “protecting” minorities or blocking what offends “them.” There’s a huge gap between what piles of black people tell me and what some liberals (including whites) tell me that pretty much all black people think. For instance, most black people I interview in the Mid-South think racism is something every group can do, that it can happen at many levels, and that black people can be racist, too. There are plenty who think the other way, but they aren’t the majority I’ve encountered. When the latter are in control, the views that disagreeing blacks espouse, about the definition or nature of racism being a general thing, are not allowed, despite coming from minority members. The standards and rules these groups promote claim to advance or protect minorities while systematically excluding all of those with dissenting views from participation. And then they have a problem with whites making similar claims, too.
Your longer comment might make more sense if you were responding to what a white male with minimal social interaction would believe after a few brushes with run-of-the-mill discrimination. Thing is, my posts are a summary of the positions of whites, blacks, men, and women who believe these things based on their lifetimes of interaction with their groups and others, with many of us under the power of other groups in organizations they control. So, we’ve gotten to see it both ways. We’re a very diverse crowd. We shouldn’t be pigeonholed by these projections that pretend we’re one or two subsets of demographics, that we have limited experience with other groups in control, that we must be social idiots who don’t know The Game at work that gets promotions, and so on. We’re a mix of minority members and white males who understand people, have tons of experience with them, and disagree with your position based on those experiences. Seeing how minority members disagreed among themselves on topics of race, gender, and so on reinforced my fight against any group dictating one set of beliefs or practices as acceptable or not. It could just be my own group’s bias, but many of them concluding similar things from different backgrounds hinted it might be a greater truth.
“there is a difference between small, local, private social spaces,”
It turns out this is true, but circumstantially rather than fundamentally. The experience I’ve had with thousands of people (esp. minorities), observing many more groups controlled by them, listening to minority members in structures controlled by majority or minority types, and so on, indicates minority members act just like whites or males. They reward those like them, discriminate against those substantially different, and mostly don’t care about other groups in day-to-day speech or actions. This trend is supported by data from most groups going back through most of human history. Any country that has a certain majority with power will have its members come to dominance, mostly rewarding their own group, or a privileged few penalizing others. There are usually common enemies, too, to unify them. African countries under black control had the same traits. Over here, it was mostly white males in power reinforcing their preferences, which perpetuated that cycle. So, that’s the majority of the problem at the national level. Switch to cities, organizations, etc. that blacks control, and you see a reversal of the effect, where they boost their own group more and battle/minimize the politics of others.
From there, how do we react? Well, if it’s a universal phenomenon, then we need to define it as a universal phenomenon, rather than adopt definitions or practices that only vilify specific groups for others’ gain. The honest definition costs us nothing: we just note bias, expect each group to combat theirs, and assess it in all group activities by default. The minority members who agreed with me and I are all already doing it to varying degrees. So, it’s not hypothetical. From there, we’d expend most of our effort on whatever is most prevalent in our locale and at the national level. I’d expect most of that to be combating white racism or male sexism at the national level. At the local level, it will usually depend on the group, with white-dominated areas having mostly white racism we’ve got to fight, non-white-dominated areas having non-white racism we have to fight, and occasional weird ones you’d not expect if just using a checklist-like approach to who the oppressors or victims are. It will vary as the demographics and beliefs vary among the various power and social structures.
For instance, our [huge] company has different types of -isms in different groups depending on their makeup. The executive and senior levels are definitely biased toward whites and mostly males, with promotions all politics. The middle started from there and got much more mixed, with mostly the same politics plus some new. At the lower levels of management, there’s been a shift in my area toward blacks benefiting only certain types of blacks in two to three groups, white women in three, white men in two others (one biased for women), and one was mixed before ejecting a scapegoated white dude recently to get a black guy. That last one is in flux. The black-controlled groups even wanted to poach me to boost their numbers, but my bosses and I prevented it. I’m still forced to help them once a day or so, but that’s driven by cost-cutting and politics, not racial issues.
Blocking the transfer was good, since turnover is at a record high now in their groups, even among black men and women, since the leadership’s favoritism discriminates across three attributes (race/gender/age) instead of the one or two we’re used to dealing with in the South. High-performing workers with great social skills, who were mostly white, Asian, and one Pacific Islander, were given unpopular grunt work, while older black women were given better work or promoted. They talk to them differently, too. The advantaged blacks ranged from low performers who transferred into that group (the older women) to a few high performers so good I’d personally invest time in them if they asked. Two, a younger man and woman, were exceptions advantaged along with the older women. The women-to-men ratio for advantaged positions in general is around 8-10 to 1, with the ratio among high performers ranging from 1 to 1 to around 3 to 1, depending on what skills you want and who’s coming and going. Their personnel decisions don’t make sense unless structural discrimination and/or politics is at play.
Which is what I expect by default and combat for all types. And we end with:
“And I don’t think you’ll ever understand that, or at least, not here. So when you say something unfounded and ignorant like, “reverse racism is just as bad as real racism”
You’re really saying you think all the minority members I’ve listened to or worked with who agree with my position couldn’t possibly understand because of (white male stuff here). It makes no sense, because they’re not white males but share my position. The majority are women, too, with many sharing positions on women’s topics that some labeled sexist or something on various forums. You’re right that I can’t understand why only one set of views about minority matters is allowed or often reported, with a good chunk of those preaching them being white, when minorities themselves have an interesting, diverse range of views. I’ve learned a lot from listening to them. They helped shape what I think on tolerance, true inclusiveness, and so on, where rejecting certain views on false pretenses (e.g., only ignorant or hateful white dudes say that) would lead me to systematically discriminate against or suppress the minority members holding those views in large numbers.
That would be racist and sexist like my white, male executives who only tolerate their type of people, views, and practices. I’m not like that. So, I avoid it and fight it when people who do it want to make any form of it a standard practice to force everyone to think, talk and act like them. Usually have minority members backing me up in most places, too, except on these tech forums. Since they’re not present and invisible to my opponents, I have to speak up on their behalf to let people know they and their beliefs exist. They wouldn’t want to be dismissed with labeling and/or censored.
Welp, at least now I know I was way off base in my projection, though your childhood was what I meant by a rough time (you brought it up in a different comment) :)
So, that’s a lot to reply to, and I don’t want you to think I’m ghosting or don’t appreciate the long and thoughtful reply. I do, and I thank you. But I’m about to walk out the door and won’t be able to reply in kind until tomorrow. Or, if you don’t want to continue publicly, I am happy to DM. Or if you’re sick of me and my shit, I respect that.
But I mean, I’m surprised you’re not in the industry, when you’re so passionate about what it’s like for people who are in it.
These discussions take a lot of energy, as I aim for strong accuracy while minimizing the effects of my biases. They don’t piss me off or anything, except for the few times I’m straight up attacked in a clear way. I might not reply just to get back to other stuff like tech or a better job, but I’ll definitely read and think on whatever your reply is. :)
As far as IT or INFOSEC goes, I assumed you’d assume I was in it, because it’s a reasonable assumption. I didn’t hold that against you so much as use it to illustrate that we come here, and to our beliefs, from many backgrounds that might surprise you. Most people online can’t believe I’m not in INFOSEC. Some have accused me of lying about that to protect my identity at some defense contractor. Yeah, I’m living in the movie True Lies lol…
minority members dominate the power structure.
If only there was a movement that wanted to eliminate dominating power structures…. I just can’t put my finger on it. Oh, that’s it! Welcome comrade, have you googled The Bread Book?
Lol. I do try to keep them in check. I stay away from communism, though. Utilitarianism via incentives, regulation, and individual action are my preferred methods.
It seems like a reasonable CoC. Your argument that organizers just copy and paste it is strange. 50% of software is GPL, and the remainder use copy-pasted licenses like MIT, Apache, and BSD. Likely less than 1% have custom licenses. Would you say the same thing about people choosing a software license and copy-pasting it? It seems to me most projects do make a big deal about which license they choose.
A well designed CoC like the one you linked seems reasonable to re-use.
Software licences are not the same as CoCs. Software licences police the use of software; CoCs police people’s public behaviour. Using a boilerplate template is a clear indicator that people are posting it to say they’re good people rather than properly looking at how they’d use or enforce it.
You may think the Geek Feminism CoC template is fine. I find it deeply problematic for most events of the size, location and cultural bracket of the ones I’m involved in. That’s not to say it doesn’t make interesting points, but it’s better that event organisers consider them (along with everything the wiki has to say about CoCs) when preparing their own.
I think enforcement is less important than setting expectations. A CoC primes people on what kind of behavior is expected, and this priming will have the positive consequence that people will likely act better. There is good psychological research behind priming that you should read. Enforcement is not the primary purpose of a CoC.
I think enforcement is less important than setting expectations.
If you’re setting expectations without an ability to enforce them, then this will be your outcome.
I’m discontinuing this thread with you as you’re no longer adding anything to the discussion.
I’m neither a geek, nor a feminist. I strongly resent any association between people who claim they are geeks and myself. I am also an anti-feminist. Unlike modern feminists, I believe in equality; men and women should have the same rights, just like blacks and catholics and whatever other group you can think of. Modern feminists don’t think that (EttiCosmocrepe provided a link below), and that’s why they are my enemy.
Last I checked feminists want the sexes to be equal. Sounds like you are a “traditional feminist”.
I’m neither a geek, nor a feminist.
I am also an anti-feminist
hence
Sounds like you are a “traditional feminist”.
I believe your reasoning really speaks for itself, hence no further comment is necessary.
I think your pet definition of feminism has clouded your reasoning. Example: You said you want men and women to be equal. That’s what feminism is! In the same breath you call yourself an anti-feminist because you have this strange idea of what feminism is.
Technically someone who wants men and women to be equal is an egalitarian. Feminism and Masculism are mostly concerned with equality for the respective side.
Until you explain which version of feminism (or even which generation of feminism, grossly speaking) you mean, I don’t think it’s easy to have a productive conversation.
You can’t have both. They’re mutually exclusive. Even given 100% equal opportunity there will always be differing outcomes. You can optimize for either, but you reach a point where optimizing for one will always displace the other.
This isn’t to say we don’t live in a deeply unequal world in either sense, just that what you want isn’t possible.
Notice I didn’t say equal outcomes, I said equitable
Notice I didn’t say equal outcomes, I said that your two options were mutually exclusive and implied a trade-off between them.
You can’t have both. They’re mutually exclusive. Even given 100% equal opportunity there will always be differing outcomes.
When I said the outcomes need to be equitable, I obviously recognized that equal opportunity will result in differing outcomes. This is not a contradiction. This is simply your failure to recognize dynamic systems. Imagine a system using dead reckoning. We have our simple model of system behaviour (equal opportunity leads to equal outcomes). Effort toward equitable outcomes is a course correction, like applying a Kalman filter to our expected, simplified model. What really happened is that equal opportunity resulted in different outcomes because people are different, so we apply equitable distribution to course correct. This is a self-correcting system.
The truth is that “equal opportunity brings more equal outcomes than no equal opportunity, but obviously not equal outcomes” is a complex model. We can simplify it to “equal opportunity brings more equality” and then course correct with equitable distribution.
It’s absolutely strange that computer people fall back on “logic” instead of dynamic systems to deal with an obviously dynamic system (society).
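For readers unfamiliar with the dead-reckoning analogy above, here is a toy sketch of the predict-then-correct loop it leans on: a one-dimensional Kalman filter that alternates a simple model prediction with a measurement-based correction. The function name and noise parameters here are illustrative, not from the thread.

```python
# Toy 1-D Kalman filter: "dead reckon" with a simple model that says the
# state stays put, then course-correct with each noisy measurement.

def kalman_1d(measurements, x0=0.0, p0=1.0, q=0.01, r=0.5):
    """Return the running state estimates for a constant-state model."""
    x, p = x0, p0          # state estimate and its variance
    estimates = []
    for z in measurements:
        # Predict: the model keeps x unchanged, but uncertainty grows
        # by the process noise q.
        p = p + q
        # Correct: blend the prediction with the measurement z,
        # weighted by the Kalman gain k (r is the measurement noise).
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```

Feeding it a stream of observations pulls the estimate toward what was actually measured, even when the simple model was wrong, which is the "self-correcting system" point being made.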
What would an equitable outcome look like? I’m genuinely interested in what exactly you mean by it.
Society should decide what people need at a minimum (housing, food, healthcare, access to internet, etc) to live a decent life and provide it to them. Some people will need to be provided more than others depending on their differences (people with disabilities may need more help, etc).
In other words, from each according to their ability, to each according to their need.
I like the term David Graeber defined here: “everyday communism”.
If it’s public, I’m curious to hear more words or links about your alternative model for a CoC. What do you see as the key differences in the problems to be addressed, the approach to solving them, enforcement, administration, education, etc.?
CoCs purport to set expectations for behaviour, but in reality they tend to focus on harassment. We have had harassment at our event (and I have no doubt that plenty goes unreported), but our most common problems are theft, fighting and damage caused by drunkenness. I’ve never seen a CoC address those.
Another area where CoCs tend to fall completely short (at least for us) is disruption of talks. BSidesScotland’s Code of Conduct is very good in this respect.
Now some people might think that all we need to do is make a multi-page document outlining what we can and can’t do with a 4 page section on harassment and we’ll be fine. We won’t. In the interim we’ve settled on Wheaton’s Law as the equivalent of a CoC, along with some light rules about enforcement. However, we still have theft, damage, violence etc. on occasion.
The current (non-public) iteration is something we’re calling house rules: a one-pager that goes up on the site and at our event, which everyone is supposed to abide by, focusing on actions, not opinions. People who think that a person’s value is defined by some aspect of identity can attend; act on that, and they’re getting thrown out. People who believe that it’s right to punch people dumb enough to think worth is related to skin colour are also getting thrown out. People who steal or try to steal are getting thrown out.
To make this work we’re going to hold training sessions with ops leads and all the crew, and we’re going to make sure attendees know the house rules through a mix of mailshots, an entry in the brochure and possibly (although we’re not sure yet) having the house rules printed and put up around the registration area. On top of this we’re looking into first aid training for ops leads and a bunch of safeguarding education so we can improve our responses.
This assumes that we get this ready in time for this year’s event. Previous iterations have failed due to opposition to identity-based CoCs, mostly from female members of the crew and female attendees. The feedback that I’ve had is that anything that singles people out on the basis of gender or identity is unfair and uncomfortable for them, and introduces an ugly element to our culture that previously wasn’t part of it.
More than anything else, we’re trying to avoid a re-run of donglegate and the FreeBSD debacle(s), and to make sure we’re ready to properly support a very severe incident. Ultimately we just want our event to be the same great event it’s always been, to make sure people have a good time, and to be welcoming to everyone.
Any post that paints Electron as a net negative but doesn’t offer a sane replacement (where sane precludes having to use C/C++) can be easily ignored.
There’s nothing wrong with calling out a problem even if you lack a solution. The problem still exists, and bringing it to people’s attention may cause other people to find a solution.
There is something wrong with the same type of article being submitted every few weeks with zero new information.
Complaining about Electron is just whinging and nothing more. It would be much more interesting to talk about how Electron could be improved since it’s clearly here to stay.
it’s clearly here to stay
I don’t think that’s been anywhere near established. There is a long history of failed technologies purporting to solve the cross-platform GUI problem, from Tcl/tk to Java applets to Flash, many of which in their heydays had achieved much more traction than Electron has, and none of which turned out in the end to be here to stay.
Thing is, Electron isn’t reinventing the wheel here, and it’s built on top of web tech that’s already the most used GUI technology today. That’s what makes it so attractive in the first place. Unless you think the HTML/JS stack is going away, there’s no reason to think that Electron will either.
It’s also worth noting that the resource consumption in Electron apps isn’t always representative of any inherent problems in Electron itself. Some apps are just not written with efficiency in mind.
Did writing C++ become insane in the past few years? All those GUI programs written before HTMLNative5.js still seem to work pretty well, and fast, too.
In answer to your question, Python and most of the other big scripting languages have bindings for gtk/qt/etc, Java has its own Swing and others, and it’s not uncommon for less mainstream languages (ex. Smalltalk, Racket, Factor) to have their own UI tools.
Did writing C++ become insane in the past few years? All those GUI programs written before HTMLNative5.js still seem to work pretty well, and fast, too.
It’s always been insane; you can tell by the fact that those programs “crashing” is regarded as normal.
In answer to your question, Python and most of the other big scripting languages have bindings for gtk/qt/etc, Java has its own Swing and others, and it’s not uncommon for less mainstream languages (ex. Smalltalk, Racket, Factor) to have their own UI tools.
Shipping a cross-platform native app written in Python with PyQt or similar is a royal pain. Possibly no real technical work would be required to make it as easy as electron, just someone putting in the legwork to connect up all the pieces and make it a one-liner that you put in your build definition. Nevertheless, that legwork hasn’t been done. I would lay money that the situation with Smalltalk/Racket/Factor is the same.
Java Swing has just always looked awful and performed terribly. In principle it ought to be possible to write good native-like apps in Java, but I’ve never seen it happen. Every GUI app I’ve seen in Java came with a splash screen to cover its loading time, even when it was doing something very simple (e.g. Azureus/Vuze).
Writing C++ has been insane for decades, but not for the reasons you mention. Template metaprogramming is a weird lispy thing that warps your mind in a bad way, and you can never be sane again once you’ve done it. I write C++ professionally in fintech and wouldn’t use anything else for achieving low latency; and I can’t remember the last time I had a crash in production. A portable GUI in C++ is so much work though that it’s not worth the time spent.
C++ the language becomes better and better every few years– but the developer tooling around it is still painful.
Maybe that’s just my personal bias against cmake / automake.
I think the survey as constructed overlooks the demographic of people like me who knew several languages, used ruby, and then went back to mostly using other languages that we already knew before learning ruby.
I was a haskeller who learned ruby mostly because of metasploit, and realized it was a fine language for quick scripts, and I still pick it up now and again, but I’ve gone back to mostly using Haskell because I liked it much better.
Thanks for the criticism!
I tried to balance many things while aiming to still keep it short & sweet. Before I “set the survey free” I was adding a sentence about also checking the boxes if you did something before and then went back to it/renewed interest in it. I decided it might clutter it too much and lots of people don’t really read the text.
So yeah, definitely - maybe/hopefully I find another/better way next time.
That’s exactly me. I know a variety of other languages, but I learned them all prior to Ruby. The only new ones I’ve done anything with are Elixir and Go.
This is me also, sort of. I never started a real project in Ruby, but have contributed to Ruby projects. The reason I never did much else with it is that it isn’t a viable option for the things I enjoy doing.
Pair programming is not about productivity. It’s about reducing the bus factor, spreading knowledge across the team and allowing junior developers to increase their skills.
My personal experience with a team who did full-time pairing was that, by the time I’d left, I had never in my life been part of a team where I understood less of the code or infrastructure. While I had exposure to large swaths of the system, the opportunity to actually dig in and grok things was severely hampered by the pairing process. Combined with the cognitive overhead of constant pair switching (we switched between 1 and 8 times per day) and the need to simplify every single thought I had so it could be communicated verbally, there wasn’t the mental bandwidth left to think deeply about things, even if I’d been allowed to derail my pair long enough to dig into something I hadn’t yet understood. I could see how people who are satisfied with a superficial understanding of things might be misled into thinking that pairing reduces the bus factor, but I don’t think it’s actually the quality of understanding you’d want if people really did leave the team.
I had never in my life been part of a team where I understood less of the code or infrastructure
I had the same exact experience. Watching other people code is not how I learn.
This is incorrect. See my comment above.
If we extend the definition of pair programming to mean a situation where two people work together then the discussion will become meaningless as it’s obvious that collaboration is required in many (most?) cases in all industries. What we’re discussing here is a specific programming technique that was first advocated by proponents of Extreme Programming. You can read more on the Extreme Programming website. A few quotes:
There are other ways of reducing the bus factor without pair programming. The most obvious way is to let people work on more than one thing (not necessarily simultaneously).
There are others, sure, but as a method of spreading knowledge through a small group, 1:1 tutelage is pretty effective.
To be honest, I can’t believe that anyone who advocates pair programming has ever actually pair programmed. It is a dismal slog—even when you’re lucky enough to get someone you like as a pair. If you don’t, then it probably could constitute torture.
It can be fun if you are lucky enough for it to feel like two friends trying to crack a puzzle, but when it is two people who barely like each other being forced to work together then it probably sucks.
I’ve had the experience you’re describing but the issue with “two friends trying to crack a puzzle” is that it actually requires a puzzle for the description to be applicable. For most professional, relatively experienced, developers, even if they are working on very interesting problems in the large, sitting at a keyboard isn’t really puzzle solving time.
If I have some problem that’s conceptually difficult I might talk to a colleague about it, but I don’t need them to spell out the solution or watch me do the same in implementation. Imagining pair programming as joint puzzle solving is a bit like considering a lawyer spell-checking a document to be the same thing as legal advice.
I’d also add that sometimes it is a way to get past a hurdle when you are banging your head against a wall. Not to be done every time, just when you are stuck and need extra input and a sanity check.
These scenarios aren’t pair programming, though. These are normal collaboration. Pair programming is supposed to be day-in, day-out shared-screen collaborative coding.
Well, collaboration just happens. It’s a given. Every programmer who hits a snag collaborates (if they have teammates). Pair programming as a practice is different. If you’re not talking about the idea as espoused in XP or in Agile propaganda, you’re not really responding to it.
I guess I disagree with you, and also disagree with the OP. While I would personally consider myself an advocate of pair programming as an occasionally useful tool, the OP certainly doesn’t consider me an advocate:
let Ep = pair programming efficacy
let Es = solo programming efficacy
Proponents of pair programming claim that Ep>Es.
Certainly, I would not claim that. (In fact, I don’t think I’ve ever met anyone credible who has claimed something so rigid, so I wonder if the OP is engaged in a straw man.) What I would claim is that I have had the occasion to pair program with others (I have played both roles, the mentor and the mentee), and it has generally been a positive experience. I’d like to note though that it has always occurred under a mutual desire from both parties to actually pair program; I am certainly against some top-down force directing people into a certain style of work.
To give more context, I pair program very infrequently. It’s on the order of a few times (if that) per year.
I would also say that OP’s analysis seems quite incomplete to me. For example, the last time I pair-programmed with someone was on a bit of Javascript code. I know Javascript reasonably well, but the other person didn’t. We had a choice: I could either write the Javascript piece of the task, or the other person could write it. But the other person would take quite a bit longer, because they’d have to learn a bit of Javascript to do it. However, it was clear that in the future this other person would benefit from being able to make changes to the Javascript code, so perhaps the initial investment was worth it. Both of us saw this as an opportunity to learn together. I hooked my keyboard into his computer, and we were off with a well defined task. He could ask questions and ponder things in real time, since my full focus was on the task. The exchange of information was rapid, and I think we both enjoyed it. Needless to say, that person has since made several other contributions to that Javascript code without any further pair programming.
Could that person have just picked up the Javascript themselves? Yes! Would it have taken them longer? Maybe. Would it have taken so long that it was actually worth me stopping all my work and pair programming for an hour? I have no clue. That kind of micromanagement of efficiency seems hard to nail down in a precise way.
In fact, I don’t think I’ve ever met anyone credible who has claimed something so rigid, so I wonder if the OP is engaged in a straw man.
Certainly the following Martin Fowler quote, proclaiming that Ep > Es, shows my argument is not a straw man.
Fowler also says this:
Of course, since we CannotMeasureProductivity we can’t know for sure. My view is that you should try it and the team should reflect on whether they feel they are more effective with pairing than without. As with any new practice make sure you allow enough time so you have a good chance of crossing the ImprovementRavine.
Which kind of puts a damper on this entire enterprise. This statement, to me, feels like it makes it clear that Fowler isn’t intending to be dogmatic about this. With that said, his choice of words could be better!
The statement you quote there certainly has the feel of something pragmatic, but in practice my experience with nearly everyone in the “agile world” has been a feverish dogmatism toward pair programming. I suspect it’s born out of a lack of empathy for others, and a failure to comprehend that a practice that might be overwhelmingly positive for them could simply fail to work for other teams, but whatever the reason the discourse has always been universally that pairing (or mobbing) is The One True Way, and any objections are purely the result of The Practice being misapplied.
Capital-A Agile usually means practices being dictated by one or two people (typically none of the four principles in the manifesto are noticed). Pairing in an environment like that is an exercise in management control by ensuring you are closely watched at all times.
Interesting. Thanks for sharing. I guess if this thread has taught me anything, it’s to be on the look out for places that mandate this style of pair programming and probably try to avoid them. I certainly have experience with “dogmatic agile,” but I hadn’t really heard of this intense style of forced pairing before!
And the next time I comment on this matter, I will be sure to get the definition problem out of the way first. :-) I had no idea that “pair programming” was even jargon in the first place!
otherwise they would be advocating for pair programming all the time
My company does exactly that!
And good point about Martin Fowler.
otherwise they would be advocating for pair programming all the time
My company does exactly that!
Hmm… All right. Seems quite strange to me. I’ve never heard of such a thing! If that’s true, then yes, I agree that is quite unreasonable.
(I made a mess of my previous comment and edited it down to something more reasonable. Sorry about that.)
XP held that “all code sent into production is created by two people working together at a single computer.” Source here. With respect, I don’t think that if you only pair a “few times a year” you’re experiencing it as it works if executed as originally envisioned. Collaboration on hard problems is useful—even fun, if you’ve got a good colleague or two to work with. Pair programming, though, is not just that. It’s working on the same code, taking turns as the “driver” and the “observer,” either physically at the same computer or sharing your screen and microphone for hours every day.
Nobody bothered to actually define pair programming until now, but as this thread demonstrates, there is no obvious clear consensus on what the term means.
So if you say, “oh, I was using this definition of pair programming. What do you think about that?”, I would say, “I wouldn’t want to work under those conditions either.” And indeed, I have never experienced that sort of pair programming.
In any case, I was clearly wrong to call this post a straw man, since it seems people really do advocate for this stuff. But I’ve never experienced anyone use pair programming with the definition you provided. At work, when we say “pair program” we mean “a short period of intense collaboration on the same code might be helpful.”
It is a dismal slog
I’ve always thought of it like being on a chain-gang, splitting rocks in the desert.
I think it has its place - for juniors, or for particularly tough problems.
That said, I’ve only really enjoyed it with a few people I call friends, generally never enjoyed it with anybody I didn’t get along with anyway.
I agree with many of these, and there are a few I’m neutral on, but I think it’s a real shame to see so much momentum toward pair programming. I’ve done it before, and I will try very hard to avoid ever having to pair again as part of a regular workflow. I think it can be a fine way to help someone with a specific bug, or to get a new team member up to speed, but as a general practice I think it’s one of those things that can work for some people, and be a huge negative for others. There have been so many otherwise great teams that I’ve talked to who are doing interesting work, and I know I’ll never work with them because pairing is a big part of their culture.
I worked for about 9 months on a team of 8 that paired 100% of the time, I’ve also paired with a few other people here and there for workshops or because someone wanted some help learning something. In general the people that I’ve paired with have spent a lot of time pairing, and I believe that they know how to do it well, I just found it awful.
The end of my 9 month run with the team that paired left me so burned out I seriously considered completely leaving tech and going into academia or something. I’ve given it what I believe is about the best possible chance for me to get on board with it, and I just abjectly loathe it.
Edit for clarity: The team of 8 pair-switched between 2 and 8 times per day. There was a run of 1-hour pair switching, but for the most part it was a mandatory morning and after-lunch pair switch, along with a pair switch whenever the team transitioned from development to deployment, so I regularly paired with everyone else on that team.
Erlang’s syntax is a weakness. Almost nobody looks at the Erlang syntax and falls in love with it at first sight. No, it takes time to learn it and understand how good it is. You need to sell Erlang to people without showing the Erlang syntax. If you do show it, then you need to hide the parts that feel alien. Function calls are OK. Recursion, not so much. Maps are OK. Records, not.
It’s too bad Erlang syntax is such a sticking point for people. Despite looking funny, Erlang syntax is a huge strength. It takes all of 1 hour to grok almost entirely. In this sense, Elixir is a step backwards. All the syntax that attracts the “oh, it looks like Ruby” crowd complicates the language and makes it harder to learn. Unfortunately, I think the software industry is too immature (is any industry mature enough?) to admit people like certain syntax just because it makes them feel good, rather than because it fulfills the utilitarian function of writing correct programs.
[Comment removed by author]
My comparison is Erlang vs Elixir, though. Not Elixir vs Ruby. Being less insane than Ruby is not a challenge, but Elixir is still more complicated than Erlang.
It does all this while acting as a strong selling point for new adopters.
This is exactly my point, though. The Erlang syntax is weird and that seems to be enough to dissuade adopters, despite it being quite utilitarian.
It certainly doesn’t seem to be making anything harder to learn.
In a world where Python and Ruby are considered easy, I do not believe this is a powerful statement.
I think that Erlang’s syntax has some warts even if you’re strictly comparing it to other languages with fairly lightweight syntax. In particular, whenever I use Erlang I find myself mentally comparing it to ML-family languages and wishing that the syntax was similar.
I think the software industry is too immature (is any industry mature enough?) to admit they like certain syntax just because it makes them feel good rather than fulfilling the utilitarian function of writing correct programs.
Absolutely agree. Programmers believe their job is to write code, when in actuality it is to write only as much code as necessary.
We have a long way to go.
Forced collaboration is counter-productive almost always.
Two observations on this. First, there’s a spectrum between the introverted and extroverted approaches to work, and most people do best in the middle. At Bell Labs, people who kept their doors open were less productive in the short term, but more productive in the long term, than those whose doors were always closed. You get more done per hour, perhaps, when your door is closed, but it’s important to know what others are working on and where fruitful results might be found. Of course, the software industry has swung the pendulum to the other extreme, with open-plan offices (cough, back-door age discrimination) where it’s impossible to get anything done. It seems that the only thing open-plan startup offices/cultures generate is sexual harassment claims, and I think we’d all agree those are a net negative.
Having the option to retreat into a silent space is important. That said, practical research is often more of a team sport than an individual one. Actually, this is even true in theoretical pursuits like pure math. Otherwise, the concept of the Erdos Number wouldn’t have much meaning. Of course, all of this collaboration is organic and voluntary, and that ought to be the key observation.
Second, the software industry has some terrible people in it, especially at the executive ranks. Why is this relevant? Because the evildoers don’t want programmers to be more productive, but more replaceable. Hence, the drive to turn software engineering, which used to have an R&D flavor, into ticket-shop Scrum work where non-technical people called “stake holders” (presumably because programmers are vampires) call the shots. The culture where (to use OP’s on-the-mark words) “Everyone must be available to everyone, all the time,” isn’t about productivity but about exerting a fascist degree of control. And, of course, a malevolent manager striving to turn programmers into fungible commodity workers would want a culture where there’s (again, using OP’s on-the-mark phrasing) “no ownership of work”. Not all of that is an accidental accommodation of extroverts; some of it’s malicious.
The problem, in dealing with management practices like required pair programming, is that you have to know which sort of managerial impulse you’re dealing with. You might have the well-intended, benevolent manager who sees that pairing can have its uses, but who takes it too far and makes it 4 hours of pairing per day instead of 1-2. That requires a diplomatic approach. Or, you might have the malevolent manager who’s trying to make engineers replaceable, in which case the best option is probably to change jobs.
I agree with you 100%.
You might have the well-intended, benevolent manager who sees that pairing can have its uses, but who takes it too far and makes it 4 hours of pairing per day instead of 1-2. That requires a diplomatic approach.
I pair-program 7 hours a day, 5 days a week. Not only that, we rotate once per week. It’s like it was designed to burn us out.
Or, you might have the malevolent manager who’s trying to make engineers replaceable, in which case the best option is probably to change jobs.
Yes, and yes
I had the same “always pairing” situation and found it drained my skill and energy to the point that I was barely able to accomplish anything. I’ve recovered… but get out while you can.
My last job had us pairing 7-8 hours per day with a pair switch every 1-4 hours. There were other issues there as well, but the pairing alone was enough, after 9 months, to make me seriously consider leaving software altogether.
the pairing was enough after 9 months to make me seriously consider leaving software altogether.
You’d be in good company. “Agile” methodologies, forced pairing, and open-plan offices are a big part of why our industry seems to eject from the top, starting in the early 30s and finishing around 40.
Pairing is sometimes useful but in limited doses. To me, forced, constant pairing suggests that management wants people to be constantly watched and is trying to set up a fuck-your-buddy system where slackers (real or perceived) get found and ratted out by teammates, saving management the burden of actually, um, managing. “Agile” and everything-must-be-in-Jira work the same way: create a system where the team rats out its own people.
The proletarianization of software engineering is both depressing and a bit dangerous to the survival of the industry. I’ve come to the conclusion that for corporate software engineering, the ghettoization is both harmless and inevitable. Line-of-business coding can afford to lose the over-30, top-5-percent programmers because it doesn’t need us in the first place. However, there’s a lot of software where correctness and performance actually matter, and driving out the most conscientious and knowledgeable people is really, really bad. I have a friend at a company (details omitted) doing something that all of us would call “real technology” and the sloppy, Silicon Valley, Agile/Scrum approach is used even there… and, in that particular case, it could get people killed.
Over the years I’ve come to the conclusions that business software is a lost cause. The culture of mediocrity and performance surveillance (which is necessary if you’re letting in cut-rate boot-camp grads with 3 weeks of training) is permanently entrenched and it won’t go away. The right move for us older, conscientious people is to go back into research and science, even if that means giving up the peak-bubble salaries, free dinners, and expensive holiday parties.
I believe in the case of this team the constant pair switching and focus on hyper-local maxima was all about driving a hivemind. The constant pair switching amplified every cultural clash and every difference of opinion or ideals, and ultimately drove out anyone who refused to conform. I came into the team with far more experience in some key areas of the project than anyone else on the team, and yet was completely blocked and unable to make any effective positive changes, even in the face of ample evidence that the existing approaches were failing. Rather than doing things the right way, they doubled down on pair switching even more, as though it would increase the synaptic response of this siphonophoric team. This singular being was unable to comprehend an outsider, or act on advice that came from outside of its colony. The futility of trying to make even a modicum of progress on things that were just absolute garbage-fire codebases, along with the rampant sexism that was amplified through the style of work, was just incredibly draining.
All that said, there’s just too much that’s compelling about software for me to be able to leave; I’ve gained a new appreciation for what kinds of teams I won’t work for, and I’ve been invigorated to help teach and mentor people in better, less systemically toxic ways of creating software.
I believe in the case of this team the constant pair switching and focus on hyper-local maxima was all about driving a hivemind.
A lot of “Agile” is a way for management to get programmers to rat each other out. The problem with hyper-literal people (like many engineers) is that they actually believe in objective job “performance” and that the people who are disengaged, ignored, or just unlucky are genuinely stupid, irredeemable people.
For example, when Goldman Sachs rolled out a peer review system, salespeople and traders gave each other all top marks while engineers slammed each other. That shouldn’t be surprising. Only engineers would be hyper-literal/socially-stupid enough to believe that there could be any positive benefit in ratting each other out to management.
The purpose of the constant “little brother” surveillance (constant, forced pairing with frequent changes of forced pairs) is to get to an arrangement where people feel pressured by each other to “perform” (cough conform) and management doesn’t have to do any work. This way, the business gets (in the short term) the most out of everyone while investing nothing in its employees.
The futility of trying to make even a modicum of progress on things that were just absolute garbage fire codebases
Yes, this style of management results in terrible code. It’s a disaster in the long term, but most software managers expect to be promoted away from the mess before that matters.
along with the rampant sexism that was amplified through the style of work
Oh yes. Most of us have no idea how much racism and sexism there is in these open-plan offices. I’m a white male so I don’t experience it, but I do something that most corporate software engineers don’t do, which is talk to people who are different from me.
To an extent, it’s deliberate. I don’t know that the Powers That Be in your office (or in the typical software office) have any ideological love for sexism per se, but they’re all about divide-and-conquer tactics. Unfortunately, engineers tend to be very easy to play against each other.
I think the reason is that all of them recognize the gap between their intellectual capability and the demands of the work– let’s be honest: corporate software isn’t that hard and any idiot can do it; and that’s not true of the math-heavy R&D work you were trained to do if you got a serious STEM degree– and therefore assume an implicit extreme superiority. They are correct in observing that they’re smarter than typical corporate software work demands they be; they are incorrect in believing that this matters or gives them an advantage. At any rate, this leads the hyper-literal/socially-gullible engineer to conclude, “I’m so much smarter than these idiots”, and that prevents anything like a union from forming.
This is not a technical question, though it is a decent set of questions.
Do a work sample with an objective rubric and take feedback based on the performance of hires and non-hires.
It’s amazing that, “ask someone to do their job under normal circumstances and evaluate the result” isn’t the norm.
It isn’t really feasible unless your company has huge resources.
Many people have a job already, and getting to “normal circumstances” needs a ramp-up time; their performance will be influenced by that. Some developers are not good under stress (some excel under stress!). Some feel like this is an exam situation, which a non-trivial portion of people have an unreasonable fear of. Also, interviews conducted by untrained engineers tend to be biased towards their own knowledge and interests, not the company’s.
All this is incredibly hard to set up for small to medium companies, as it needs a dedicated hiring process, someone who owns it, and trained interviewers.
I do enjoy having some technical part in an interview situation, but let’s not pretend it comes anywhere close to predicting the performance they will have at the desk later.
The problem, I think, is that it’s very difficult to create a work sample that fits any of the criteria you’ve listed: normal job, normal circumstances, and fairly evaluated.
The normal-job criterion fails because every job is going to be a bit different, and in most cases a job involves working with peers in an environment that you’ve had a hand in building and been able to spin up on. Because you’re in an interview situation, the people you’re working with are evaluators, not peers, which changes the power dynamic. The environment is also going to be unfamiliar, and you won’t have been in a position to spin up on the tech or culture being used. For more senior people this may present less of a problem, because they should have a large enough breadth of experience and enough adaptability to navigate unfamiliar environments and languages, but for junior and mid-level people an interview-length project (even on the order of a 1-2 week take-home) isn’t going to be enough time.
The circumstances are also going to be different. The power differential noted above aside, there’s the problem that most people are already working a job when they go looking. The extra hours and severe context switch of going from day-job to interview-project are going to be a pretty significant deviation from the way a person would normally work.
Finally, the fair evaluation. I’m assuming when you say “evaluated” you mean some sort of fair and as-objective-as-possible evaluation. This is going to be hard because it will strongly favor candidates who are most like the existing team. Leaving aside personal implicit bias regarding race, gender, age, etc., and assuming we’re talking about a black-box project: because of the factors above related to the timeframe, availability of peers, lack of contextual background, etc., you’re going to be strongly biased toward people who are already working in environments like yours, with your technology stack. This can be a good thing if you need to scale up your team quickly, but at a rather severe cost to diversity of experience with different tech stacks, languages, backgrounds, etc. In the long term you’ll be hobbling your team’s effectiveness by tending toward a monoculture of people who passed the interview thanks to a predisposition to thinking the way your team already thinks.
It’s also pretty reasonable to expect a 3-6 month “settling in” period for development work. You might know $TECHSTACK cold, but you don’t know our implementation, our idioms, or our quirks at all. Just chucking a real-ish problem at someone and expecting them to solve it probably isn’t going to be a good sample.
Honestly, the best approach is to do a few short interviews to establish: 1) they’re not a con artist, 2) they’re not a lunatic, then bring them on as contract-to-hire to see if they work out.
People who work like this might be able to get ahead for a while in their career, but only by exploiting people who haven’t been burned by people like them before. The article makes it sound like he’s optimizing to be an amazing developer, but the truth is that, by forgoing so much actual experience with people and being so entirely hyperfocused on the singular idea of being a developer, he’s stunted himself mentally.
Great developers, and great people of any discipline, need to be well rounded, grounded in the world, and attached to things in order to work effectively. How is a developer like that ever going to build a product for customers if he’s so detached that he has no idea who he’s building products for or what world they live in? How is he going to be an effective leader or architect if he’s so unable to connect with the people that he’s working with that he can’t effectively relate to and manage them?
I’ve spoken and worked with a lot of people who are astoundingly good at software and computer science, and the ones who have really truly been great have been grounded in something. Some of them were assholes, all of them had passion, but none of them had a singular focus that occluded everything else in the world around them.
I run the local Haskell user group in my city, and I’ve spent a lot of time there, at work, and in my personal life trying to help people come to grips with haskell and pure FP in general, and this presentation is something that people should really take to heart. The number of people who come to meetups and talk to me about how hard a time they are having getting over monads is enough that some of us have even started referring to a person’s general level of familiarity with the language as whether or not they’ve crossed the monad gap (and I think that moving on and focusing on other things successfully counts as crossing that gap).
To take a broader look at the problem though, I think that a lot of people have trouble with the mathematical language that tends to be used when talking about haskell. Much like educated people used to use Latin, the language of category theory and abstract algebra permeates discussions of haskell and at times feels like it’s being used entirely to keep people out of the loop. I know that a lot of people involved in haskell are academics: either actively in academia, or so steeped in its culture that they still communicate as though they were writing papers. It just starts to feel like a club sometimes. I had no background in higher-level math when I started learning haskell, and I eventually managed to learn and start to grok category theory and abstract algebra through sheer determination, force of will, and the generous helping of free time that comes from being early in your career, single, and introverted.
I think haskell is a pretty good language. It’s not perfect by any means, but it’s in the list of languages that exist right now that I’d prefer to work with for most things, and it’s a much harder sell when so few people are comfortable working in it. I think as a community we’d do well to try to make the language more accessible. The underlying math is interesting and important, but not for everyone, not all the time, and I think it needs to be less front-loaded into what people are told they need to learn.
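To make the less-front-loading point concrete, here’s the kind of toy example I use at meetups (the names are mine, purely illustrative): using Maybe’s Monad instance is just “chain computations that may fail”, and no categorical vocabulary is needed to use it productively.

```haskell
import Text.Read (readMaybe)

-- Parse two numbers and divide; any failure short-circuits to Nothing.
-- You can write and read this without knowing what a monad "is".
safeDiv :: String -> String -> Maybe Double
safeDiv xs ys = do
  x <- readMaybe xs
  y <- readMaybe ys
  if y == 0 then Nothing else Just (x / y)

-- safeDiv "10" "4"  == Just 2.5
-- safeDiv "10" "0"  == Nothing
-- safeDiv "ten" "4" == Nothing
```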
The underlying math is interesting and important
If by underlying math you mean category theory, then I disagree. I never use category theory in my research, it is rarely used in the research results I feel are important, and I don’t think in terms of category theory when programming in Haskell. Even monads, the shining star of applying category theory to PL, I feel are better understood in terms of their operational analogue to delimited continuations.
The OCaml community is much more sane about this.
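For what it’s worth, the continuation reading can be sketched directly in Haskell. This is the standard definition of the (non-delimited) continuation monad, shown only to illustrate that bind is ordinary CPS plumbing rather than anything categorical:

```haskell
-- A computation that, given "the rest of the program" (a -> r), produces r.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont m) = Cont (\k -> m (k . f))

instance Applicative (Cont r) where
  pure a = Cont (\k -> k a)
  Cont mf <*> Cont ma = Cont (\k -> mf (\f -> ma (k . f)))

instance Monad (Cont r) where
  -- bind = "run m, hand its result to f, continue with k": pure CPS.
  Cont m >>= f = Cont (\k -> m (\a -> runCont (f a) k))

-- (20 + 1), then the final continuation (* 2) is applied: 42.
example :: Int
example = runCont (pure 20 >>= \x -> pure (x + 1)) (* 2)
```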
I happen to prefer ML to Haskell, and agree that the importance of category theory in everyday programming has been oversold. However, to be fair, there are some virtues to thinking denotationally (monads) rather than operationally (continuations). For example, the virtues of call-by-push-value were immediately clear to me when I saw its categorical semantics: An adjunction between the categories of value and computation types, where the former admits nice universal constructions (direct sums, tensor products) and the latter admits nice couniversal constructions (direct products). The adjunction itself gives you function types. Considering just the category of value types (and the monad on it) gives you (a denotational semantics for) call-by-value. Dually, considering just the category of computation types (and the comonad on it) gives you (a denotational semantics for) call-by-name. I have no idea how I would have even begun to understand this from a purely operational point of view.
I have no idea how I would have even begun to understand this from a purely operational point of view.
If you are unable to explain the significance of this result in operational terms then I have to doubt its relevance to PL. Ultimately we work in the domain of machines; a denotational semantics is interesting to us as computer scientists when we can apply results stated in terms of other mathematical domains to our domain. I see a denotational semantics that gives insight into category theory as primarily interesting to category theorists.
An example of denotational semantics I find interesting would be the recent work on semantic subtyping. Normally subtyping is presented axiomatically as a series of inference rules. In what sense are these axioms “complete”? For example, this paper presents a subtyping relation, but you can’t prove ∀a.() → a ≤ () → ∀a.a in it. Are we justified in adding the distributivity axiom, and are there other axioms that we should add? By interpreting types as their obvious set-theoretic counterparts we can answer these questions definitively.
Of course, ultimately a programming language has to be implementable on a machine, so it must have an operational semantics. However, I prefer to regard the operational semantics as the final product of the language design process. The starting point is elsewhere.
A programming language that is useful to human beings can’t be defined by arbitrary rules for pushing symbols, otherwise we end up with monstrosities like null references, template specialization, superclass linearization, etc., whose sole justification is “well, it works that way”. In a sensibly designed language, the symbols mean something to us, and it’s this meaning that justifies the use of this language over that other one. So the language designer must have in mind some collection of intended denotations, and use that as a yardstick for evaluating his or her work.
However, I prefer to regard the operational semantics as the final product of the language design process. The starting point is elsewhere. […] In a sensibly designed language, the symbols mean something to us, and it’s this meaning that justifies the use of this language over that other one.
Correct me if I am misunderstanding you, but it sounds to me like if you take this argument to its logical conclusion, you are arguing for language design based on mathematical aesthetics. I disagree with this perspective. The value of a denotation is derived from its ability to model desirable operational phenomena, not the other way around. I advocate functional programming because it’s easier to reason about the operational behavior of programs when there are no “hidden variables”, to borrow a physics term. If mathematical functions didn’t capture the essence of this operational requirement, then I’d pick something else.
To pick one of your examples, null is bad because it is a member of every type whether you want it to be or not, which limits the ability of the programmer to constrain the operational behavior of their program through the type system, which leads to bugs. Or in other words, null is evidence that your type system is insufficiently expressive. Whether or not some categorical construction naturally accounts for null is irrelevant to the badness of null. On the contrary, if that construction leads one to advocate for null, then that is evidence that that construction would be a poor foundation for a programming language.
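Concretely (a trivial sketch, the names are made up): in a language without null, absence has to be opted into at the type level, so the failure mode is visible in the signature rather than lurking in every type.

```haskell
-- A hypothetical lookup table; the point is the signatures, not the data.
phoneBook :: [(String, Int)]
phoneBook = [("alice", 1234), ("bob", 5678)]

-- Possible absence is expressed with Maybe; a plain Int can never be "null",
-- so the type system constrains the operational behavior.
findExt :: String -> Maybe Int
findExt name = lookup name phoneBook

-- findExt "alice" == Just 1234
-- findExt "carol" == Nothing
```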
Let’s connect this with your CBPV example earlier.
An adjunction between the categories of value and computation types, where the former admits nice universal constructions (direct sums, tensor products) and the latter admits nice couniversal constructions (direct products). The adjunction itself gives you function types. Considering just the category of value types (and the monad on it) gives you (a denotational semantics for) call-by-value. Dually, considering just the category of computation types (and the comonad on it) gives you (a denotational semantics for) call-by-name.
Without establishing the operational consequences this statement has, it is meaningless. Maybe I could model a competitor to CBPV with semisimple rings and Hilbert spaces. How would you compare and contrast these evaluation strategies without an appeal to operational semantics?
whose sole justification is “well, it works that way”
One way to justify an operational semantics is by connecting it with an intuitive denotation. But that isn’t the only way. Another way is by directly stating the mathematical properties you want the operational semantics to have. You can justify your choice of call-by-value evaluation by proving that the cost semantics for your language is compositional. You can justify the type system you chose by proving that the operational semantics does not admit undefined behavior (the classic “progress + preservation” theorem). Etc.
Correct me if I am misunderstanding you, but it sounds to me like if you take this argument to its logical conclusion, you are arguing for language design based on mathematical aesthetics.
You’re not wrong.
The value of a denotation is derived from its ability to model desirable operational phenomena, not the other way around.
Sure. I didn’t mean to suggest that I begin with a fully formed denotational semantics and use that to come up with the operational semantics of a new language. Rather, I think denotational considerations provide useful guidelines for language design. More on this later.
Without establishing the operational consequences this statement has, it is meaningless.
The operational consequence is that call-by-push-value is very finicky about the distinction between a value and a computation that produces a value, and dually, between a computation and a thunk that, when forced, produces this computation. This gives the programmer maximal control over when computation happens, as opposed to either call-by-value or call-by-name, which come with implicit assumptions about when computation happens. CBV and CBN can be embedded into CBPV by explicitly using these assumptions in the appropriate places, and, furthermore, CBPV’s type system will tell you what these appropriate places are.
In retrospect, that probably wasn’t so hard to understand coming from an operational approach. But for some reason I found it easier to come at it from a denotational approach first.
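A very rough sketch of that operational reading (this is not a faithful CBPV formalization, just an illustration with explicit thunks; in real Haskell everything is already lazy, so take the encoding with a grain of salt):

```haskell
-- Explicit suspension: a computation whose timing we control.
newtype Thunk a = Thunk (() -> a)

thunk :: a -> Thunk a
thunk x = Thunk (\_ -> x)   -- suspend

force :: Thunk a -> a
force (Thunk f) = f ()      -- resume

-- CBN-style: the argument arrives suspended, forced only where used.
cbnConst :: Thunk a -> b -> b
cbnConst _ y = y            -- never forces its first argument

-- CBV-style: force at the call boundary, then pass the value.
cbvApply :: (a -> b) -> Thunk a -> b
cbvApply f t = f (force t)

-- cbnConst (Thunk (\_ -> error "boom")) 5 == 5   (the error never runs)
-- cbvApply (+ 1) (thunk 41) == 42
```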
Another way is by directly stating the mathematical properties you want the operational semantics to have.
In a previous conversation with you, I mentioned that I prefer languages that don’t gratuitously provide type formers that break isomorphism invariance. (In fact, lately I have entertained the idea that a good definition of “effect” is “anything that breaks some equational law”.) How would I state the property that a type former is isomorphism-invariant, if not first in categorical language?
In a previous conversation with you, I mentioned that I prefer languages that don’t gratuitously provide type formers that break isomorphism invariance. (In fact, lately I have entertained the idea that a good definition of “effect” is “anything that breaks some equational law”.) How would I state the property that a type former is isomorphism-invariant, if not first in categorical language?
If I’m understanding what you’re saying, then it’d look something like T : ⋆ -> ⋆ is an isomorphism-invariant type former if for all types A and B, A ~ B implies T A ~ T B. There are a couple notions of type isomorphism that aren’t quite equivalent, but for the sake of completeness you could define A ~ B as there exist two terms f : A -> B and g : B -> A such that f . g ≡ id and g . f ≡ id where ≡ is contextual equivalence at the appropriate type.
If that sounds overly categorical it is because the property you desire relies essentially on categorical concepts. The question then is why you chose this to be the property that you want to hold. It isn’t immediately obvious to me that this property implies that GADTs are well behaved, nor is it immediately obvious to me that references don’t satisfy this property (although I believe it because references are tricky). As in the previous discussion the property I would have stated is that the language should not include a mechanism for deciding type disjointness. One can justify this in a couple of ways: first, disjointness is not closed under abstraction, so for GADTs to work in full generality one would need to add disjointness predicates, similar to how Gaster-Jones style record systems allow abstraction over disjoint name predicates.
Second, unlike names, there is no obvious “base case” for disjointness. If you’re working in a lambda calculus, it may be that all types are ultimately built from -> constructors (or at least the semantics provides the illusion of this for clarity/ease of formal manipulation). For this concept to work in full generality mandates that you have a mechanism for “hard” type generativity à la Haskell newtype, as opposed to what I would call “soft” type generativity that essentially relies on parametricity à la ML. And I consider newtype to be a hack that forces programmers to endlessly litter any code whose implementation details they wish to hide with operationally insignificant injections and projections, so I consider any feature that suggests the addition of newtype suspect.
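To illustrate the “litter” complaint with a toy example (UserId is a made-up name): every operation on the hidden representation has to project out and re-inject, even though the coercions do nothing at runtime.

```haskell
-- "Hard" generativity: UserId is disjoint from Int by fiat.
newtype UserId = UserId Int deriving (Eq, Show)

-- Project, compute on the representation, re-inject. The wrapping is
-- operationally insignificant but syntactically mandatory.
nextUserId :: UserId -> UserId
nextUserId (UserId n) = UserId (n + 1)

-- nextUserId (UserId 41) == UserId 42
```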
You can use this same reasoning to argue against Racket-style intersections and unions, although I would argue against these more directly by appealing to parametricity.
If you were able to immediately deduce all of the implications I stated above from that isomorphism invariance property, then maybe I should give category theory another shot.
There are a couple notions of type isomorphism that aren’t quite equivalent, but for the sake of completeness you could define
A ~ B as there exist two terms f : A -> B and g : B -> A such that f . g ≡ id and g . f ≡ id, where ≡ is contextual equivalence at the appropriate type.
In my mind, the right notion of equivalence between two abstract types is that no client program can possibly distinguish between them. That is, if a client program that uses one abstract type is consistently modified to use an equivalent one, then the behavior of the client program remains unchanged. I will call this property “weak equivalence”.
Unfortunately, weak equivalence is difficult to establish in the general case. To simplify the task, I pay attention to the special case when a weak equivalence of abstract types is witnessed by computable mappings (inside the programming language) back and forth between their internal representations, such that “transporting” a value from either abstract type to the other (using the appropriate mapping) doesn’t change the observations that can be made from it. It follows that composing the mappings in either order gives a contextual equivalence at the appropriate abstract type. I will call this property “strong equivalence”.
Since strong equivalence implies weak equivalence and is so much easier to establish, I want to milk strong equivalences for all they’re worth. 90% of the benefit for 10% of the cost is a truly awesome deal.
If that sounds overly categorical it is because the property you desire relies essentially on categorical concepts. The question then is why you chose this to be the property that you want to hold.
When I implement an abstract type, I often want to consider several possible representations, even if in the end I will only use one, because different operations of an abstract type might be easier to implement using different representations. Using strong equivalences, I can “patch together” several incomplete abstract type implementations into a single complete (if crude) one, which can then be optimized by rewriting it in meaning-preserving ways.
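As a toy illustration of the patching idea (a FIFO queue, with names of my own choosing): each operation is written against whichever representation makes it easy, and the strong-equivalence mappings transport values between representations.

```haskell
-- Two representations of the same abstract FIFO queue.
newtype ListQ a   = ListQ [a]            -- simple but slow
data TwoStackQ a  = TwoStackQ [a] [a]    -- front stack, back stack

-- Computable mappings witnessing the (strong) equivalence.
toListQ :: TwoStackQ a -> ListQ a
toListQ (TwoStackQ front back) = ListQ (front ++ reverse back)

fromListQ :: ListQ a -> TwoStackQ a
fromListQ (ListQ xs) = TwoStackQ xs []

-- push is easy on the two-stack representation...
push :: a -> TwoStackQ a -> TwoStackQ a
push x (TwoStackQ f b) = TwoStackQ f (x : b)

-- ...pop is easy on the list representation, so implement it there
-- and transport the result back.
popL :: ListQ a -> Maybe (a, ListQ a)
popL (ListQ [])       = Nothing
popL (ListQ (x : xs)) = Just (x, ListQ xs)

pop :: TwoStackQ a -> Maybe (a, TwoStackQ a)
pop = fmap (fmap fromListQ) . popL . toListQ
```

Because transporting doesn’t change the observations a client can make, the crude patched-together queue can later be optimized (e.g. a direct two-stack pop) in meaning-preserving ways.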
It isn’t immediately obvious to me that this property implies that GADTs are well behaved, nor is it immediately obvious to me that references don’t satisfy this property (although I believe it because references are tricky). As in the previous discussion the property I would have stated is that the language should not include a mechanism for deciding type disjointness.
Right, in our previous conversation, you convinced me that the real problem is the ability to answer type disjointness questions, not GADTs themselves. Alas, the only two implementations of GADTs I’ve seen so far, namely Haskell’s and OCaml’s, both rely on the ability to answer type disjointness questions (with something other than “dunno”).
One can justify this in a couple of ways: first, disjointness is not closed under abstraction, so for GADTs to work in full generality one would need to add disjointness predicates, similar to how Gaster-Jones style record systems allow abstraction over disjoint name predicates.
This objection isn’t strong enough. You could say “if you need disjointness predicates, well, add them!”
Second, unlike names, there is no obvious “base case” for disjointness. If you’re working in a lambda calculus, it may be that all types ultimately are built from -> constructors (or at least the semantics provides the illusion of this for clarity/ease of formal manipulation). For this concept to work in full generality mandates that you have a mechanism for “hard” type generativity a-la Haskell newtype, as opposed to what I would call “soft” type generativity that essentially relies on parametricity a-la ML.
Yes, this gets to the heart of my objection. If we take isomorphism-invariance seriously, it follows that, ignoring efficiency considerations, the language’s semantics shouldn’t distinguish between “hard” and “soft” generativity in the first place. Isomorphic means equal modulo injections and projections in the right places. If we can make different observations from types meant to be isomorphic, something went wrong.
If you were able to immediately deduce all of the implications I stated above from that isomorphism invariance property, then maybe I should give category theory another shot.
Unfortunately, being neither a computer scientist nor a mathematician, just a layman programmer, I lack the necessary sophistication to make a solid enough case to convince you.
OK, I better understand what you are getting at. I agree that my definition of isomorphism-invariance isn’t what you want. I think the stumbling block for me is that I understand isomorphism in terms of the abstract algebra definition of bijection + preserving structure and in this case didn’t understand what structure you were trying to preserve–the definition I gave above is more-or-less the direct transliteration of (my understanding of) the category theoretic definition of isomorphism.
My new understanding of this is that you want the type system to be “monotonic.” If you’re unfamiliar with this term, it means that, given some subtyping/subkinding relation, Γ ⊢ e : t and Γ' ≤ Γ together imply Γ' ⊢ e : t' for some t' ≤ t (where subtyping/subkinding is extended pointwise to contexts). If monotonicity doesn’t hold, then the notion of backwards compatibility for APIs becomes difficult to nail down. Haskell’s core language isn’t monotonic for a number of reasons, and there isn’t a formal subtyping relation defined on modules to begin with (I’m not even sure it’s possible to do at module granularity because of orphan type classes).
Generally we want a type definition to be in a subkinding relation with an abstract type. That is, if the language has singleton kinds, S(t : k) ≤ k. It is impossible to consider Haskell-style data and newtype in this way without breaking monotonicity: coverage checking for GADTs, type family pattern matching, and type class canonicity all crucially rely on the type system being non-monotonic. (One has to be careful with the statement of this fact because newtype also allows recursive types & wrapping impredicative types. These have structural equivalents, so we could say something like S(StructuralEquivalent(t) : k) ≤ k.)
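A small GHC example of the coverage-checking point (just an illustration): the empty case below is accepted as exhaustive precisely because the compiler can decide that Int and Bool are disjoint, so it knows Equal Int Bool has no inhabitants.

```haskell
{-# LANGUAGE GADTs, EmptyCase #-}

-- The classic equality GADT: the only constructor forces a ~ b.
data Equal a b where
  Refl :: Equal a a

-- Usable evidence: matching on Refl refines a to b.
coerceEq :: Equal a b -> a -> b
coerceEq Refl x = x

-- The empty case needs no patterns only because GHC's disjointness
-- oracle rules out Refl :: Equal Int Bool.
absurdEq :: Equal Int Bool -> x
absurdEq e = case e of {}
```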
Unfortunately, being neither a computer scientist nor a mathematician, just a layman programmer, I lack the necessary sophistication to make a solid enough case to convince you.
FWIW I am enjoying our conversation, so I’m sorry if it comes across as otherwise.
My new understanding of this is that you want the type system to be “monotonic.” If you’re unfamiliar with this term, it means that, given some subtyping/subkinding relation, Γ ⊢ e : t and Γ' ≤ Γ together imply Γ' ⊢ e : t' for some t' ≤ t (where subtyping/subkinding is extended pointwise to contexts).
Monotonic as in “monotonic logic”? Aha, yes! I start with two types from which I can make equivalent observations. Then I define a bunch of GADTs, type families, class instances, whatever, and, voilà, I can no longer make equivalent observations from those types.
If monotonicity doesn’t hold, then it makes the notion of backwards compatibility for APIs difficult to nail down.
As a layman programmer, what I observe is that it hurts modularity. Adding a new definition “here” (without syntactically altering existing ones) destroys a property established and used “there.” It is difficult for me to understand that Haskellers of all people would be fine with this situation.
FWIW I am enjoying our conversation, so I’m sorry if it comes across as otherwise.
No problem, I am actually enjoying this too! :-)
The value of a denotation is derived from its ability to model desirable operational phenomena, not the other way around.
Are you saying the requirements of programming are operational? That’s not my interpretation. People don’t run programs to make machines do things; they run programs to answer their questions (the original case was decrypting encrypted messages AIUI). Even in a case where a program is automating, like, a warehouse or something, I usually find it more helpful to think of this as the operator (or machine) asking “what should I do now?” I mean, if we’re talking about mathematics, the original use cases were things like land surveying, which is operational in a sense (and in the same sense that most programming is, as I would see it). I think mathematicians have been solving similar problems to those that programmers solve, for longer, and so mathematics should inform programming language design.
Another way is by directly stating the mathematical properties you want the operational semantics to have. You can justify your choice of call-by-value evaluation by proving that the cost semantics for your language is compositional.
Isn’t that still a mathematical style of modelling - the kind of thing that category theory would be useful for?
I think there is some confusion wrt the terminology I am using. By “operational” I am referring to operational semantics, a way of formalizing a programming language by describing how programs in the language are executed. The alternative to an operational semantics is a denotational semantics, which formalizes a programming language by associating its syntax with some other mathematical construction, like a set, or a function, or a category. Both are important; we are just disagreeing about which “comes first” or “has primacy” (not really sure how to put it, I’m sleepy)
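To make the distinction concrete, here is a toy expression language given both kinds of semantics (a standalone sketch; the language and all names are made up):

```haskell
-- A micro expression language: integer literals and addition.
data Expr = Lit Int | Add Expr Expr

-- Denotational style: map syntax directly to a mathematical
-- object (here, simply an Int), compositionally.
denote :: Expr -> Int
denote (Lit n)   = n
denote (Add a b) = denote a + denote b

-- Operational (small-step) style: describe how an expression is
-- executed, one rewrite at a time, until a literal remains.
step :: Expr -> Maybe Expr
step (Lit _)               = Nothing
step (Add (Lit m) (Lit n)) = Just (Lit (m + n))
step (Add a b)             = case step a of
  Just a' -> Just (Add a' b)
  Nothing -> fmap (Add a) (step b)

evalOp :: Expr -> Int
evalOp e = case step e of
  Just e' -> evalOp e'
  Nothing -> case e of
    Lit n -> n
    _     -> error "stuck term"
```

For this tiny language the two agree on every program, and proving that agreement (adequacy) is exactly the kind of result that connects the two styles.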
It’s been quite the opposite historically. I’m not sure where Lambda Calculus fits in denotational vs operational. The Turing Machine & Von Neumann Architecture appear more like operational semantics. Computers were designed to be equivalent to them. Programming languages were as well with them gradually becoming more high-level. Most of what we have today was built with experimental iteration that didn’t involve formal methods or even formal specs. Most programming is similarly “perform these steps to solve the goal.”
Operational semantics has ruled the day in both languages and computers since they began. Possibly why the early proof efforts on programs used operational semantics.
Any programming language can be given an operational semantics, and lambda calculi are no exception.
People have been at least attempting both approaches for most of programming history I think. The Church-Turing equivalence was surprising to some at the time partly because it showed that the more “mechanical” way of thinking of computation and the more “mathematical” way were equivalent, while it wasn’t entirely obvious beforehand that this would be true.
In design of “real” PLs, I think the 1950s projects of Algol vs. Lisp present some of the same flavor. By 21st century standards 1950s Lisp is not all that high-level and abstract (I mean, car and cdr are named after register-twiddling), but compared to what was going on at the time, the Lisp people at least saw themselves as trying to go the other direction. Rather than starting bottom-up from hardware and abstracting some structure out of that gradually, start top-down from something more like mathematical functions, and find a way to implement them on the computer so that the programmer is presented with the abstract computational model regardless of what happens at the machine-code level. Hence the assumption of a bunch of things that were nice in the abstract model, but didn’t actually exist concretely yet, like garbage collection, higher-order functions, etc. (most of which wouldn’t exist in efficient implementations until decades later).
Ironically, Dijkstra complained that the definition of LISP (when its name was still in all caps) was too operational and tied to its implementation details. And he invented predicate transformer semantics, which allows you to ask (and sometimes answer) whether two operationally different, imperative, nondeterministic programs denote the same thing (relate the same precondition to the same postcondition). So it isn’t completely fair to suggest that imperative languages necessarily beget mechanistic reasoning.
I’m not referring to category theory per se, although that sort of category-ish language is part of it, along with the type theory. More than those specific things, though, I’m really referring to the general approach to thinking and communicating: the sort of math-adjacent pseudo-formalism that the Haskell community is really fond of, and the mathematically leaning theoretical CS that people often run across in the papers that still form most of the documentation for some libraries.
For me personally, the language used around Haskell motivated me to study the mathematics, and in turn to realize that a lot of the things in Haskell are named by analogy to those concepts rather than in strict formal adherence to them. The useful bit, I think, came from the way of thinking I started to develop and how I got a better intuition about going between math and code. Still, I don’t think making that leap is necessary to use the language, and the perception that it is turns a lot of people away.
A big part of the problem is that Haskell, or rather, Haskellers make mutually inconsistent promises. For example:
The types can tell you everything that’s going on, especially in plumbing libraries like lens, conduit, you name it.
You don’t need to learn any math to understand Haskell.
You can only pick one, really. You need a certain degree of mathematical sophistication to understand Haskell’s type system, e.g. how parametricity constrains the possible inhabitants of a type, how type class laws make sense in a generic way, and aren’t just the result of someone’s whims. Contrapositively, if Haskell libraries are meant to be used by people with no mathematical inclination, they need to be documented without an implicit assumption that types will tell you everything that’s going on, and equational laws should play a less prominent rôle in defining commonly used abstractions.
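A tiny illustration of the parametricity point: the shape of a polymorphic type alone pins down, up to bottom, what its inhabitants can do (names here are made up):

```haskell
-- Ignoring bottom, the only total inhabitant of
-- `forall a. a -> a` is the identity function: the body can't
-- inspect `x`, so it can only return it.
identityOnly :: a -> a
identityOnly x = x

-- Similarly, `forall a. a -> a -> a` has exactly two total
-- inhabitants: return the first argument, or return the second.
first', second' :: a -> a -> a
first'  x _ = x
second' _ y = y
```

This is the sense in which reading a sufficiently polymorphic type can tell you what a function does; it takes some mathematical sophistication to see why no other implementations are possible.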
I don’t think they’re supposed to apply simultaneously. It’s more like:
Which means that there’s a (well identified) gap as someone learns where their ability to read types is not developed enough to handle “advanced” code. That doesn’t mean they can’t still write it, but it is a steep part of the learning curve and could use some better documentation.
Sadly, this won’t do. Would you use libraries in your own work that contain unlawful instances for standard type classes like Applicative and Monad? Probably not, right? (At least I know I wouldn’t.) Well, that’s the kind of code that someone without mathematical inclination would write. And, for them, this would be fine so long as their programs work.
re category theory
I’m always confused seeing the submissions talking about how important category theory is to the working programmer. I collect cutting-edge work in all major sub-fields of IT, plus quite a bit on type systems, formal verification, PL research, and so on. The latter is stuff I pass on to others with only the most abstract understanding on my part. Almost all the major stuff I have in the latter categories is done in Coq, HOLs, custom logics derived from things like ZF/FOL, and so on. I’ve never seen category theory in any groundbreaking work programmers could actually use. Even most ordinary advances came from just using creativity and reasoning within ordinary programming languages like C, Java, etc. Java is number one in quantity of clever stuff published in it. Many are using Python now for stuff based on ad hoc methods.
I had to go out of my way to find a significant work, a compiler, done with category theory. It wasn’t that good compared to really old stuff done with other methods. I say category theory isn’t important 99+% of the time plus hasn’t brought us much at all. I’d say set theory, propositional logic, FOL, automatic solvers (esp SAT/SMT), and LCF-style provers have collectively gotten us the biggest gains justifying continued investment that continues to pay off (rinse repeat). Haskell is best off understood without any of that stuff if people just want to get stuff done. Even those doing formal proofs should ignore category theory in favor of ASM’s, Isabelle/HOL, etc. As you suggested, it’s likely a social, clique effect keeping it so prominent in discussions despite being so unimportant in the big picture.
I think that there is a mode of thinking that’s more of an analogy to category theory than actually being category theory, and it can be helpful in providing a mental framework for gaining an intuition of certain types of abstractions. Beyond that I agree its usefulness has been overstated.
That said, I think there are a lot of benefits to thinking of API design algebraically, and Haskell has done a lot to facilitate that.
I think the 99% figure might be underestimating the importance of category theory, though maybe not by much. But in my own experience it’s always served more as a framework for brainstorming, and I move on to other techniques to actually get stuff done.
“ and it can be helpful in providing a mental framework for gaining an intuition of certain types of abstractions.”
I won’t argue that. I lack the experience for it. Haskellers sure like it.
“I think the 99% figure might be underestimating the importance of category theory”
Over 10 years and 12,000 papers on CompSci in languages, engineering, formal verification, and so on before I saw category theory mentioned… on Hacker News and then on Lobste.rs. I might be underestimating its importance but it can’t be that high outside the Haskell community.
Edit to add: my papers focused on significant achievements, how they were built/specified, and so on. There could be low-value or incidental work going on in category theory all the time. Just never made enough headway for me to see it in places showing high-impact tech.
I think a lot of things have been discovered under different names and then Category Theory swoops in and takes credit for those accomplishments since it’s a functional generalization of it. That said, most of that generality is pretty hard to put to practice… but it does all hang together and thus form a really pleasing way of seeing these concepts together.
I mean, I’d probably argue that any time you solve a problem by emphasizing functions you’re practicing Category Theory, in that one of its key insights is that focusing on morphisms between similar structures has been a powerful organizing force in abstract algebra.
Without Category Theory we’d be in the same place we are now with less pretty language for talking about it.
I find problematic this attitude where whoever comes up with the general method for solving a large class of problems gets all the credit, and those who solved particular cases before get ignored. Good abstractions don’t arise in a vacuum: they are the result of studying lots of good motivating examples.
Incidentally, this is also why type classes are problematic: they assume that you can come up with the right abstractions (classes) and only then define models (instances) of them.
To be clear, I’m not exactly fond of this tendency either.
But on the other hand, generalization—the method of cutting away the inessential—is hugely vital to the development of theories in mathematics and computer science. I can’t overstate that. It’s not something to shake a stick at.
Category Theory almost gets shortchanged here, since what it eventually generalized was so heavily used in mathematics since Bourbaki that the act of (formal) generalization gave almost no new power; those roads were well-trodden. It just became a lingua franca for those who want to learn it.
All that said, I don’t have a clue why you’re taking aim at typeclasses. As far as I’m concerned they assume no such thing. A user of typeclasses certainly could… but it’s no more inherent in the feature than any other abstraction method.
But on the other hand, generalization—the method of cutting away the inessential—is hugely vital to the development of theories in mathematics and computer science. I can’t overstate that. It’s not something to shake a stick at.
Yes, and category theory hasn’t invented generalization. (Though it probably has refined it.) There is ample historical evidence of both mathematicians and programmers without exposure to category theory, who have come up with nice generalizations.
All that said, I don’t have a clue why you’re taking aim at typeclasses.
Type classes make it unnecessarily difficult to redesign existing abstraction hierarchies in view of newly acquired information. For example, when Haskell made Applicative a superclass of Monad, library authors who had defined Monad but not Applicative instances had to go out of their way to implement those instances. And, even today, the fail method hasn’t been removed from Monad, even though there are no good reasons for it to be there.
ML modules are more flexible in this regard. They allow signatures and structures to be defined and evolved independently, and matched only when the need arises. IMO, this matches more closely the process by which algebraic structures and their models are discovered.
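For reference, the Applicative/Monad change mentioned above left Monad authors writing boilerplate like this (a minimal sketch using a hand-rolled Identity type purely for illustration; since GHC 7.10 the Functor and Applicative instances are mandatory):

```haskell
import Control.Monad (ap)

newtype Identity a = Identity { runIdentity :: a }

-- Both of these instances became obligatory for every Monad
-- author after the Applicative-Monad Proposal, even when the
-- Monad instance alone fully determines them:
instance Functor Identity where
  fmap f (Identity a) = Identity (f a)

instance Applicative Identity where
  pure  = Identity
  (<*>) = ap   -- derived mechanically from the Monad below

instance Monad Identity where
  Identity a >>= f = f a
```

The `(<*>) = ap` trick is the usual migration path: the new superclass instance is recovered from the operations the author had already written.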
Sorry, I don’t mean to say that CT invented generalization but instead that it’s just a nice generalization of some very well known things.
I see what you mean about typeclasses as well and agree.
That’s pretty much my position. Category Theory is a neat-looking formalism that shows up after all kinds of others, even informal methods, do the real work.
Re functions: we got them from the Lambda Calculus as far as I can guess. They were reinvented informally as interchangeable subroutines on punch cards, with argument passing done manually. Then Fortran and LISP showed up. I don’t know if LC came from category theory in any way. If not, then functions are another example of the after-the-fact, pretty-it-up effect you described.
I’d say we got them from math itself well before LC or subroutines :)
LC didn’t come from CT; CT was invented around 15 years later. CT’s fundamental goal was to formalize the intuitive idea mathematicians had long had of certain transformations/arrows/functions being “natural”, and it succeeded in abstracting this principle into a very general framework for talking about algebraic theories.
It’s absolutely just a matter of cleaning up language, but those cleaning-up stages often turn into big wins. In fact, it was a principal tool of the French school of Algebraic Geometry led by Grothendieck (though I know nothing about AG, so I can’t say anything more).
Additionally, now it’s a driving tool in certain schools of type theory so there’s a chance it will play a part in new developments there as they arise.
All that said, I don’t fundamentally disagree with your analysis. Category Theory gets its name as “Abstract Nonsense” largely because its discovery didn’t say anything anyone didn’t already know—it just said it in weird new words that someone else was promising you were really coherent.
Much like educated people used to use Latin, the language of category theory and abstract algebra permeates discussions of haskell and at times feels like it’s being used entirely to keep people out of the loop.
Nice pun with loop. In all seriousness, though… I’ve taken abstract algebra up to a graduate level and a lot of the Haskell terminology was new to me as well. Most of abstract algebra is groups, rings and fields and around specific applications (e.g. number fields, linear algebra, functional analysis). You learn that loops and monoids exist, but you don’t do much with them. Category theory is on the fringe even by pure math standards (I didn’t even encounter it until my late 20s).
Yeah, Haskegorical theory is a thing of its own. I learned category theory too in math school, but the real interesting applications for me were completely different from Haskell’s, like the Galois functor or homological algebra. I seldom see familiar stuff like universal mapping properties or snake lemmas in Haskegorical theory.
That’s because Haskellers only work with the moral analogue of the category of sets, plus at most Kleisli categories of Set-monads, and even then only by wishing bottom (and sometimes laziness altogether) out of existence. I bet the categorical techniques used in homological algebra (which is an actual use case of category theory, for things you couldn’t reasonably do without it) would look just as alien and confusing to the average Haskeller as they would to any other programmer. Probably even an Abelian category would look weird: “I didn’t know the initial and final objects could be the same!”
In a more mathematical setting, I think this comment mirrors similar concerns to this.
I think both the original article and this followup miss the point that type systems aren’t entirely about preventing errors, at least not in the same way that tests are about finding them. Tests should be about checking assumptions: making sure that what you wrote matches up with what you intended to write, and providing a mechanism to check that your assumptions still hold under changes to other parts of the system.
In my opinion the value of types is that they provide an abstraction over the kind of data you can be working with. Sure, the type checker will provide errors when you make a mistake, but a large part of the value is simply in allowing, or forcing, you to think about your data algebraically or categorically. The benefit to code quality is that people make fewer mistakes when they have more power and flexibility in how they express the code. When people complain about the difficulty of working with strongly statically typed languages, it often seems to be because they haven’t learned to exercise that style of thinking to the point where they see its benefits to expressiveness.
I’m interested in this, but I’m not sure I entirely follow. I get:
I think that (3) is a follow-on from (1). Can you elaborate? I’d love to hear more.
I’m actually in the early stages of writing a book about this; I’ll try to distill my thoughts down to something that makes a little more sense here. For a bit of perspective, the languages I’ve worked with most in my career are: C, Haskell, and Ruby. Much of my thinking has evolved out of comparing the general approaches that I’ve used in those three languages, and trying to generalize them to the wider practice of programming.
To start at a very high level, I think of the Curry-Howard Isomorphism, which basically says that a type signature for a program is a formula that is proved by its implementation. In that sense, when we’re working with a strongly typed language, our entire program is a composition of many tiny proofs that we develop with the help of our theorem prover (in our case, the type checker). If we take a category-theoretical approach to program analysis, we can see that each of these proofs can have several different implementations that may or may not be equivalent for our purposes. For example, (*) and (+) both prove (Int -> Int -> Int), but we can’t use them interchangeably in our application.
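To make that concrete: the type admits many distinct proofs, and it is a test, not the type, that pins down which one we meant (a small sketch; `op` and `checkOp` are made-up names):

```haskell
-- Both (*) and (+) inhabit Int -> Int -> Int, so the type
-- signature alone cannot distinguish them.
op :: Int -> Int -> Int
op = (*)

-- A unit check that rules out (+) as the implementation:
-- (+) would give op 0 5 == 5, while (*) gives 0.
checkOp :: Bool
checkOp = op 1 5 == 5 && op 0 5 == 0
```

The type checker proves the formula is inhabited; the check selects the intended inhabitant.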
When we start looking at developing applications this way, it leads us to consider an approach where we start with some set of formulae that we want to implement as programs, which are themselves pure functions. Our tests then are acting much more like function-level acceptance tests than unit tests, and our focus is not on the proof of our formula but rather on demonstrating that we have selected the correct morphism.
Moving back into the “Not about preventing errors” part of this though, taking a category-theoretical approach to software design also means that, beyond the area of proofs for our applications, we have an entirely new toolbox to use when we’re thinking about how we design our applications. I typically use the language of algebra instead of category theory because it’s often sufficient and less intimidating. If we start to look at our application as a type level formula then the process of proving it with our program becomes one of defining an algebraic structure that’s appropriate for our data domain, and then building up the set of morphisms we have over the objects in that structure, and finally using the algebra we’ve defined to prove our formula.
I believe that the benefits of this approach extend beyond making our code less error prone, and improve the overall quality of our code under refactoring, the addition of new features, or changes to requirements. While traditional approaches to unit testing will give you the confidence to make changes without introducing regressions, the type-driven algebraic approach gives you a coherent language to work with, and I think it makes it much easier to write applications that are structured so that they facilitate modularity and refactoring. When you’re building an algebra it’s difficult NOT to build small orthogonal components that can be composed into different programs as your business requirements grow and change.
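As a small sketch of what “building an algebra” can look like in practice (all names here are hypothetical): a monoid of string transformations, where small orthogonal pieces compose into larger pipelines:

```haskell
import Data.Char (toUpper)

-- The carrier of our little algebra: string transformations.
newtype Transform = Transform { runTransform :: String -> String }

-- Composition reads left to right: apply f, then g.
instance Semigroup Transform where
  Transform f <> Transform g = Transform (g . f)

-- The do-nothing transformation is the identity element.
instance Monoid Transform where
  mempty = Transform id

-- Two orthogonal building blocks:
upper, stripSpaces :: Transform
upper       = Transform (map toUpper)
stripSpaces = Transform (filter (/= ' '))

-- Programs are just monoid expressions over the building blocks:
pipeline :: Transform
pipeline = upper <> stripSpaces
```

When requirements change, you add or reorder elements of the algebra rather than rewriting a monolith, which is exactly the modularity claim above.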
Thank you very much!
I can’t claim to fully appreciate your insight into types… I’ve tackled the Curry-Howard Isomorphism and Category Theory many times now, and every time my brain just fuzzes and I only get the most general gist. So I’m still waiting for the breakthrough that will really allow me to apply that whole side of things to programming. Maybe your book will be the thing!
I think the author would benefit from looking at Quorum which was designed with syntax and semantics chosen based on research into what made languages easier or harder for people to learn (see: https://quorumlanguage.com/evidence.html).
There haven’t been many attempts at reproducing the results that were used to drive the design of the language, but it still seems like a more worthwhile starting place than COBOL and English grammar.
I didn’t know about this, chur!