Well done EU! Google fully deserves this.
Personal anecdote: I worked at Samsung on the team working on its own browser. In 2012, Android WebKit was old and problematic. Google pushed Chrome for Android and my team was disbanded “due to pressure from Google”.
You’d think Google would be more confident in their own browser. I hadn’t heard your story before, that is probably the worst one I’ve heard yet. They also do things like accidentally forget to test their websites on other browsers.
I think Google was correct to be afraid. (But I would think so, wouldn’t I?) Historical case of Microsoft and Internet Explorer comes to mind.
I understand that this might be serious, but it seriously reads like a parody of techie gear-obsession. GUIs (including vi) were invented for a reason, and though you don’t need to like them, when I read that “the file system is likewise adequate for organizing ones work if the user works out a reasonable naming convention” I can’t help but think of someone who exclusively uses a typewriter or someone else who uses only paper and fountain pen saying the exact same thing. And of course such people do exist, which makes the entire idea of claiming something extremely high up on the ladder of relative complexity is “adequate if the user is reasonable” rather silly.
Reading through the original message, I found myself wondering if this were real or not as well. It seems like it’s a different form of hipsterism, based in computers instead of something more analogue.
I guess it’s nice if he actually enjoys that flow, but I find it hard to believe it’s more productive than opening a modern text editor.
I mean, there have been other posts about how a lot of big authors still use WordStar. Maybe this was a parody of some kind? It kinda gets into Poe’s Law territory.
Neal Stephenson wrote ‘Cryptonomicon’ and the Baroque Cycle books with a fountain pen for the first drafts and then used Emacs for the revisions and polishing up.
Edit: Neil Gaiman uses fountain pens exclusively for his writing, and has said that using computers actually reduces his productivity.
My favorite quote by Patrick O’Brian was when he was asked what word processor he used:
I use pen and paper, like a Christian.
Also known as McNamara’s fallacy.
https://en.wikipedia.org/wiki/McNamara_fallacy
TBH mathwashing doesn’t seem like a very good name.
The advice I’ve seen the LessWrong people give is to take the time and do all the math you reasonably can, but if despite all the calculations you still feel like it’s telling you to do the wrong thing, just do the right thing anyway. Doing the math is important for shaping that gut feeling, but you shouldn’t ignore the feeling. I hadn’t known there was a name for doing the opposite, though.
Well I’d say it’s more descriptive than “artificial intelligence”, given we usually speak of cybernetics instead.
Do you have an alternative term to propose?
The difference I see here, which may be a bit small, is that I don’t think Microsoft was addressing a legitimate problem with any of the standards they did the EEE thing on. The ones cited for Google mostly are.
Spam is a legitimate problem with email, and it’s hard to solve in a way that both makes it easy for people to run mail servers as a hobby and effectively blocks spam for the majority of users.
There’s also some legitimate problems with email that various vendors have tried to solve by emailing links to webpages with the actual content. I’ve seen this pattern a few times for security. Want to send someone a message that isn’t exposed to every email server on the net without making them generate and manage RSA keypairs securely? It’s gotta be on a webpage.
Ditto with AMP - Google’s use of it seems a bit controlling at times, but it is a problem that many sites are way too heavy, particularly for mobile devices on cellular networks. When all the incentives for publishers are to put in one more ad network and one more tracking script, it’s tough to slim things down without a big hammer to swing.
It does seem to be a disturbing larger trend in tech in the last 5-10 years that the behavior of bad actors and dark patterns is pushing things towards greater centralization. Google, being one of the top dogs in tech now, stands to benefit a lot from this. It makes me wonder if there are any architectural changes to the whole system that could make it more resistant to various forms of badness without all of the centralization.
Not really; Microsoft always had good excuses for their anticompetitive practices, just like Google. Technology just changes quickly enough that many of their “solutions” are themselves considered obsolete now.
It makes me wonder if there are any architectural changes to the whole system that could make it more resistant to various forms of badness without all of the centralization.
At least for social networks, requiring federation would be a way to mitigate the network effects. It would be great if I could see friends’ and family’s Facebook content without me being on Facebook.
Want to send someone a message that isn’t exposed to every email server on the net without making them generate and manage RSA keypairs securely? It’s gotta be on a webpage.
Isn’t it still visible to email servers if you’re using email to send it? It may be an extra level of indirection, but including a hyperlink in an email is not really that clever of a solution.
At least for social networks, requiring federation would be a way to mitigate the network effects. It would be great if I could see friends’ and family’s Facebook content without me being on Facebook
Could be, too bad none of the federated social networks seem to have gotten much traction.
Isn’t it still visible to email servers if you’re using email to send it? It may be an extra level of indirection, but including a hyperlink in an email is not really that clever of a solution.
Suppose so, though fewer. It isn’t a perfect solution, but personal management of keypairs seems to have fallen flat. What cleverness there is is that the website can manage accounts and only show the contents to the registered and authenticated user over a TLS-protected connection.
The difference I see here, which may be a bit small, is that I don’t think Microsoft was addressing a legitimate problem with any of the standard they did the EEE thing on.
EEE doesn’t work if you don’t have desired features to encourage uptake of your extended version.
Sorry, but AMP didn’t solve anything that other tools at Google couldn’t solve either. They have tools available right now that score your webpage for loading speed, performance, and mobile friendliness. There is no reason those couldn’t have been used instead of AMP.
While I do agree that others have “solved” problems with email in the same manner, I’m not aware of any particular solution that caught on. Plus, Gmail has a significant market share.
As an American, I was really confused by the date of this article. I kept thinking to myself, “Wow, this post is from January and it just now made it to lobste.rs?” Then I clicked on the News homepage to see what other news they had, and promptly realized they’re using the European format (01.05.2018) on the article, but a less ambiguous format (May 01, 2018) for the News homepage.
It’s not the “European” format. It’s the international format. The US, of course, needs to be a snowflake.
YYYY-MM-DD is the one true international date format! :-)
DMY is definitely more widespread than MDY, I’ll agree, but it isn’t used in most of East Asia, nor in the US. People in countries that use neither of those formats often find it ambiguous whether a year-last date was intended as a “European-style” or “American-style” date (which, in my limited experience, is what Japanese and Chinese speakers call those two formats), since both styles are foreign. You can even find examples of all three styles on Chinese universities’ English-language pages…
Going by user population size, by international standards, and by rationality (sort lexicographically!), YYYY-MM-DD is probably the only format that deserves to be called international. It’s also much less ambiguous than month-first and date-first, given that the US and Europe do the opposite thing but write it the same way. I suppose someone could write YYYY-DD-MM but I don’t remember having seen this, while I definitely am confused about whether someone is writing in the European/US style from time to time.
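The “sort lexicographically” point is easy to demonstrate. A minimal sketch in Python (the date values here are arbitrary examples):

```python
from datetime import date

# ISO 8601 dates sort correctly as plain strings, because the most
# significant field (the year) comes first and every field is
# zero-padded to a fixed width.
iso_dates = ["2018-05-01", "2017-12-31", "2018-01-05", "2016-07-04"]

# A plain string sort...
by_string = sorted(iso_dates)

# ...matches a true chronological sort.
by_date = sorted(iso_dates, key=date.fromisoformat)

assert by_string == by_date
print(by_string)  # ['2016-07-04', '2017-12-31', '2018-01-05', '2018-05-01']
```

MM/DD/YYYY and DD/MM/YYYY both lack this property, since their most significant field comes last.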
This is as an American, born and raised. :) I still prefer to write MM/DD, though, because we speak dates that way. Maybe it’s different in other languages.
EDIT: Actually, according to Wikipedia, DMY is used by the most people! https://en.m.wikipedia.org/wiki/Date_format_by_country
Other than ISO 8601, I prefer DMY with the month written as a three-letter abbreviation. ex: 01 May 2018. It prevents the confusion over whether 01 is the first day of the month or the first month of the year, and reads in the order one typically cares about while preserving the rank order of the components. When I need a checksum I put the day of the week in front: Tue 01 May 2018. That lets me be confident I didn’t make a transcription error and lets the person I’m communicating with check my work if they need to.
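That format is easy to produce programmatically; a quick sketch using Python’s strftime, purely as an illustration (not anything the parent comment is assumed to use):

```python
from datetime import date

d = date(2018, 5, 1)

# "DD Mon YYYY": the three-letter month makes day vs. month unambiguous,
# and the components stay in order of increasing significance.
print(d.strftime("%d %b %Y"))     # 01 May 2018

# Prepend the weekday as a "checksum": if the stated weekday doesn't
# match the date, a transcription error slipped in somewhere.
print(d.strftime("%a %d %b %Y"))  # Tue 01 May 2018
```

(Note that `%a` and `%b` follow the current locale; the output shown assumes the default English locale.)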
Good point, I definitely think the day of the week as checksum is underused. I always try to include it in scheduling emails in case I mistype a number.
MDY and DMY are equally unambiguous when the month is written as an abbreviation, but a numeric month papers over language differences: It doesn’t matter if you call it “Aug” or “八月”, it’s 8.
(That requires everyone to standardize on the Hindu-Arabic numerals, but, in practice, that seems like it’s happened, even in places which don’t use the Latin alphabet.)
In Hungary, though we are in Europe, we don’t use the “European format”. The Hungarian standard format is “YYYY. MM. DD.”. I prefer the ISO format for anything international, as it is easy to recognize from the dashes, and it avoids confusion. (In my heart I know that our format is the one true format, but I’m happy that ISO has also recognized it! 😉)
Edit: To me the D M Y format can be justified, though Y M D seems more logical to me. (Specifying a time from the specific to the generic, or from the generic to the specific, can both be OK.) What I cannot grasp is how the M D Y format appeared.
What I cannot grasp is how the M D Y format appeared.
The tentative progression I pieced together last time I looked into it, though note that this is definitely not scientific grade historical research, is something like this:
When talking about a date without the year, English has for centuries used both “May 1st” and “1st May” (or “1st of May”), unlike some languages where one or the other order strongly predominates. Nowadays there’s a strong UK/US split on that one, but in 18th-19th century England they were both common;
it seems to have been common for authors to form a fully qualified date by just tacking on the year to however they normally wrote the month/day, so some wrote “May 5th, 1855” and others “5th May, 1855”;
fairly early on, the “May 5th” and “May 5th, 1755” forms seem to have become dominant in the US for whatever reason; and finally
much later, when writing dates in fully numerical format became a thing, Americans kept the same MDY order that they had gotten used to for the written-out dates.
In my mind, if it’s not the American standard it must be the European standard, even if it encompasses more than Europe. I understand that’s probably not the best way to think of things.
As an Australian, I get pretty annoyed every time I read a US article and have to deal with the mental switch. Even worse because I work for a US company and people throw around “we’re doing this 6/5”, and that doesn’t even look like a date to my eyes — we never just do D/M, so “number/number” looks like a fraction. Once I work out it’s a date, I realise it’s an American thing and that it must be M/D.
In OCaml, you can even match exceptions, replacing try-catch:
match do_something () with
| Red(...) -> ...
| Black(...) -> ...
| exception Some_exn -> ...
I think in the case of OCaml the reason if and let are still around is just because they save characters, and they’re also more “precise” as to intention. Same reason we might have redundant forms in English.
Wow, this is extremely helpful!
This makes me think whether parts of the traditional try ... catch ... finally could also be subsumed by this (in Scala catch is already more or less a pattern match).
Thanks for bringing this up!
While I agree that the article is probably right, the biggest problem with Electron, and with a lot of modern software development, is that “developer happiness” and “developer efficiency” are both arguments for Electron, but “user happiness” and “user efficiency” aren’t.
Electron developers are incentivized to develop applications that make users happy in the small: they want something that looks nice, has lots of features, and is engaging. The problem is that in their myopic pursuit of this one-and-only goal, too many apps (and Electron is a vanguard of this trend, but far from the only culpable technology) forget that a user wants to do things other than constantly interact with that one single application for their entire computing existence.
That’s where Electron as a model breaks down. Electron apps are performant enough, and don’t use too much memory, when they are used by themselves on a desktop or a powerful docked laptop, but I shouldn’t have to kill Slack and Zoom every time I unplug my laptop from a power source because I know they’ll cut my battery life in half. I shouldn’t have to ration which Slack teams I join lest I find other important processes swapping or getting OOM-killed.
Even without those concerns, Electron apps selfishly break the consistency of visual design and metaphors used in a desktop experience, calling attention to themselves with unidiomatic designs.
We do need easier and better ways of developing cross-platform desktop applications. Qt seems to be the furthest along in this regard, but for reasons not entirely clear to me it’s never seemed to enter the wider developer consciousness - perhaps because of the licensing model, or perhaps because far fewer people talk about it than actually use it and so it’s never been the “new hotness”.
The author specifically calls out what the problem with Qt is.
Native cross-platform solutions like Qt tend to consider themselves more a library, less a platform, and have little to offer when it comes to creating auto-updating software, installers, and App Store packages.
Don’t be so dismissive of people’s choices with the ‘new hotness’ criticism.
I think you misunderstand what I’m saying. My claim isn’t that Qt would solve every problem that people are looking to Electron to solve if only it were more popular. My claim is merely that of the cross-platform native toolkits, Qt seems to be both the furthest along in terms of capability and also one of the less recognized tools in that space (compared to Wx, GTK, Mono, Unity; heck, I’ve seen more about Tk and FLTK than Qt lately). I suspect that Qt could grow and support more of what people want if it got more attention, but for whatever reason, of the cross-platform native toolkits it seems to be less discussed.
Just to be clear, this is the workflow I have currently if I’m targeting Electron. Can you show me something comparable with Qt?
This is an overly simplistic argument that misses the point. Desktop app development has not changed significantly in the past five years, and without Electron we would simply not have many of the Electron-powered cross-platform apps that are popular and used by many today. You can’t talk about “not optimizing for user happiness” when the alternative is these apps just not existing.
I don’t like the Slack app, it’s bloated and slow. I wouldn’t call myself a JavaScript developer, and I think a lot of stuff in that world is too ruled by fashion. But this posturing and whining by people who are “too cool for Electron” is just downright silly.
Make a better alternative. It’s not like making an Electron app is morally worse than making a desktop app. When you say “we need to make desktop app development better” you can’t impose an obligation on anyone but yourself.
without Electron we would simply not have many of the Electron-powered cross-platform apps that are popular and used by many today.
I don’t really remember having a problem finding desktop applications before Electron. There seems to be relatively little evidence for this statement.
Please do not straw man. If you read what you quoted, you will see I did not say no desktop apps existed before Electron. That’s absurd. You also conveniently ignored the part of my sentence where I say “cross-platform”.
Obviously we can’t turn back the clock and rewrite history, so what evidence would suffice for you? Maybe it would be the developers of cross-platform apps like Slack, Atom, and VS Code writing about how Electron was a boon for them. Or it could be the fact that the primary cross-platform text editors we had before Electron were Vim and Emacs. Be reasonable (and more importantly, civil.)
I think Vim and Emacs, traditional tools of UNIX folks, propped up as examples of what Slack or VS Code replaced is also a fallacy you’re using to justify a need for Electron. Maybe better comparisons would be XChat/HexChat/Pidgin, UltraEdit or SlickEdit for editors, and NetBeans or IntelliJ IDEA for IDEs. So, did those products suck compared to Electron apps for reasons due to the cross-platform technology used, vs other factors? Or do they suck at all?
Nah, if anything, they show these other projects could’ve been built without Electron. Whether they should have or not depends on developers’ skills, constraints, preferences, etc., on top of markets. Maybe Electron brings justifiable advantages there. Electron isn’t making more sophisticated apps than cross-platform native that I’ve seen, though.
I think you and the other poster are not making it very clear what your criterion for evidence is. You’ve set up a non-falsifiable claim that simply depends on too many counterfactuals.
In the timeline we live in, there exist many successful apps written in Electron. I don’t like many of them, as I’ve stated. I certainly would prefer native apps in many cases.
All we need to do is consider the fact that these apps are written in Electron and that their authors have explicitly stated that they chose Electron over desktop app frameworks. If you also believe that these apps are at all useful then this implies that Electron has made it easier for developers to make useful cross-platform apps. I’m really not sure why we are debating about whether a implies b and b implies c means a implies c.
You point out the examples of IntelliJ and XChat. I think these are great applications. But you are arguing against a point no one is making.
“Electron is just fashion, Slack and VS Code aren’t really useful to me so there aren’t any useful Electron apps” is not a productive belief and not a reasonable one. I don’t like Slack and I don’t particularly like VS Code. But denying that they are evidence that Electron is letting developers create cross-platform apps that might not have existed otherwise and that are useful to many people requires a lot of mental gymnastics.
“You point out the examples of IntelliJ and XChat. I think these are great applications. But you are arguing against a point no one is making.”
You argued something about Electron vs cross-platform native by giving examples of modern, widely-used apps in Electron but ancient or simplistic ones for native. I thought that set up cross-platform native to fail. So, I brought up the kind of modern, widely-used native apps you should’ve compared to. The comparison then appeared to be meaningless given Electron conveyed no obvious benefits over those cross-platform, native apps. One of the native apps even supported more platforms far as I know.
“All we need to do is consider the fact that these apps are written in Electron and that their authors have explicitly stated that they chose Electron over desktop app frameworks. If you also believe that these apps are at all useful then this implies that Electron has made it easier for developers to make useful cross-platform apps. “
It actually doesn’t, unless you similarly believe we should be writing business apps in COBOL on mainframes or Visual Basic 6, or keeping the logic in Excel spreadsheets, because those developers or analysts were doing it and saying it was the easiest, most effective option. I doubt you’ve been pushing those to replace business applications in (favorite language here). You see, I believe that people using Electron to build these apps means it can be done. I also think something grounded in web tech would be easier to pick up for people from a web background with no training in other programming like cross-platform native. There’s that much evidence behind it as a general principle and for Electron specifically. The logic chain ends right here, though:
“then this implies that Electron has made it easier for developers to make useful cross-platform apps.”
It does not imply that in the general case. What it implies is that the group believed it was true. That’s it. All the fads in IT that the industry regretted later tell me that what people believe is good and what objectively is good are two different things, with sadly little overlap. I’d have to assess things like what their background was, whether they were biased for or against certain languages, whether they were following writers who told them to use Electron or avoid cross-platform native, whether they personally or via the business were given constraints that excluded better solutions, and so on. For example, conversations I’ve had and watched with people using Electron have shown me most of them didn’t actually know much about the cross-platform native solutions. The information about what would be easy or difficult had not even reached them. So, it would’ve been impossible for them to objectively assess whether those were better or worse than Electron. It was simply based on what was familiar, which is an objective strength, to that set of developers. Another set of developers might not have found it familiar, though.
So, Electron is objectively good for people who already know web development and are looking for a solution with good tooling for cross-platform apps to use right now, without learning anything else in programming. That’s a much narrower claim than it being better or easier in general for cross-platform development, though. We need more data. Personally, I’d like to see experiments conducted with people using Electron vs specific cross-platform native tooling to see what’s more productive, with what weaknesses. Then, address the weaknesses for each if possible. Since Electron is already popular, I’m also strongly in favor of people with the right skills digging into it to make it more efficient, secure, etc. by default. That will definitely benefit lots of users of the Electron apps that developers will keep cranking out.
Hey, I appreciate you trying to have a civilized discussion here and in your other comments, but at this point I think we are just talking past each other. I still don’t see how you can disagree with the simple logical inference I made in my previous comment, and despite spending some effort I don’t see how it at all ties into your hypothetical about COBOL. It’s not even a hypothetical or a morality or efficacy argument, just transitivity, so I’m at a loss as to how to continue.
At this point I am agreeing with everything you are saying except on those things I’ve already said, and I’m not even sure if you disagree with me on those areas, as you seem to think you do. I’m sorry I couldn’t convince you on those specifics, which I think are very important (and on which other commenters have strongly disagreed with me), but I’ve already spent more time than I’d have preferred to defending a technology I don’t even like.
On the other hand, I honestly didn’t mind reading your comments, they definitely brought up some worthwhile and interesting points. Hope you have a good weekend.
Yeah, we probably should tie this one up. I thank you for noticing the effort I put into being civil about it and asking others to do the same in other comments. Like in other threads, I am collecting all the points in Electron’s favor along with the negatives in case I spot anyone wanting to work on improvements to anything we’re discussing. I got to learn some new stuff.
And I wish you a good weekend, too, Sir. :)
Please do not straw man. If you read what you quoted, you will see I did not say no desktop apps existed before Electron
And if you read what I said, I did not claim that you believed there were no desktop apps before Electron. If you’re going to complain about straw men, please do not engage in them yourself.
My claim was that there was no shortage of native applications, regardless of the existence of electron. This includes cross platform ones like xchat, abiword, most KDE programs, and many, many others. They didn’t always feel entirely native on all platforms, but the one thing that Electron seems to have done in order to make cross platform easy is giving up on fitting in with all the quirks of the native platform anyways – so, that’s a moot point.
Your claim, I suppose, /is/ tautologically true – without Electron, there would be no cross-platform Electron-based apps. However, when we roll the clock back to before Electron existed and look at history, there were plenty of people writing native apps for many platforms. Electron, historically, was not necessary for that.
It does let web developers develop web applications that launch like native apps, and access the file system outside of the browser, without learning new skills. For quickly getting a program out the door, that’s a benefit.
No one is saying there was a “shortage” of desktop applications; I’m not sure how one could even ascribe that belief to someone else without thinking they were completely off their rocker. No one is even claiming that without Electron none of these apps would exist (read my comment carefully). My claim is also not the weird tautology you propose, and again I’m not sure why you would ascribe it to someone else if you didn’t think they were insane or dumb. This is a tactic even worse than straw manning, so I’m really not sure you why you are so eager to double down on this.
Maybe abstracting this will help you understand. Suppose we live in a world where method A doesn’t exist. One day method A does exist, and although it has lots of problems, some people use method A to achieve things B that are useful to other people, and they publicly state that they deliberately chose method A over older methods.
Now. Assuming other people are rational and that they are not lying [1], we can conclude that method A helped people achieve things B in the sense that it would have been more difficult had method A not existed. Otherwise these people are not being rational, for they chose a more difficult method for no reason, or they are lying, and they chose method A for some secret reason.
This much is simple logic. I really am not interested in discussing this if you are going to argue about that, because seriously I already suspect you are being argumentative and posturing for no rational reason.
So, if method A made it easier for these people to achieve things B, then, all else equal, given that people can perform a finite amount of work, again assuming they are rational, we can conclude that unless the difference in effort really was below the threshold where it would cause any group of people to have decided to do something else [2], if method A had not existed, then some of the things B would not exist.
This is again seriously simple logic.
I get it that it’s cool to say that modern web development is bloated. For the tenth time, I agree that Electron apps are bloated. As I’ve stated, I don’t even like Slack, although it’s ridiculous that I have to say that. But don’t try to pass off posturing as actual argument.
[1]: If you don’t want to assume that at least some of the people who made popular Electron apps are acting intelligently in their own best interests, you really need to take a long hard look at yourself. I enjoy making fun of fashion-driven development too, but to take it to such an extreme would be frankly disturbing.
[2]: If you think the delta is really so small, then why did the people who created these Electron apps not do so before Electron existed? Perhaps the world changed significantly in the meantime, and there was no need for these applications before, and some need coincidentally arrived precisely at the same time as Electron. If you had made this argument, I would be a lot more happy to discuss this. But you didn’t, and frankly, this is too coincidental to be a convincing explanation.
then why did the people who created these Electron apps not do so before Electron existed?
…wut.
Apps with equivalent functionality did exist. The “Electron-equivalent” apps were a dime a dozen, but built on different technologies. People creating these kinds of applications clearly did exist. Electron apps did not exist before Electron, for what I hope are obvious reasons.
And, if you’re trying to ask why web developers who were familiar with a web toolkit running inside a browser, and unfamiliar with desktop toolkits didn’t start writing things that looked like desktop applications until they could write them inside a web browser… It’s easier to do something when you don’t have to learn new things.
There is one other thing that Electron did that makes it easier to develop cross platform apps, though. It dropped the idea of adhering fully to native look and feel. Subtle things like, for example, the way that inspector panels on OSX follow your selection, while properties dialogs on Windows do not – getting all that right takes effort.
At this point, I don’t really see a point in continuing, since you seem to consistently be misunderstanding and/or misinterpreting everything that’s been said in this entire thread, in replies to both me and others. I’m not particularly interested in talking to someone who is more interested in accusing me of posturing than in discussing.
Thank you for your time.
I am perplexed how you claim to be the misunderstood one when I have literally been clarifying and re-clarifying my original comment only to see you shift the goalposts closer and closer to what I’ve been saying all along. Did you even read my last comment? Your entire comment is literally elaborating on one of my points, and your disagreement is literally what I spent my entire comment discussing.
I’m glad you thanked me for my time, because then at least one of us gained something from this conversation. I honestly don’t know what your motives could be.
I find it strange that you somehow read
I don’t really remember having a problem finding desktop applications before Electron
as implying that you’d said
no desktop apps existed before Electron
@orib was simply saying that there was no shortage of desktop apps before Electron. That’s much different.
…That’s absurd… Obviously we can’t turn back the clock and rewrite history… …Be reasonable (and more importantly, civil.)
You should take your own advice. @orib’s comment read as completely anodyne to me.
I find it strange that you’re leaving out parts of my comment, again. Not sure why you had to derail this thread.
Please, please stop continuing to derail this conversation. I am now replying to your contentless post which itself was a continuation of your other contentless post which was a reply to my reply to orib’s post, which at least had some claims that could be true and could be argued against.
I’m not sure what your intentions are here, but it’s very clear to me now that you’re not arguing from a position of good faith. I regret having engaged with you and having thus lowered the level of discourse.
Please, please stop continuing to derail this conversation… I regret having engaged with you and having thus lowered the level of discourse.
Yeah, I wouldn’t want to derail this very important conversation in which @jyc saves the Electron ecosystem with his next-level discourse.
My intention was to call you out for being rude and uncivil and the words you’ve written since then only bolster my case.
What is even your motive? Your latest comment really shows you think this whole thing is some sort of sophistic parlor game. I have spent too much time trying to point out that there may even exist some contribution from a technology I don’t even like. I honestly hope you find something better to do with your time than start bad faith arguments with internet strangers for fun.
I’m not sure that the existence of these apps is necessarily better than the alternative. For a technical audience, sure. I can choose to, grudgingly, use some bloated application that I know is going to affect my performance, and I’m technical enough to know the tradeoffs and how to mitigate the costs (close all Electron apps when I’m running on battery, or doing something that will benefit from more available memory). The problem is that for a non-technical audience that doesn’t understand these costs, or how to manage their resources, the net result is a degraded computing experience, and it affects the entire computing ecosystem. Resource-hog applications are essentially replaying the tragedy of the commons on every single device they run on, and even as the year-over-year gains in performance are slowing, the underlying problem seems to be getting worse.
And when I say “we” should do better, I’m acknowledging that the onus to fix this mess is going to be in large part on those of us who have started to realize there’s a problem. I’m not sure we’ll succeed as javascript continues to eat the world, but I’ll take at least partial ownership over the lack of any viable contenders from the native application world.
I’m not sure that the existence of these apps is necessarily better than the alternative.
I think this and your references to a “tragedy of the commons” and degrading computing experiences are overblowing the situation a bit. You may not like Slack or VS Code or any Electron app at all, but clearly many non-technical and technical people do like these apps and find them very useful.
I agree 100% that developers should be more cautious about using users’ resources. But statements like the above seem to me to be much more like posturing than productive criticism.
Electron apps are making people’s lives strictly worse by using up their RAM—seriously? I don’t like Electron hogging my RAM any more than you do, but to argue that it has actually made people’s lives worse than if it didn’t exist is overdramatic. (If you have separate concerns about always-on chat apps, I probably share many of them, but that’s a separate discussion.)
but clearly many non-technical and technical people do like these apps and find them very useful.
If you heard the number of CS folks I’ve heard complain about Slack clients destroying their productivity on their computers by lagging and breaking things, you’d probably view this differently.
If you also heard the number of CS folks I’ve heard suggest you buy a better laptop and throwing you a metaphorical nickel after you complain about Slack, you’d probably view it as futile to complain about sluggish Web apps again.
Dude, seriously, the posturing is not cool or funny at this point. I myself complain about Slack being bloated, and IIRC I even complained about this in my other post. In every group I’ve been in that has used Slack, I’ve also heard complaints about it from both technical and non-technical people.
I’ll leave it as an exercise for you to consider how this is not at all a contradiction with what you quoted. My God, the only thing I am more annoyed by at this point than Electron hipsterism is the anti-Electron hipsterism.
Not posturing–this is a legitimate problem.
Dismissing the very real pain points of people using software that they’re forced into using because Slack is killing alternatives is really obnoxious.
People aren’t complaining just to be hipsters.
Dude, at this point I suspect you and others in this thread are trying to imagine me as some personification of Electron/Slack so that you can vent all your unrelated complaints about them to me. For the last time, I don’t even like Electron and Slack that much. What is obnoxious is the fact that you are just ignoring the content of my comments and using them as a springboard for your complaints about Slack which I literally share.
You seriously call this account @friendlysock?
Your latest comment doesn’t add anything at all. Many users, perhaps even a majority of users, find Slack and other Electron software useful. I don’t and you don’t. I don’t like Slack’s business practices and you don’t either. Seriously, read the damn text of my comment and think about how you are barking up the entirely wrong tree.
“and without Electron we would simply not have many of the Electron-powered cross-platform apps that are popular and used by many today.”
What that’s actually saying is that people who envision and build cross-platform apps for their own satisfaction, fame, or fortune would stop doing that if Electron didn’t exist. I think the evidence we have is that they’d build one or more of: a non-portable app (maybe your claim), a natively built cross-platform app, or a web app. That’s what most were doing before Electron when they had the motivations above. Usually web, too, instead of non-portable.
We didn’t need Electron for these apps. Quite a few would even be portable, either immediately or later with more use/funds. The developers just wanted to use it for whatever reasons, which might vary considerably among them. Clearly, it’s something many from a web background find approachable, though. That, plus development-time savings, is my theory.
I agree that many people might have ended up building desktop apps instead that could have been made even better over time. I also agree with your theory about why writing Electron apps is popular. Finally, I agree that Electron is not “needed”.
I’m going to preemptively request that we keep “hur dur, JavaScript developers, rational?” comments out of this—let’s be adults: assuming the developers of these apps are rational, clearly they thought Electron was the best choice for them. Anyone “sufficiently motivated” would be willing to write apps in assembler; that doesn’t mean we should be lamenting the existence of bloated compilers.
Is saying developers should think about writing apps to use less resources productive? Yes. Is saying Electron tends to create bloated apps productive? Definitely. Is saying Electron makes the world a strictly worse place productive or even rational? Not at all.
“I’m going to preemptively request that we keep “hur dur, JavaScript developers, rational?” comments out of this—let’s be adults”
Maybe that was meant for a different commenter. I haven’t done any JS bashing in this thread that I’m aware of. I even said Electron is good for them due to familiarity.
“ Is saying Electron makes the world a strictly worse place productive or even rational? Not at all.”
Maybe that claim was also meant for a different commenter. I’d not argue it at all since those using Electron built some good software with it.
I’ve strictly countered false positives in favor of Electron in this thread rather than saying it’s all bad. Others are countering false negatives about it. Filtering the wheat from the chaff gets us down to the real arguments for or against it. I identified one, familiarity, in another comment. Two others brought up some tooling benefits such as easier support for a web UI and performance profiling. These are things one can make an objective comparison with.
forget that a user wants to do things other than constantly interact with that one single application for their entire computing existence.
Quoted for truth.
Always assume that your software is sitting between your user and what they actually want to do. Write interactions accordingly.
We don’t pay for software because we like doing the things it does, we pay so we don’t have to keep doing those things.
perhaps because of the licensing model
I also think so. It’s fine for open source applications, but the licensing situation for proprietary applications is tricky. Everyone who says you can use Qt under LGPL and just have to dynamically link to Qt, also says “but I’m not a lawyer so please consult one”. As a solo developer working on building something that may or may not sell at some point, it’s not an ideal situation to be in.
I think the big caveat to this is that for a great many of the applications I see that have electron-based desktop apps, they are frontends for SAAS applications. They could make money off a GPL application just as easily as a proprietary one, especially since a lot of these services publish most of the APIs anyway.
Granted, I’d love to see a world where software moved away from unnecessary rent-seeking and back to actually selling deliverable applications, but as long as we’re in a SAAS-first world the decision to release a decent GPL-ed frontend doesn’t seem like it should be that hard.
The situation is more nuanced than that. Because Electron provides developers with a better workflow and a lower barrier to entry, it results in applications and features that simply wouldn’t exist otherwise. The apps built with Electron might not be as nice as native ones, but they often solve real problems, as indicated by the vast number of people using them. This is especially important if you’re running Linux, where apps like Slack likely wouldn’t even exist in the first place, and you’d be stuck trying to run them via Wine and hoping for the best.
While Qt is probably one of the better alternatives, it breaks down if you need to have a web UI. I’d also argue that the workflow you get with Electron is far superior.
I really don’t see any viable alternatives to Electron at the moment, and it’s likely here to stay for the foreseeable future. It would be far more productive to focus on how Electron could be improved in terms of performance and resource usage than to keep complaining about it.
I never claimed that it doesn’t make life easier for some developers, or even that every electron app would have been written with some other cross-platform toolkit. Clearly for anyone who uses Javascript as their primary (or, in many cases, only) language, and works with web technology day in and day out, something like electron is going to be the nearest to hand and the fastest thing for them to get started with.
The problem I see is that what’s near to hand for developers, and good for the individual applications, ends up polluting the ecosystem by proliferating grossly, irresponsibly inefficient applications. The problem of inefficiency, and the subsequent negative effect it has on the entire computing ecosystem, is compounded by the fact that most users aren’t savvy enough to understand the implications of the developers’ technology choices, or even capable of looking at the impact that a given application is having on their system. Additionally, software as an industry is woefully prone to adopting local-maxima solutions: even if something better did come along, we’re starting to hit an inflection point of critical mass where Electron will continue to gain popularity. Competitors might stand a chance if developers seemed to value efficiency and respect the resources of their users’ devices, but if they did, we wouldn’t be in this situation in the first place.
Saying that developers use Electron simply because they don’t value efficiency is absurd. Developers only have so much time in a day. Maintaining the kinds of applications built with Electron using alternatives is simply beyond the resources available to most development teams.
Again, as I already pointed out, the way to address the problem is to look for ways to improve Electron as opposed to complaining that it exists in the first place. If the Electron runtime improves, all the applications built on top of it automatically get better. It’s really easy to complain that something is bloated and inefficient; it’s a lot harder to do something productive about it.
but I shouldn’t have to be killing slack and zoom every time I unplug my laptop
No, you shouldn’t. But that is not Electron’s fault.
I’ve worked on pgManage, and even though it is based on Electron for the front end, we managed to get it to work just fine and use very little CPU/memory*. Granted, that’s not a chat application, but I also run Riot.im all day every day and it shows 0% CPU and 114M of memory (about twice as much as pgManage).
Slack is the worst offender that I know of, but it’s because the people who developed it were obviously used to “memory safe” programming. We had memory issues in the beginning, with the GC not knowing what to do when we were doing perfectly reasonable things. But we put the effort in and made it better.
We have a strong background in fast C programs, and we applied that knowledge to the JS portion of pgManage and cut down the idle memory usage to 58M. For this reason, I’m convinced that C must never die.
* https://github.com/pgManage/pgManage/blob/master/Facts_About_Electron_Performance.md (Note: the version numbers referred to in this article are for Postage, which was later re-branded pgManage)
*Edit for spelling*
“But that is not Electron’s fault.”
It happens by default with a lot of Electron apps. It doesn’t happen so much with native ones. That might mean it’s a side effect of Electron’s design. Of course, I’d like to see more data on different use cases, in case it happens for some things but not others. In your case, did you have to work really hard at keeping the memory down?
Edit: The Github link has some good info. Thanks.
It happens by default with a lot of Electron apps.
I see where you’re coming from, and you’re right, but if more JS devs had C experience (or experience with any other non-memory-managed language), we would all be better for it. The GC spoils you, and it doesn’t always work.
It doesn’t happen so much with native ones.
Yes, but I think that greatly depends on the language, and how good the GC is.
That might mean it’s a side effect of Electron’s design.
Maybe, but if pgManage can do it (a small project with 5 people working on it), then I see absolutely no reason why Slack would have any difficulty doing it.
In your case, did you have to really work hard at keeping the memory down?
Yes and no. Yes it took time (a few days at most), but no because Electron, and Chrome, have great profiling tools and we were able to find most issues fairly quickly (think Valgrind). IIRC the biggest problem we had at the time was that event listeners weren’t being removed before an element was destroyed (or something like that).
One thing I’ll note: look at the IPC (instructions-per-cycle) ratio of Electron apps versus other native apps. You’ll notice a lot of TLB misses and other such problems, meaning that the Electron apps are mostly sitting there forcing the CPU to behave in ways it really isn’t good at optimizing.
In the end, the Electron apps just end up using a lot of power spinning the CPU around compared to the rest. This is technically also true of web browsers.
You can use perf on Linux, or tiptop, to read the CPU counters (for general IPC eyeballing I’d use tiptop): http://tiptop.gforge.inria.fr
Most functional languages fit this category. Clojure is a good example of a language that provides great tools to avoid naming transient variables with its various threading macros.
It’s not really a functional language, but fwiw this is also common in modern R code via magrittr pipelines.
If you want recursion you’ll have to name functions, unless you want to write out the combinator from first principles every time. (Naming the combinator fix would be cheating!)
That’s a good point…I hadn’t thought of that, and I use method chaining all the time! Do you find in Clojure that it is difficult to debug the intermediate states of a “thread” (not sure what term is appropriate here) such that intermediate variables would be more convenient? (I guess I’m thinking from a perspective of JavaScript’s method chaining, which seems similar…and it is somewhat frequent that I need to log intermediate values in a .map.reduce.filter chain.)
In the standard library, you can debug -> by inserting (doto prn) into the chain, and ->> by inserting (#(doto % prn)), although I usually use taoensso.timbre/spy for this. A simplistic solution that works for both -> and ->> is (defn debug [x] (prn x) x).
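Since the question comes from a JavaScript method-chaining perspective: a common trick there is a small `tap` helper (a hypothetical name here, not a built-in or library function) that logs a value and returns it unchanged, so it can be spliced anywhere into a chain without breaking it, much like the Clojure `debug` above:

```javascript
// Hypothetical `tap` helper: print a labeled value, then pass it
// through unchanged so the surrounding chain keeps working.
const tap = (label) => (x) => {
  console.log(label, x);
  return x;
};

const result = [1, 2, 3, 4]
  .map((n) => n * 2)            // [2, 4, 6, 8]
  .map(tap("after doubling"))   // logs each element; array unchanged
  .filter((n) => n > 4)         // [6, 8]
  .reduce((a, b) => a + b, 0);  // 14
```

The same idea works per element (as here) or on a whole intermediate collection, depending on where you splice it in.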
Really wish this had example programs on the page or just something I could see without watching a video.
Looks like a design bug for @andrewrk (or someone) to fix :-)
Nothing’s wrong with it. All I meant to say was, if someone couldn’t find examples maybe it could be made clearer where to find them. By “design” I didn’t mean just the CSS.
I’ve had a little adventure with my Fedora Atomic Workstation this morning and almost missed a meeting because I couldn’t get to a desktop session.
This sort of stressful random breakage when waking from suspend or unplugging a monitor was enough to give me a sort of low level background anxiety while using Linux that I didn’t notice until I forced myself to try macOS for a while in earnest.
Until of course Sierra started making my GPU randomly crash.
[Comment removed by author]
one of the worst companies for the Internet in 2018
Definitely not an overreaction at all when we consider what other companies do as SOP. The point is that there has been pushback against Mozilla every time one of these exceptional events has occurred.
I honestly don’t understand why the “use my custom WebKit-based browser” people can’t be just a little more rational.
I think it’s part and parcel of the techie personality. Companies are either Good or Evil. Because Mozilla has made decisions they disagree with, and which violate some sort of open source Purity of Essence, they are now Evil and should be shunned.
I’d argue that this is exactly the wrong behavior to enact and that instead we should be using Firefox and guiding Mozilla in the right direction. Firefox is pretty much the only game in town that’s an open source alternative AND has significant market share. That means something, unless we want to just abandon ship and let Google and Microsoft win utterly.
[Comment removed by author]
Mozilla is just nothing.
Wow, who at Mozilla hurt you? Definitely Mozilla has problems, which is why the rest of us criticize them rationally. Then, like I said, invariably the “custom WebKit browser” people come in and say stuff like this.
[Comment removed by author]
Please don’t encourage people to use browsers which don’t get security updates on most platforms.
This a fascinating case. It’s very unfortunate that the cyclist had to die for it to come before us. However, had the car been driven by a human, nobody would be talking about it!
That said, the law does not currently hold autonomous vehicles to a higher standard than human drivers, even though it probably could do so given the much greater perceptiveness of LIDAR. But is there any precedent for doing something like this (having a higher bar for autonomous technology than humans)?
Autonomous technology is not an entity in law, and if we are lucky, it never will be. Legal entities designed or licensed the technology, and those are the ones the law finds responsible. This is similar to the argument that some tech companies have made that “it’s not us, it’s the algorithm.” The law does not care. It will find a responsible legal entity.
This is a particularly tough thing for many of us in tech to understand.
It’s hard for me to understand why people in tech find it so hard to understand. Someone wrote the algorithm. Even in ML systems where we have no real way of explaining the decision process, someone designed the system, someone implemented it, and someone made the decision to deploy it in a given circumstance.
Not only that, but there’s one other huge aspect nobody is probably thinking about: this incident is probably going to start the ball rolling on certification and liability for software.
Move fast and break things is probably not going to fly in the face of too many deaths from autonomous cars. Even if they’re safer than humans, there are going to be repercussions.
Even if they’re safer than humans, there are going to be repercussions.
Even if they are safer than humans, a human must be held accountable for the deaths they will cause.
Well… it depends.
When a bridge breaks down and kills people due to bad construction practices, do you put in jail the bricklayers?
And what about free software that you get from me “without warranty”?
Indeed. The same would work for software.
At the end of the day, whoever is accountable for the company’s products is accountable for the deaths that those products cause.
Somewhat relevant article that raised an interesting point re: VW cheating emissions tests. I think we should ask ourselves if there is a meaningful difference between these two cases that would require us to shift responsibility.
Very interesting read.
I agree that the team of AI experts shares a moral responsibility for this death, just like the developers at Volkswagen of America shared a moral responsibility for the fraud.
But, at the end of the day, the software developers and statisticians were working for a company that is accountable for the whole artifact it sells. So the legal accountability must be assigned to the company’s board of directors/CEO/stockholders… whoever is accountable for the activities of the company.
What I’m saying is this is a case where those “without warranty” provisions may be deemed invalid due to situations like this.
I don’t think it’ll ever be the programmers. It would be negligence either on the part of QA or management. Programmers just satisfy specs and pass QA standards.
It’s hard to take responsibility for something evolving in such a dynamic environment, potentially used for billions of hours every day, for the next X years. I mean, knowing that, you would expect to have 99.99% of cases tested, but here it’s impossible.
It’s expensive, not impossible.
It’s a business cost and an entrepreneurial risk.
If you can’t take the risks and pay the costs, that business is not for you.
It’s only a higher bar if you look at it from the perspective of “some entity replacing a human.” If you look at it from the perspective of a tool created by a company, the focus should be on whether there was negligence in the implementation of the system.
It might be acceptable and understandable for the average human to not be able to react that fast. It would not be acceptable and understandable for the engineers on a self-driving car project to write a system that can’t detect an unobstructed object straight ahead, for the management to sign off on testing, etc.
From the author of the operating system kernel that gives us dozens of CVEs every month, that puts drivers into kernel-space, comes a condemnation about the way vulnerabilities are being presented, for “attention whoring”.
Thanks Linus! I knew we could count on you.
I’m looking at the statistics for macOS and Windows and the average rate of CVEs over the past few years for each is identical. In 2017 Linux had around twice as many exploits as the average, which is worrying until you notice that macOS had a similar spike in 2015. Windows is broken down by version.
More than a little ironic that you are practically proving Linus’s point about fearmongering in a post condemning his condemnation.
Link? I was talking about the kernel, not a full operating system. One compares Linux to Darwin, not macOS.
Comparing CVE/timeframe is not how you compare the security of software. Not even remotely.
For one, Linux has a much broader attack spectrum since it runs most of the internet out there. Attackers and researchers are probing every line of source code in Linux to get into servers.
There are a few reasons for me clinging to macOS for work (I’m a network engineer, and I code a bit too). The overshadowing first reason is called Microsoft Office. I wish I didn’t have to use it, but I have so far not been able to properly dodge it, and my current employer is entangled beyond belief in the whole Microsoft ecosystem with OneDrive, Teams, Yammer, OneNote, et al., which I’m aware of nice cross-platform replacements for, but am stuck with.
Similarly, I’m depending on OmniGraffle to display and create Visio-compatible drawings.
So why not just run Windows? Well, I had a go at that, although not by personal choice, when I started my current employment half a year ago, where I was handed a mediocre HP laptop while waiting for my MacBook Pro to be available, and it was quite terrible to work with. It became bearable once I had my emacs setup tuned and could sort of live inside emacs, but it was a poor substitute for the terminals and Unix tools I’ve come to depend on.
Another reason, and this may just be me being scarred by previous experience running Linux for work, is the whole multiple-display thing. I have multiple displays at my home office at different rotations, and a widescreen monitor at work. Switching between multiple displays was never painless when I ran Linux, but that may have improved since then. Still, the point about different DPIs has been raised elsewhere here, so I believe it at least partly still applies.
And then there’s stability. It is entirely possible to have a stable Linux environment, but not perpetually. Something will break between releases and you’re forced to tinker and be unproductive. I enjoyed that part when I was younger, and I still do for my hobby systems. But for work, I just want things to work.
Multi-monitor is definitely why I stay on OS X. Perfect it is not, but as someone who has hand-edited X.org files in the past, I’ve never had a great experience with multiple monitors.
And OS X with nix basically solves all my needs for a Unix OS. I get emacs and anything else out of there.
If I were to switch to Linux on the desktop it would probably be NixOS; at least then I can easily move between stable islands of software, with sane backing-out of things.
I’ve often run multi-monitor setups on Linux, and the selection of monitors has usually been rather odd. I usually use arandr to arrange and set them up, and… it just works.
Just curious what sorts of issues you had?
Mostly plugging things in and having the window layouts work sanely. Also at issue: putting the laptop to sleep, unplugging the monitor, and then having nothing come back up until I rebooted the laptop, etc.
In a nutshell: edge cases all over. Not that OS X doesn’t have its own similar problems, but it tends not to lose the ability to display a screen.
Multimonitor support is 90% of why I’m planning to test drive moving away from OSX back to windows :)
Have you run into the bug where sleeping with a monitor attached causes everything to black screen forever? Haven’t been able to escape that :/
I’d want to move to Windows too, but the privacy policy creeps me out.
Yes. It doesn’t happen very often, but just often enough to make me irritated at the best of times. (And I still get the occasional panic on plugging in or removing a monitor.)
I get all my windows moved to one monitor 95% of the time the displays come back on, and there’s a bug in the video card driver (Mac Pro Toob) that crashes everything on-screen (except the mouse pointer) and also crashes displayport audio, but leaves every application running as if everything were peachy. That one gets me every few weeks or so.
Also, I used to run 2 * UHD displays at 60hz, a third at 30hz. But now I can only run one at 60hz, both others run at 30. It’s fucked and it shits me to tears. When I bought it this was the top-shelf you could get, and while I cheaped out on core count, I went for the higher-end video option.
IMO, nix is just a better system, period. It is growing despite some usability concerns I have (which I think can be solved) because it is actually a fundamentally better model. It takes a few weeks of using it to really appreciate it. Bonus points if you use nixops to manage multiple nix machines.
Do you use nix on the desktop? I was wondering if it might work even better there because on my desktop I try not to edit configuration files directly, while on a personal server I often have many changes.
I’ve used it on the desktop but I stopped. I found hardware support to be a significant barrier there, though surmountable with a lot of work. That’s one of those things that becomes a lot easier when the community gets large enough, and is nearly impossible until then. So I’m optimistic for the future.
Agh, hardware support is the bane of my Linux life. Even the XPS I got from Dell supposedly intended for Ubuntu has weird problems. Thanks for the info.
I was wondering if it might work even better there because on my desktop I try not to edit configuration files directly, while on a personal server I often have many changes.
I currently use nix on my desktop, laptop and server, I think the model works well for all of them.
Thanks for the reply. That’s really encouraging! Do you run into problems with hardware support? (Assuming everything is installed using nix.) I’ve had pretty good luck with Thinkpads running Arch, but even then I’ve run into annoying issues. (My current laptop is worse.) If you’ve used a Thinkpad with a different distro, would you say the support is about as good?
tbh, on my Thinkpad OpenBSD worked much better out of the box; the trackpad didn’t work on nix, but I always use the little nib thing anyway, so I haven’t bothered fixing it. I think it partially depends on what graphical environment you enable, and I use i3, which does not do much work for you.
I see. Same here with the nib :P OK, that still sounds reassuring enough to give it a try on my old thinkpad when I have time. Thanks for the info!!
The book is in my university’s library, and apparently I can request scans of up to 30 pages for $4. The book is 361 pages, so I suppose I could hypothetically get the whole thing scanned for about $50. I could ask a librarian some time next week if there are other options, provided you haven’t found an alternative.
Though you mention that you’d be willing to digitize it yourself, in which case the other suggestions about interlibrary loans seem good.
k/q don’t have (f g h) and have fewer “words” so it’s easy to take a look and see if it’s more discoverable.
k)mode:*>#:'=:
q)mode:{first idesc count each group x}
but having 200 primitives shouldn’t be a problem, it’s just a lot to (re)learn. 眾數 (“mode” in Chinese)!
Anyway.
If I need the mode, why wouldn’t I just write:
y{~{.\:+/y=/y
That seems perfectly clear to me, and doesn’t require fiddling with “expert level” at all.
I think stuff like this:
(i. >./) & (#/.~) { ~.
is one of those things where if you needed to compute a trillion modes a day, you might try to think about how you reduce the time and space of the operation and so it’s worth spending forty minutes on it.
btw, this is faster: ~. {~ [: (i. >./) #/.~
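For anyone who doesn’t read k, q, or J, the same mode computation can be sketched in plain JavaScript: tally each value, then return one with the highest count (ties going to the first value seen). This is only a readability aid, not a performance claim:

```javascript
// Mode of an array: count occurrences, return a value with the top count.
// Map preserves insertion order, so ties break toward the first value seen.
function mode(xs) {
  const counts = new Map();
  for (const x of xs) counts.set(x, (counts.get(x) || 0) + 1);
  let best, bestCount = 0;
  for (const [value, count] of counts) {
    if (count > bestCount) {
      best = value;
      bestCount = count;
    }
  }
  return best;
}

mode([2, 7, 7, 3, 7, 2]); // → 7
```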
As a data point: While reading the blog post, I tried writing it in k, and came up with exactly the same answer you did (albeit with slight syntactic changes for kona). It only took a few minutes.
It doesn’t really make sense to compare APLs to logographic languages. Natural languages are optimized for single-pass parsing.
A more apt comparison (and the most popular one, I believe) might be between the notation of APL and mathematical notation. But even then most mathematics can be read without the insane backtracking and potential for error that APL presents.
I don’t agree that “natural” languages are optimised for anything: “L’invité qu’il a dit des folies”, “The horse raced past the barn fell”, or even “为了环保运动必须深化” (roughly, “for the sake of environmental protection, the campaign must deepen”, misparsable as starting with “the environmental-protection campaign”) are all garden-path sentences demonstrating how a lack of clarity early in the sentence, accidental or not, can force the reader to backtrack in a “natural” language.
I understand APL this way: from left to right, top to bottom. If you understand it a different way it may be helpful to explain to people how you are (or came to be) successful in APL instead of trying to mix up my metaphors.
Certainly you can construct natural language sentences that require backtracking. But my statements about the features of natural languages are uncontroversial (ask any linguist) and backed up by significant evidence. In fact, you can easily observe this yourself through the following experiment: take a program in an APL and a sentence in Chinese or even English and see how much someone can understand when you black out a few characters/words.
So I’m a bit surprised as to why you seem to be arguing whether natural languages are like that instead of whether APL is like that.
Comparing APLs to Chinese or other logographic languages is disingenuous because it suggests that Chinese, a language used by billions for thousands of years, shares the flaws of the APLs, a niche language used by at most hundreds for a few decades; it attempts to link a dislike of the design of the APLs with a dislike of logographic languages in general. As I’ve said, this comparison is superficial, and it is much easier to understand the flaws and benefits of APLs through a less synthetic comparison, e.g. with mathematical notation or other programming languages. It’s folly to “mix up the metaphors” of natural language and computer language.
Based on this, would it make sense to make an APL variant where you still type the syntax as in the ASCII variants, but apply a rich formatting environment to it? Do you think there could be better layout techniques for reading the code?
This kind of argument against the APL family always feels odd. You can say APL lacks a modern FFI. You can say APL lacks variable scopes. You can say APL’s workspace feels arcane. But the symbols/functions/operators?
If your native tongue is English, how long would you expect to spend writing a sentence in Farsi using only a dictionary?
I think this argument is limited to J: k/q and other APLs like Dyalog don’t have this problem and the author admits they don’t know those languages. At the end of the day the author is calling for more/better documentation, and I don’t think either of those things would make J worse.
See, most people learn to write by reading lots of things.
Programmers, however, learn to program by programming lots of things.
The idea is that after 5-10 years of programming a certain way, your brain changes enough so you start thinking that way.
To that end, it makes sense to try to find ways to make it easier for newbies to learn how to program lots of things.
Now.
It’s difficult to look up words in a Chinese dictionary.
Most Chinese “words” are made up of a few radicals.
If you can identify them, you can look up the number of strokes in the radical on a table to get an index. You combine all of the indexes together to find what page to look up.
If you can’t identify them, you can try looking it up by stroke count. There are lots of “words” with twelve strokes, so this can take a lot of time.
It might not be possible to improve on either of these (there’s also the four-corner method, but it’s frustrating to learn), which puts Chinese at a disadvantage in recruitment.
Showing people things they can do and say in J that are so much shorter and faster and have fewer bugs than in other languages works only so far as other programmers believe those things are important, and I don’t think most programmers believe they are important enough to be worth an extra 5-10 years of learning J or Chinese.
Saw this in another comment, but it doesn’t really make sense to compare J to Chinese. Natural languages are optimized for single-pass parsing and have lots of redundancy for error correction. Consider how much worse your understanding becomes when you don’t know / misread one logogram vs. even one character in a J program.
It’s about the same as when I read Chinese. Some of the reasons are in my other comment.
I found this thread of people talking in music absolutely fascinating.
I can’t afford to completely stop being productive at work, and I don’t want to spend personal time learning a new language. The kind of code I write today (in Scala) would look completely alien to the me who first picked up Scala 8 years ago, but, crucially, I’ve been able to reach that point incrementally, step by step, spending work time only and remaining productive all the time - even 8 years ago I was able to be as productive in Scala as I had been in Java, and close to as productive as I had been in Python. (And this is why I learnt Scala rather than Haskell - even if the kind of code I’m now writing would be easier in Haskell, I had no way to get there)
Maybe it’s not possible to make a language that has the advantages of APL and the gradual on-ramp of Scala, but if so I can’t see any way most programmers are ever going to adopt such a language, however good the end state may be.
Shops that use less mainstream languages tend to include language courses as part of onboarding. Advanced places using mainstream languages also tend to have this, the extreme being Google. I think if using some language is truly as valuable as the company claims it is, they are likely to support training during work hours.
Hmm. The place I currently work actually has (or at least had at some point) a significant APL contingent, but I’ve never seen a course offered. Will keep my eyes open.
I think the Jane Street position on learning a new language is a nice counterpoint, with OCaml being the language. One point they made (IIRC) is that people who could pick it up during onboarding were likely better programmers than those knowing only, say, Java. Supporting your side are all the languages that succeeded because they built on the syntax, basic capabilities, or runtimes of other ones. Scala is a great example of the latter for Java. Go might be another, since it’s very approachable for developers of imperative languages.
So, if it’s APL, one might either make onboarding easy by going all in, make it a framework/library for an existing language that’s very popular, or make it a DSL for a language supporting macros. I find the last option most interesting, with Julia the leading candidate, since it already targets the numerical sub-field with seamless integration of the C and Python code people already use. I’d default to doubting that standalone APL would offer much productivity over an APL-style DSL in Julia, since developers will spend more time thinking about and operating on data than writing the code itself. Julia code would also be short, being dynamic.
That’s just my speculation. I’d only learn APL to learn the mindset and pick up some useful operators for a DSL or library. We shouldn’t need to go all in with it given the current state of programming tooling.
I believe both touch on interesting subjects, but in this form they are not discussion-ready. A discussion of leadership style cannot be based on a single email, and a discussion of a leadership statement can only happen with an appropriate introduction of the context around the mail.
Why should we treat our fellow users like babies? If you don’t want to read the context, that is on you–let others hold the discussion.
More importantly, instead of just saying “please don’t”, which comes off a lot like “Deep-linking Considered Harmful” (meriting https://meyerweb.com/eric/comment/chech.html), why not structure your post around suggesting alternatives? You mention writing appropriately researched blog posts. It seems like text posts like yours simply linking to & quoting from replies giving the necessary context would work well too.
I find the accusation of “treating fellow users like babies” quite baseless. There’s miles of land between criticising a pattern (which some people don’t agree with, which I in turn find fair) and treating people like babies.
Why do Medium posts about front-end development always make it seem like such a soap opera? The actual people I know who are into front-end development are all reasonably smart people.
I think there must just be some toxic mutual appreciation subsubculture of front-end “engineers” who like to blog drama.
The actual content of this post when you strip out all of the “I’ll support my French bros” and expletives is minuscule and entirely unoriginal.