They [TrueType fonts] look ugly in my opinion and rendering of these is slower (you can measure it).
It’s true they render slower. I have never found it to be a problem. (And I don’t get the author’s deep affection for the Terminus font.)
It is strange, taste in fonts. I’ve searched for a different programming/terminal font a few times but Terminus looks best to me.
It’s pretty much the difference between dot-matrix printing and laser printing.
All the bitmap fonts become either ugly or unreadable on a true retina screen.
So far I’ve only found one solution that is actually robust, which is to manually check that the value is not nil before actually using it.
This seems reasonable to me. If anything, I’d consider knowing how and when to use this kind of check part of basic language competency, since it is how Go was designed.
We expect people to be competent enough to not crash their cars, but we still put seatbelts in.
That’s perhaps a bad analogy, because most people would say that there are scenarios where you being involved in a car crash wasn’t your fault. (My former driver’s ed teacher would disagree, but that’s another post.) However, the point remains that mistakes happen, and can remain undiscovered for a disturbingly long period of time. Putting it all down to competence is counter to what we’ve learned about what happens with software projects, whether we want it to happen or not.
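For reference, a minimal Go sketch of the manual check being discussed (the Named type and greeting function here are hypothetical, purely for illustration):

```go
package main

import "fmt"

// Named is a hypothetical type used to illustrate the manual nil check.
type Named struct {
	Name string
}

// greeting guards against a nil argument by hand; forgetting this check
// is exactly the kind of mistake that only surfaces as a runtime panic.
func greeting(n *Named) string {
	if n == nil {
		return ""
	}
	return "Hello " + n.Name
}

func main() {
	fmt.Println(greeting(&Named{Name: "Gopher"})) // prints "Hello Gopher"
	fmt.Println(greeting(nil))                    // prints an empty line, no panic
}
```

Nothing in the type system forces the `if n == nil` line to exist, which is the crux of the complaint: remove it and the code still compiles.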
I wish more languages had patterns. Haskell example:
{-# LANGUAGE OverloadedStrings #-}
import Data.Text (Text)
data Named = Named { name :: Text } deriving Show
greeting :: Maybe Named -> Text
greeting (Just thing) = "Hello " <> name thing
greeting _ = ""
You still have to implement each pattern, but it’s so much easier, especially since the compiler will warn you when you miss one.
You can even use an optional type in C++. It’s been a part of the Boost library for a while (boost::optional) and was added to the standard library in C++17 as std::optional.
You can do anything in C++ but most libraries and people don’t. The point is to make these features integral.
If it’s not returned as a rule, rather than as an exception, throughout the standard library, it doesn’t matter though. C++, both the stdlib and the wider ecosystem, relies primarily on error handling outside of the type system, as do many languages with even more integrated Maybe types.
Yep. Swift has nil, and by default no type can hold a nil. You have to annotate them with ? (or ! if you just don’t care, see below).
var x: Int = nil // error
var x: Int? = nil // ok
It’s unwrapped with either if let or guard let:
if let unwrapped_x = x {
print("x is \(unwrapped_x)")
} else {
print("x was nil")
}
guard let unwrapped_x = x else {
print("x was nil")
return
}
Guard expects that you leave the surrounding block if the check fails.
You can also force the unwraps with !.
let x_str = "3"
let x = Int(x_str)! // crashes at run time if the conversion fails
Then there are implicitly unwrapped optionals, which are pretty much like Java references in the sense that if the value is nil when you try to use it, you get a run-time crash.
let x: Int! = nil
Hey, I’m the author of the post. And indeed that does work, which is why I’m doing that currently. However, as I try to explain further in the post, this has quite a few downsides. The main one is that the check can easily be forgotten, and the worst part is that if you did forget, you will likely find out only via a runtime panic, which, with a bit of bad luck, will occur in production. The point I try to make is that it would be nice for this to be a compile-time failure.
Sure, and that point came across. I think you’d agree that language shortcomings - and certainly this one - are generally excused (by the language itself) by what I mentioned?
Last week and this one:
Finishing the Aubrey-Maturin series I started reading during my batch at Recurse Center. I’ve really enjoyed the period language, manners, and plotting and am sad to have reached the end.
Those truly are great books. Did you read the last (incomplete) one? I didn’t think it’d be worth it.
I’m about 50 pages from the end of book 20. I probably will look at the incomplete one out of curiosity.
I am totally impressed by the article. The author has been trying to silence his computers for decades; I am doing exactly the opposite. All my workstations in the past were equipped with large fans (not the small and noisy ones, the large ones that run slowly) to generate a decent amount of white noise.
Usually, when I am sitting in my room and nothing is running, I can hear the noise from the trains, cars, kids, etc. outside and from my neighbours inside the house. As soon as I turn my computer on, the room is filled with white noise and I can concentrate on my work. Thus, I personally would never, ever use a silent workstation :)
Am I the only one using “noisy” computers?
Have you tried listening to ‘pink noise’? I don’t use it all that often as I prefer silence, but it does help me concentrate sometimes.
Sounds interesting. Currently, I only have the noise generated by my noisy computer.
How do you generate the noise? Do you use a specific hardware/tool/… ?
I first tried listening to YouTube videos like speps mentioned and that got me interested. I had a shell alias for it named ‘pink’ that used sox, but I don’t seem to have it on the computer I’m currently using. I’m pretty sure it was just something like this:
$ play -n synth pinknoise vol 0.25
I just start it up when I get too distracted. There’s also ‘brownnoise’ and (surprise) ‘whitenoise’. Listening to regular white noise first gives you something to compare it with. I find pink noise to sound kind of like flowing water and not at all distracting. You might be fine with the sound of your computer ;).
$ play -n synth brownnoise vol 0.25
$ play -n synth whitenoise vol 0.25
Actually it might have been this one (sounds more like what I remember): https://askubuntu.com/a/789469
I use the iOS app from https://mynoise.net. It generates various types of noises and lets you change levels, save presets, etc. They also have albums on iTunes, Amazon, and Google Play. Most generators cost money but I find the free set to be good enough. Although it does “coloured noises” I prefer the “rain storm” generator.
While I agree that the article is probably true, the biggest problem with Electron, and a lot of modern software development, is that “developer happiness” and “developer efficiency” are both arguments for Electron, but “user happiness” and “user efficiency” aren’t.
Electron developers are incentivized to develop applications that make users happy in the small: they want something that looks nice, has lots of features, is engaging. The problem is that in their myopic pursuit of this one and only goal, too many apps (and Electron is a vanguard of this trend, but far from the only culpable technology) forget that a user wants to do things other than constantly interact with that one single application for their entire computing existence.
That’s where Electron as a model breaks down. Electron apps are performant enough, and don’t use too much memory, when they are used by themselves on a desktop or a powerful docked laptop. But I shouldn’t have to kill Slack and Zoom every time I unplug my laptop from a power source because I know they’ll cut my battery life in half. I shouldn’t have to ration which Slack teams I join lest I find other important processes swapping or getting OOM-killed.
Even without those concerns, Electron apps selfishly break the consistency of visual design and metaphors used in a desktop experience, calling attention to themselves with unidiomatic designs.
We do need easier and better ways of developing cross-platform desktop applications. Qt seems to be the furthest along in this regard, but for reasons not entirely clear to me it’s never seemed to enter the wider developer consciousness - perhaps because of the licensing model, or perhaps because far fewer people talk about it than actually use it and so it’s never been the “new hotness”.
The author specifically calls out what the problem with Qt is.
Native cross-platform solutions like Qt tend to consider themselves more a library, less a platform, and have little to offer when it comes to creating auto-updating software, installers, and App Store packages.
Don’t be so dismissive of people’s choices with the ‘new hotness’ criticism.
I think you misunderstand what I’m saying. My claim isn’t that Qt would solve every problem that people are looking to Electron to solve if only it were more popular. My claim is merely that of the cross-platform native toolkits, Qt seems to be both the furthest along in terms of capability, and also one of the less recognized tools in that space (compared to Wx, GTK, Mono, Unity; heck, I’ve seen more about Tk and FLTK than Qt lately). I suspect that Qt could grow and support more of what people want if it got more attention, but for whatever reason, of the cross-platform native toolkits it seems to be the least discussed.
Just to be clear, this is the workflow I have currently if I’m targeting Electron. Can you show me something comparable with Qt?
This is an overly simplistic argument that misses the point. Desktop app development has not changed significantly in the past five years, and without Electron we would simply not have many of the Electron-powered cross-platform apps that are popular and used by many today. You can’t talk about “not optimizing for user happiness” when the alternative is these apps just not existing.
I don’t like the Slack app, it’s bloated and slow. I wouldn’t call myself a JavaScript developer, and I think a lot of stuff in that world is too ruled by fashion. But this posturing and whining by people who are “too cool for Electron” is just downright silly.
Make a better alternative. It’s not like making an Electron app is morally worse than making a desktop app. When you say “we need to make desktop app development better” you can’t impose an obligation on anyone but yourself.
without Electron we would simply not have many of the Electron-powered cross-platform apps that are popular and used by many today.
I don’t really remember having a problem finding desktop applications before Electron. There seems to be relatively little evidence for this statement.
Please do not straw man. If you read what you quoted, you will see I did not say no desktop apps existed before Electron. That’s absurd. You also conveniently ignored the part of my sentence where I say “cross-platform”.
Obviously we can’t turn back the clock and rewrite history, so what evidence would suffice for you? Maybe it would be the developers of cross-platform apps like Slack, Atom, and VS Code writing about how Electron was a boon for them. Or it could be the fact that the primary cross-platform text editors we had before Electron were Vim and Emacs. Be reasonable (and more importantly, civil).
I think propping up Vim and Emacs, traditional tools of UNIX folks, as examples of what Slack or VS Code replaced is also a fallacy you’re using to justify a need for Electron. Better comparisons might be XChat/HexChat/Pidgin for chat, UltraEdit or SlickEdit for an editor, and NetBeans or IntelliJ IDEA for an IDE. So, did those products suck compared to Electron apps for reasons due to the cross-platform technology used, or due to other factors? Do they suck at all?
Nah, if anything, they show these other projects could’ve been built without Electron. Whether they should have been or not depends on developers’ skills, constraints, preferences, etc., on top of markets. Maybe Electron brings justifiable advantages there. Electron isn’t making more sophisticated apps than cross-platform native that I’ve seen, though.
I think you and the other poster are not making it very clear what your criterion for evidence is. You’ve set up a non-falsifiable claim that simply depends on too many counterfactuals.
In the timeline we live in, there exist many successful apps written in Electron. I don’t like many of them, as I’ve stated. I certainly would prefer native apps in many cases.
All we need to do is consider the fact that these apps are written in Electron and that their authors have explicitly stated that they chose Electron over desktop app frameworks. If you also believe that these apps are at all useful then this implies that Electron has made it easier for developers to make useful cross-platform apps. I’m really not sure why we are debating about whether a implies b and b implies c means a implies c.
You point out the examples of IntelliJ and XChat. I think these are great applications. But you are arguing against a point no one is making.
“Electron is just fashion, Slack and VS Code aren’t really useful to me so there aren’t any useful Electron apps” is not a productive belief and not a reasonable one. I don’t like Slack and I don’t particularly like VS Code. But denying that they are evidence that Electron is letting developers create cross-platform apps that might not have existed otherwise and that are useful to many people requires a lot of mental gymnastics.
“You point out the examples of IntelliJ and XChat. I think these are great applications. But you are arguing against a point no one is making.”
You argued something about Electron vs cross-platform native by giving examples of modern, widely-used apps in Electron but ancient or simplistic ones for native. I thought that set up cross-platform native to fail. So, I brought up the kind of modern, widely-used native apps you should’ve compared to. The comparison then appeared to be meaningless given Electron conveyed no obvious benefits over those cross-platform, native apps. One of the native apps even supported more platforms far as I know.
“All we need to do is consider the fact that these apps are written in Electron and that their authors have explicitly stated that they chose Electron over desktop app frameworks. If you also believe that these apps are at all useful then this implies that Electron has made it easier for developers to make useful cross-platform apps. “
It actually doesn’t, unless you similarly believe we should be writing business apps in COBOL on mainframes or Visual Basic 6, or keeping the logic in Excel spreadsheets, because the developers or analysts doing it said it was the easiest, most effective option. I doubt you’ve been pushing those to replace business applications in (favorite language here). You see, I believe that people using Electron to build these apps means it can be done. I also think something grounded in web tech would be easier to pick up for people from a web background with no training in other kinds of programming, like cross-platform native. There’s that much evidence for it as a general principle and for Electron specifically. The logic chain ends right here, though:
“then this implies that Electron has made it easier for developers to make useful cross-platform apps.”
It does not imply that in the general case. What it implies is that the group believed it was true. That’s it. All the fads in IT that the industry regretted later tell me that what people believed was good and what objectively was good are two different things with sadly little overlap. I’d have to assess things like what their background was, whether they were biased in favor of or against certain languages, whether they were following writers who told them to use Electron or avoid cross-platform native, whether they personally or via the business were given constraints that excluded better solutions, and so on. For example, conversations I’ve had and watched with people using Electron have shown me most of them didn’t actually know much about the cross-platform native solutions. The information about what would be easy or difficult had not even gotten to them. So, it would’ve been impossible for them to objectively assess whether those were better or worse than Electron. It was simply based on what was familiar, which is an objective strength, to that set of developers. Another set of developers might not have found it familiar, though.
So, Electron is objectively good for people who already know web development and are looking for a solution with good tooling for cross-platform apps to use right now without learning anything else in programming. That’s a much narrower claim than it being better or easier in general for cross-platform development, though. We need more data. Personally, I’d like to see experiments conducted with people using Electron vs specific cross-platform native tooling to see what’s more productive with what weaknesses. Then, address the weaknesses for each if possible. Since Electron is already popular, I’m also strongly in favor of people with the right skills digging into it to make it more efficient, secure, etc. by default. That will definitely benefit lots of users of the Electron apps that developers will keep cranking out.
Hey, I appreciate you trying to have a civilized discussion here and in your other comments, but at this point I think we are just talking past each other. I still don’t see how you can disagree with the simple logical inference I made in my previous comment, and despite spending some effort I don’t see how it at all ties into your hypothetical about COBOL. It’s not even a hypothetical or a morality or efficacy argument, just transitivity, so I’m at a loss as to how to continue.
At this point I am agreeing with everything you are saying except on those things I’ve already said, and I’m not even sure if you disagree with me on those areas, as you seem to think you do. I’m sorry I couldn’t convince you on those specifics, which I think are very important (and on which other commenters have strongly disagreed with me), but I’ve already spent more time than I’d have preferred to defending a technology I don’t even like.
On the other hand, I honestly didn’t mind reading your comments, they definitely brought up some worthwhile and interesting points. Hope you have a good weekend.
Yeah, we probably should tie this one up. I thank you for noticing the effort I put into being civil about it and asking others to do the same in other comments. Like in other threads, I am collecting all the points in Electron’s favor along with the negatives in case I spot anyone wanting to work on improvements to anything we’re discussing. I got to learn some new stuff.
And I wish you a good weekend, too, Sir. :)
Please do not straw man. If you read what you quoted, you will see I did not say no desktop apps existed before Electron
And if you read what I said, I did not claim that you believed there were no desktop apps before Electron. If you’re going to complain about straw men, please do not engage in them yourself.
My claim was that there was no shortage of native applications, regardless of the existence of electron. This includes cross platform ones like xchat, abiword, most KDE programs, and many, many others. They didn’t always feel entirely native on all platforms, but the one thing that Electron seems to have done in order to make cross platform easy is giving up on fitting in with all the quirks of the native platform anyways – so, that’s a moot point.
Your claim, I suppose, /is/ tautologically true – without Electron, there would be no cross-platform Electron-based apps. However, if you roll back the clock to before Electron existed and look at history, there were plenty of people writing plenty of native apps for many platforms. Electron, historically, was not necessary for that.
It does let web developers develop web applications that launch like native apps, and access the file system outside of the browser, without learning new skills. For quickly getting a program out the door, that’s a benefit.
No one is saying there was a “shortage” of desktop applications; I’m not sure how one could even ascribe that belief to someone else without thinking they were completely off their rocker. No one is even claiming that without Electron none of these apps would exist (read my comment carefully). My claim is also not the weird tautology you propose, and again I’m not sure why you would ascribe it to someone else if you didn’t think they were insane or dumb. This is a tactic even worse than straw manning, so I’m really not sure why you are so eager to double down on this.
Maybe abstracting this will help you understand. Suppose we live in a world where method A doesn’t exist. One day method A does exist, and although it has lots of problems, some people use method A to achieve things B that are useful to other people, and they publicly state that they deliberately chose method A over older methods.
Now. Assuming other people are rational and that they are not lying [1], we can conclude that method A helped people achieve things B in the sense that it would have been more difficult had method A not existed. Otherwise these people are not being rational, for they chose a more difficult method for no reason, or they are lying, and they chose method A for some secret reason.
This much is simple logic. I really am not interested in discussing this if you are going to argue about that, because seriously I already suspect you are being argumentative and posturing for no rational reason.
So, if method A made it easier for these people to achieve things B, then, all else equal, given that people can perform a finite amount of work, again assuming they are rational, we can conclude that unless the difference in effort really was below the threshold where it would cause any group of people to have decided to do something else [2], if method A had not existed, then some of the things B would not exist.
This is again seriously simple logic.
I get it that it’s cool to say that modern web development is bloated. For the tenth time, I agree that Electron apps are bloated. As I’ve stated, I don’t even like Slack, although it’s ridiculous that I have to say that. But don’t try to pass off posturing as actual argument.
[1]: If you don’t want to assume that at least some of the people who made popular Electron apps are acting intelligently in their own best interests, you really need to take a long hard look at yourself. I enjoy making fun of fashion-driven development too, but to take it to such an extreme would be frankly disturbing.
[2]: If you think the delta is really so small, then why did the people who created these Electron apps not do so before Electron existed? Perhaps the world changed significantly in the meantime, and there was no need for these applications before, and some need coincidentally arrived precisely at the same time as Electron. If you had made this argument, I would be a lot more happy to discuss this. But you didn’t, and frankly, this is too coincidental to be a convincing explanation.
then why did the people who created these Electron apps not do so before Electron existed?
…wut.
Apps with equivalent functionality did exist. The “Electron-equivalent” apps were a dime a dozen, but built on different technologies. People creating these kinds of applications clearly did exist. Electron apps did not exist before Electron, for what I hope are obvious reasons.
And, if you’re trying to ask why web developers who were familiar with a web toolkit running inside a browser, and unfamiliar with desktop toolkits didn’t start writing things that looked like desktop applications until they could write them inside a web browser… It’s easier to do something when you don’t have to learn new things.
There is one other thing that Electron did that makes it easier to develop cross platform apps, though. It dropped the idea of adhering fully to native look and feel. Subtle things like, for example, the way that inspector panels on OSX follow your selection, while properties dialogs on Windows do not – getting all that right takes effort.
At this point, I don’t really see a point in continuing, since you seem to consistently be misunderstanding and/or misinterpreting everything that’s been said in this entire thread, in replies to both me and others. I’m not particularly interested in talking to someone who is more interested in accusing me of posturing than in discussing.
Thank you for your time.
I am perplexed how you claim to be the misunderstood one when I have literally been clarifying and re-clarifying my original comment only to see you shift the goalposts closer and closer to what I’ve been saying all along. Did you even read my last comment? Your entire comment is literally elaborating on one of my points, and your disagreement is literally what I spent my entire comment discussing.
I’m glad you thanked me for my time, because then at least one of us gained something from this conversation. I honestly don’t know what your motives could be.
I find it strange that you somehow read
I don’t really remember having a problem finding desktop applications before Electron
as implying that you’d said
no desktop apps existed before Electron
@orib was simply saying that there was no shortage of desktop apps before Electron. That’s much different.
…That’s absurd… Obviously we can’t turn back the clock and rewrite history… …Be reasonable (and more importantly, civil.)
You should take your own advice. @orib’s comment read as completely anodyne to me.
I find it strange that you’re leaving out parts of my comment, again. Not sure why you had to derail this thread.
Please, please stop continuing to derail this conversation. I am now replying to your contentless post which itself was a continuation of your other contentless post which was a reply to my reply to orib’s post, which at least had some claims that could be true and could be argued against.
I’m not sure what your intentions are here, but it’s very clear to me now that you’re not arguing from a position of good faith. I regret having engaged with you and having thus lowered the level of discourse.
Please, please stop continuing to derail this conversation… I regret having engaged with you and having thus lowered the level of discourse.
Yeah, I wouldn’t want to derail this very important conversation in which @jyc saves the Electron ecosystem with his next-level discourse.
My intention was to call you out for being rude and uncivil and the words you’ve written since then only bolster my case.
What is even your motive? Your latest comment really shows you think this whole thing is some sort of sophistic parlor game. I have spent too much time trying to point out that there may even exist some contribution from a technology I don’t even like. I honestly hope you find something better to do with your time than start bad faith arguments with internet strangers for fun.
I’m not sure that it’s necessarily true that the existence of these apps is better than the alternative. For a technical audience, sure. I can choose to, grudgingly, use some bloated application that I know is going to affect my performance, and I’m technical enough to know the tradeoffs and how to mitigate the costs (close all Electron apps when I’m running on battery, or doing something that will benefit from more available memory). The problem is that for a non-technical audience who doesn’t understand these costs, or how to manage their resources, the net result is a degraded computing experience, and it affects the entire computing ecosystem. Resource-hog applications are essentially replaying the tragedy of the commons on every single device they are running on, and even as the year-over-year gains in performance are slowing, the underlying problem seems to be getting worse.
And when I say “we” should do better, I’m acknowledging that the onus to fix this mess is going to be in large part on those of us who have started to realize there’s a problem. I’m not sure we’ll succeed as javascript continues to eat the world, but I’ll take at least partial ownership over the lack of any viable contenders from the native application world.
I’m not sure that it’s necessarily true that the existence of these apps is better than the alternative.
I think this and your references to a “tragedy of the commons” and degrading computing experiences are overblowing the situation a bit. You may not like Slack or VS Code or any Electron app at all, but clearly many non-technical and technical people do like these apps and find them very useful.
I agree 100% that developers should be more cautious about using user’s resources. But statements like the above seem to me to be much more like posturing than productive criticism.
Electron apps are making people’s lives strictly worse by using up their RAM—seriously? I don’t like Electron hogging my RAM as much as you, but to argue that it has actually made people’s lives worse than if it didn’t exist is overdramatic. (If you have separate concerns about always-on chat apps, I probably share many of them, but that’s a separate discussion).
but clearly many non-technical and technical people do like these apps and find them very useful.
If you heard the number of CS folks I’ve heard complain about Slack clients destroying their productivity on their computers by lagging and breaking things, you’d probably view this differently.
If you also heard the number of CS folks I’ve heard suggest you buy a better laptop and throwing you a metaphorical nickel after you complain about Slack, you’d probably view it as futile to complain about sluggish Web apps again.
Dude, seriously, the posturing is not cool or funny at this point. I myself complain about Slack being bloated, and IIRC I even complained about this in my other post. Every group I’ve been that has used Slack I’ve also heard complaints about it from both technical and non-technical people.
I’ll leave it as an exercise for you to consider how this is not at all a contradiction with what you quoted. My God, the only thing I am more annoyed by at this point than Electron hipsterism is the anti-Electron hipsterism.
Not posturing–this is a legitimate problem.
Dismissing the very real pain points of people using software that they’re forced into using because Slack is killing alternatives is really obnoxious.
People aren’t complaining just to be hipsters.
Dude, at this point I suspect you and others in this thread are trying to imagine me as some personification of Electron/Slack so that you can vent all your unrelated complaints about them to me. For the last time, I don’t even like Electron and Slack that much. What is obnoxious is the fact that you are just ignoring the content of my comments and using them as a springboard for your complaints about Slack which I literally share.
You seriously call this account @friendlysock?
Your latest comment doesn’t add anything at all. Many users, perhaps even a majority of users, find Slack and other Electron software useful. I don’t and you don’t. I don’t like Slack’s business practices and you don’t either. Seriously, read the damn text of my comment and think about how you are barking up the entirely wrong tree.
“and without Electron we would simply not have many of the Electron-powered cross-platform apps that are popular and used by many today. “
What that’s actually saying is that people who envision and build cross-platform apps for their own satisfaction, fame, or fortune would stop doing that if Electron didn’t exist. I think the evidence we have is that they’d build one or more of: a non-portable app (maybe your claim), a cross-platform app built natively, or a web app. That’s what most were doing before Electron when they had the motivations above. Usually web, too, instead of non-portable.
We didn’t need Electron for these apps. Quite a few would even be portable, either immediately or later with more use/funds. The developers just wanted to use it for whatever reasons, which might vary considerably among them. Clearly, it’s something many people from a web background find approachable, though. That, plus development-time savings, is my theory.
I agree that many people might have ended up building desktop apps instead that could have been made even better over time. I also agree with your theory about why writing Electron apps is popular. Finally, I agree that Electron is not “needed”.
I’m going to preemptively request that we keep “hur dur, JavaScript developers, rational?” comments out of this—let’s be adults: assuming the developers of these apps are rational, clearly they thought Electron was the best choice for them. Anyone “sufficiently motivated” would be willing to write apps in assembler; that doesn’t mean we should be lamenting the existence of bloated compilers.
Is saying developers should think about writing apps to use less resources productive? Yes. Is saying Electron tends to create bloated apps productive? Definitely. Is saying Electron makes the world a strictly worse place productive or even rational? Not at all.
“I’m going to preemptively request that we keep “hur dur, JavaScript developers, rational?” comments out of this—let’s be adults”
Maybe that was meant for a different commenter. I haven’t done any JS bashing in this thread that I’m aware of. I even said Electron is good for them due to familiarity.
“ Is saying Electron makes the world a strictly worse place productive or even rational? Not at all.”
Maybe that claim was also meant for a different commenter. I’d not argue it at all since those using Electron built some good software with it.
I’ve strictly countered false positives in favor of Electron in this thread rather than saying it’s all bad. Others are countering false negatives about it. Filtering the wheat from the chaff gets us down to the real arguments for or against it. I identified one, familiarity, in another comment. Two others brought up some tooling benefits such as easier support for a web UI and performance profiling. These are things one can make an objective comparison with.
forget that a user wants to do things other than constantly interact with that one single application for their entire computing existence.
Quoted for truth.
Always assume that your software is sitting between your user and what they actually want to do. Write interactions accordingly.
We don’t pay for software because we like doing the things it does, we pay so we don’t have to keep doing those things.
perhaps because of the licensing model
I also think so. It’s fine for open source applications, but the licensing situation for proprietary applications is tricky. Everyone who says you can use Qt under LGPL and just have to dynamically link to Qt, also says “but I’m not a lawyer so please consult one”. As a solo developer working on building something that may or may not sell at some point, it’s not an ideal situation to be in.
I think the big caveat to this is that for a great many of the applications I see that have electron-based desktop apps, they are frontends for SAAS applications. They could make money off a GPL application just as easily as a proprietary one, especially since a lot of these services publish most of the APIs anyway.
Granted, I’d love to see a world where software moved away from unnecessary rent-seeking and back to actually selling deliverable applications, but as long as we’re in a SAAS-first world the decision to release a decent GPL-ed frontend doesn’t seem like it should be that hard.
The situation is more nuanced than that, because Electron provides developers with a better workflow and a lower barrier to entry, and that results in applications and features that simply wouldn’t exist otherwise. The apps built with Electron might not be as nice as native ones, but they often solve real problems, as indicated by the vast number of people using them. This is especially important if you’re running Linux, where apps like Slack likely wouldn’t even exist in the first place, and then you’d be stuck trying to run them via Wine and hoping for the best.
While Qt is probably one of the better alternatives, it breaks down if you need to have a web UI. I’d also argue that the workflow you get with Electron is far superior.
I really don’t see any viable alternatives to Electron at the moment, and it’s likely here to stay for the foreseeable future. It would be far more productive to focus on how Electron could be improved in terms of performance and resource usage than to keep complaining about it.
I never claimed that it doesn’t make life easier for some developers, or even that every electron app would have been written with some other cross-platform toolkit. Clearly for anyone who uses Javascript as their primary (or, in many cases, only) language, and works with web technology day in and day out, something like electron is going to be the nearest to hand and the fastest thing for them to get started with.
The problem I see is that what’s near to hand for developers, and good for the individual applications, ends up polluting the ecosystem by proliferating grossly, irresponsibly inefficient applications. The problem of inefficiency, and the subsequent negative effect it has on the entire computing ecosystem, is compounded by the fact that most users aren’t savvy enough to understand the implications of the developers’ technology choices, or even capable of looking at the impact a given application is having on their system. Additionally, software as an industry is woefully prone to adopting local-maximum solutions: even if something better did come along, we’re starting to hit an inflection point of critical mass where Electron will continue to gain popularity. Competitors might stand a chance if developers valued efficiency and respected the resources of their users’ devices, but if they did we wouldn’t be in this situation in the first place.
Saying that developers use Electron simply because they don’t value efficiency is absurd. Developers only have so much time in a day. Maintaining the kinds of applications built with Electron using alternatives is simply beyond the resources available to most development teams.
Again, as I already pointed out, the way to address the problem is to look for ways to improve Electron as opposed to complaining that it exists in the first place. If Electron runtime improves, all the applications built on top of it automatically get better. It’s really easy to complain that something is bloated and inefficient, it’s a lot harder to do something productive about it.
but I shouldn’t have to be killing slack and zoom every time I unplug my laptop
Yes, you shouldn’t. But that is not Electron’s fault.
I’ve worked on pgManage, and even though it is based on Electron for the front-end, we managed to get it to work just fine and use very little CPU/memory*. Granted, that’s not a chat application, but I also run Riot.im all day every day and it shows 0% CPU and 114M of memory (about twice as much as pgManage).
Slack is the worst offender that I know of, but it’s because the people who developed it were obviously used to “memory safe” programming. We had memory issues in the beginning, with the GC not knowing what to do when we were doing perfectly reasonable things. But we put the effort in and made it better.
We have a strong background in fast C programs, and we applied that knowledge to the JS portion of pgManage and cut down the idle memory usage to 58M. For this reason, I’m convinced that C must never die.
* https://github.com/pgManage/pgManage/blob/master/Facts_About_Electron_Performance.md (Note: the version numbers referred to in this article are for Postage, which was later re-branded pgManage)
*Edit for spelling*
“But that is not Electron’s fault.”
It happens by default with a lot of Electron apps. It doesn’t so much with native ones. That might mean it’s a side effect of Electron’s design. Of course, I’d like to see more data on different use-cases in case it happens for some things but not others. In your case, did you have to really work hard at keeping the memory down?
Edit: The Github link has some good info. Thanks.
It happens by default with a lot of Electron apps.
I see where you’re coming from, and you’re right, but if more JS devs had C experience (or any other non-memory-managed language), we would all be better for it. The GC spoils, and it doesn’t always work.
It doesn’t so much with native ones.
Yes, but I think that greatly depends on the language, and how good the GC is.
That might mean it’s a side effect of Electron’s design.
Maybe, but if pgManage can do it (a small project with 5 people working on it), then I see absolutely no reason why Slack would have any difficulty doing it.
In your case, did you have to really work hard at keeping the memory down?
Yes and no. Yes it took time (a few days at most), but no because Electron, and Chrome, have great profiling tools and we were able to find most issues fairly quickly (think Valgrind). IIRC the biggest problem we had at the time was that event listeners weren’t being removed before an element was destroyed (or something like that).
One thing I’ll note: look at the IPC ratio of Electron apps versus other native apps. You’ll notice a lot of TLB misses and other such problems, meaning that the Electron apps are mostly sitting there forcing the CPU to behave in ways it really isn’t good at optimizing.
In the end, the Electron apps just end up using a lot of power spinning the CPU around compared to the rest. This is technically also true of web browsers.
You may use perf on linux or tiptop to read the cpu counters (for general ipc eyeballing i’d use tiptop): http://tiptop.gforge.inria.fr
Well yes a lot of their issues are caused by having APIs that are too open. To be fair, back in those days, the tech ecosystem was definitely pushing for this openness. It was considered a good thing. Now, not so much..
In our buzzwords-driven field?
Probably people considered API access “a good thing” just because “Facebook/Google is doing this too!”
But the problem was not the technology back then, just like AI is not the solution right now.
It’s the business model.
I remember a younger Zuckerberg explaining the world how privacy had no value for modern people.
He meant it!
back in those days, the tech ecosystem was definitely pushing for this openness.
I would hardly call 2015 “those days”.
I’ve talked to some people in and close to this industry and it feels like we’re a good 15 years away from autonomous vehicles. The other major issue we’re not addressing is that these cars cannot be closed source like they are now. At a minimum, the industry needs to share with each other and be using the same software or same algorithms. We can’t enter a world where Audi claims their autonomous software is better than Nissan’s in adverts.
People need to realize they won’t be able to own these cars or modify them in any way if they ever do come to market. The safety risks would be too great. If the cars are all on the same network, one security failure could mean a hacker could kill thousands of people at once.
I really think the current spending on this is a huge waste of money, especially in America, where tax money given to companies to subsidize research could be used to get back the train system we lost and move cities back inward like they were in the early 1900s. I’ve written about this before:
http://penguindreams.org/blog/self-driving-cars-will-not-solve-the-transportation-problem/
If the cars are all on the same network
Any company that is connecting these cars to the Internet is being criminally negligent.
I say that as an infosec person who worked on self-driving cars.
Also human-driven cars.
They have to be able to communicate though to tell other cars where they intend to go or if there is danger ahead.
Networking that doesn’t represent a national security threat, and nothing that a self-driving car shouldn’t already be designed to handle.
What happens when someone discovers a set of blinker indications that can cause the car software to malfunction?
Serious question (given that you’ve worked on self-driving cars): is computer vision advanced enough today to be able to reliably and consistently detect the difference between blinkers and hazards for all car models on the roads today?
As often is the case, some teams will definitely be able to do it, and some teams won’t.
Cities and States should use it as part of a benchmark to determine which self-driving cars are allowed on the road, in exactly the same way that humans must pass a test before they’re allowed a drivers license.
The test for self-driving cars should be harder than the test for humans, not easier.
They could use an entirely separate cell network that isn’t connected to the Internet. All Internet-enabled devices, like the center console, could use the standard cell network, and there could be a read-only bus between the two for sensor data like speed, oil pressure, etc.
The other major issue we’re not addressing is that these cars cannot be closed source like they are now.
I strongly agree with this. I believe autonomous vehicles are the most important advancement in automotive safety since the seatbelt. Can you imagine if Volvo had kept a patent on the seatbelt?
The autonomous vehicle business shouldn’t be about whose car drives the best, it should be about who makes the better vehicles. Can you imagine the ads otherwise? “Our vehicles kill 10% fewer people than our competitors!” Ew.
I don’t buy your initial claims.
When you said “we’re 15 years away from autonomous vehicles”, what do you mean exactly? That it’ll be at least 15 years before the general public can ride in them? Waymo claims this will happen in Phoenix this year: https://amp.azcentral.com/amp/1078466001 That the majority of vehicles on US roads will be autonomous? Yeah, that’ll definitely take over 15 years!
We can have a common/standard set of rigorous tests that all companies need to pass but we don’t need them to literally all use the same exact code. We don’t do that for aeroplanes or elevators either. And the vanguard of autonomous vehicles are large corporations that aren’t being funded by tax dollars.
That said, I agree that it would be better to have more streetcars and other light rail in urban areas.
It will be at least 15 years before fully autonomous vehicles are available for sale or unrestricted lease to the general public. (In fact, my estimate is more like twice that.) Phoenix is about the most optimal situation imaginable for an autonomous vehicle that’s not literally a closed test track. Those vehicles will be nowhere near equipped to deal with road conditions in, for example, a northeastern US winter, which is a prerequisite to public adoption, as opposed to tests which happen to involve the public.
Also, it’s a safe bet this crash will push back everyone’s timelines even further.
I think you are correct about sales to the public but a managed fleet that the public can use on demand in the southern half of the country and the west coast seems like it could happen within 15 years.
Mandatory locking… Seriously, don’t do it. Advisory locks are the only thing that makes any sense…Still not convinced?… Look, imagine someone is holding a mandatory lock on a file, so you try to read() from it and get blocked. Then he releases his lock, and your read() finishes, but some other guy reacquires the lock. You fiddle with your block, modify it, and try to write() it back, but you get held up for a bit, because the guy holding the lock isn’t done yet. He does his own write() to that section of the file, and releases his lock, so your write() promptly resumes and overwrites what he just did.
How is this problem unique to mandatory file locking? This is the quintessential race condition example. You’re aware that this could happen, so you take an exclusive mandatory lock on the file, do your read, do your write, then release the lock. How is the situation any better for advisory locks? If you’re going to be releasing your advisory lock between the read and the write, how is that not going to result in lost writes?
And then he says, still on the topic of mandatory file locks,
What good does that do anyone? Come on. If you want locking to do you any good whatsoever, you’re just going to have to acquire and release your own locks.
How does mandatory locking equate to “not acquiring and releasing one’s own locks”, but advisory locking somehow does?
I must be missing something obvious here…
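A minimal sketch of the point being argued here, in Python using POSIX advisory locks via the stdlib `fcntl` module (the function name and file format are made up for illustration): whichever flavor of locking you use, the lost-update race only goes away if one exclusive lock is held across the entire read-modify-write.

```python
# Sketch: hold ONE exclusive advisory lock across the whole
# read-modify-write, instead of locking the read and the write
# separately (which is the race described in the quote above,
# mandatory or not).
import fcntl

def increment_counter(path: str) -> None:
    with open(path, "r+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)       # advisory exclusive lock
        try:
            value = int(f.read() or "0")    # read...
            f.seek(0)
            f.write(str(value + 1))         # ...modify and write under the SAME lock
            f.truncate()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
```

With advisory locks, every cooperating process has to call `flock` itself; but the race in the quoted example comes from releasing the lock between the read and the write, not from the locks being advisory.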
As a Java programmer, it frightens me how much companies depend on pulling hundreds of third party jars into their Java projects to speed up development. It’s pretty much a guarantee that at any given time a dozen of them have serious security flaws…
And if anyone thinks only Equifax has such lax and incompetent security, they’re crazy. The state of computer security in 2017 is astonishingly bad.
The solution to this is having scripts that continually go through your projects and show you outdated jars. It should be part of the CI. Trouble is you don’t often know if a jar update is security related or not like with traditional package management. But still on production projects, you really shouldn’t let your dependencies rot either.
I mean, the alternative is to write the code and security holes yourself. You’ll have the same security issues either way :P
The problem is that bundling dependencies prevents them from being updated in a timely fashion (i.e., as part of system updates). Nobody’s saying third-party code is bad.
Link to expanded text/slides version (from the youtube comments): http://idlewords.com/talks/web_design_first_100_years.htm
Also see part 2 in which he makes some recommendations on how to avoid the multithreading pitfalls.
This story’s discussion on LWN is pretty lively: https://lwn.net/Articles/730630/.
For what it’s worth: Any package manager with a post/pre install process has the same exact issue.
I wouldn’t say exactly. Part of the problem is allowing unvetted randos to publish to the repository, but that’s not a necessary part of package manager design.
Whenever I set up a new OpenBSD system, I run “pkg_add chrome” which always fails, because the package is actually called chromium (although the binary in the package is called chrome, hence the confusion). It would probably be bad for somebody to come along and squat on the chrome name, but I don’t worry about this.
Yep, installing a package from an unvetted rando would be about as dumb as installing a CA root certificate from an unvetted rando.
Part of the problem is allowing unvetted randos to publish to the repository,
Even if you vet every publisher and vet every publish, someone can still find a MITM weakness and then slip hacks into libraries’ post-install hooks.
Oh, of course, why didn’t I think of that?
EDIT: I honestly can’t believe people on Lobsters are upvoting this person’s troll response.
[Comment removed by author]
That’s a very generous interpretation, but even then MITM is just one of many surfaces to protect.
Generous? The topic of the subthread was MITM attacks and tedu said “such obvious weaknesses”. TLS and signatures are two of the most well-known defenses against MITM attacks. Seems pretty clear-cut to me.
Too much time spent using npm? It’s certainly possible for a package manager that solves these problems to exist, because such package managers do exist. Debian has been around for how long? When was the last time a typo-squatting deb had to be recalled?
I was being completely sarcastic. There are no package managers that are immune to security issues and to suggest something so silly is really trolly dude.
Here’s how to know if your package manager is vulnerable to this situation:
If you checked both of these boxes you now have a chance to encounter this exact issue.
There are no package managers that are immune to security issues
I don’t think the claim was immunity to security issues. The claim was that there are package managers resistant to phishing and homograph attacks.
A significant mitigation is often reasonable, particularly when a total solution isn’t readily available. In this case, I would argue that there is literally nothing that can stop a determined enough individual with the right skills / resources / access from using the package manager against an adversary. But if we can make it more difficult, or just more of a pain, then we’ll still have improved the situation.
Yup, remember when this happened? https://gist.github.com/titanous/3e4829f79dbd1be11295
Oh wow yes! Haha! I also remember the (short lived) reaction https://tonyarcieri.com/lets-figure-out-a-way-to-start-signing-rubygems once people realized they could be post-installing really unknown code.
Why not? If they stored their private key elsewhere (ie, not on rubygems.org) and the package manager automatically checks signatures, how would this attack have succeeded?
This is interesting, because it is one of the numerous attacks that Nix, and packaging software with Nix, protects against.
As posted elsewhere:
If you checked both of these boxes you now have a chance to encounter this exact issue.
Nix may be special in that it also has the concept of self-contained processing? This is a rather large ask of normal package managers. Can you provide more insight?
#1 is effectively a “no”. The hooks are run, but in a sandbox without access to the general application, operating environment, or the network.
EDIT: of course, the install time problem also impacts any package manager where you run the code it manages, assuming the running code ever interacts with sensitive internal data or environment. However, Nix very nicely protects against install-time trouble-makers.
No network or operating environment is a killer. I get that it means nix packages are secure, but quite a few popular npm packages determine what OS you’re on and then download binaries (instead of building from source).
I don’t know the details for NPM specifically, but we’re quite comfortable patching around issues like that, and indeed have done so many times. Sometimes working with the tool we’re dealing with, sometimes working around the tool :) For example pre-fetching what it is looking for and happening to put it where npm will look for it. Sometimes just patching the source of the package, if that is what it takes.
You change nix based on the needs of the library author? Even if npm could create a sandbox for post install and it worked with what they already had, there are so many packages and authors that I doubt it would be feasible.
I’m confused by that question.
npm’s program has a post install hook that allows a package to run code. That code can do anything it wants in the context of the user’s access and authority. The reason for this hook existing is so the package can setup whatever it needs to be usable.
I think what you’re suggesting is that npm handle binary dependencies, which it could definitely do, but there are so many diverse needs in just that use case that the feature would probably dwarf other things.
I’m not sure what’s confusing there.
You mentioned downloading versions of packages generated for specific systems as something that packages handle themselves. It seems to me like figuring out what version of a package is needed and installing it is the job of the package manager.
Ignoring sandboxing – it’s pointless, I’m going to run the code that gets installed anyways – having packages do it themselves seems like a lot of duplicated code and a lot of potential for bugs, MITM attacks, injections, etc. I doubt anyone is auditing them for certificate validation, SSL use, and other basics.
You mentioned downloading versions of packages generated for specific systems as something that packages handle themselves. It seems to me like figuring out what version of a package is needed and installing it is the job of the package manager.
Here’s an example of what a package might do with a postinstall process: https://github.com/sass/node-sass/blob/master/scripts/build.js
There is no realistic way for npm (or rubygems or pip or…) to both be simple and also handle all usecases for a postinstall process.
It’s all or nothing, IMO.
I wonder if this is the release where they ditch Unity and go back to Gnome? Wish they’d picked KDE. Way better accessibility story.
However, given that Ubuntu Unity is simply being replaced with the existing Ubuntu Gnome I guess they could easily have gone with Kubuntu instead.
I wonder what this means for Mir, their Wayland competitor?
It would be a shame if they dumped 4 years of time and money into Mir when they could have been dumping it into Wayland.
From the Ars article at https://arstechnica.com/information-technology/2017/04/ubuntu-unity-is-dead-desktop-will-switch-back-to-gnome-next-year/:
By switching to GNOME, Canonical is also giving up on Mir and moving to the Wayland display server, another contender for replacing the X window system.
Those are specifically mentioned in the article as something they plan to keep:
The choice, ultimately, is to invest in the areas which are contributing to the growth of the company. Those are Ubuntu itself, for desktops, servers and VMs, our cloud infrastructure products (OpenStack and Kubernetes) our cloud operations capabilities (MAAS, LXD, Juju, BootStack), and our IoT story in snaps and Ubuntu Core.
Snaps are designed to run on distros other than ubuntu, so they’re pretty much completely independent of Unity.
It’s probably safe to assume that any Ubuntu project unrelated to Unity will continue development.
They already dumped upstart and now Unity, why not Mir? If there are advantages of Mir in contrast to Wayland please let me know because I know very little about the differences of both display servers (protocols).
I am also pretty happy about the announcement, because it looks like Canonical won’t continue developing an alternative solution for everything in-house; instead they will take the effort to improve an already existing solution, which will hopefully be advantageous for anyone using GNOME and not running Ubuntu. In other words, this decision is good news for the Linux desktop.
They’re giving up Mir and moving to Wayland.
Reprising and reformatting something I wrote on that other site about this:
The problem with JWT/JOSE is that it’s too complicated for what it does. It’s a meta-standard capturing basically all of cryptography which wasn’t written by or with cryptographers. Crypto vulnerabilities usually occur in the joinery of a protocol. JWT was written to maximize the amount of joinery.
Negotiation: Good modern crypto constructions don’t do complicated negotiation or algorithm selection. Look at Trevor Perrin’s Noise protocol, which is the transport for Signal. Noise is instantiated statically with specific algorithms. If you’re talking to a Chapoly Noise implementation, you cannot with a header convince it to switch to AES-GCM, let alone “alg:none”. The ability to negotiate different ciphers dynamically is an own-goal. The ability to negotiate to no crypto, or (almost worse) to inferior crypto, is disqualifying.
Defaults: A good security protocol has good defaults. But JWT doesn’t even get non-replayability right; it’s implicit, and there’s more than one way to do it.
Inband Signaling: Application data is mixed with metadata (any attribute not in the JOSE header is in the same namespace as the application’s data). Anything that can possibly go wrong, JWT wants to make sure will go wrong.
Complexity: It’s 2017 and they still managed to drag all of X.509 into the thing, and they indirect through URLs. Some day some serverside library will implement JWK URL indirection, and we’ll have managed to reconstitute an old inexplicably bad XML attack.
Needless Public Key: For that matter, something crypto people understand that I don’t think the JWT people do: public key crypto isn’t better than symmetric key crypto. It’s certainly not a good default: if you don’t absolutely need public key constructions, you shouldn’t use them. They’re multiplicatively more complex and dangerous than symmetric key constructions. But just in this thread someone pointed out a library — auth0’s — that apparently defaults to public key JWT. That’s because JWT practically begs you to find an excuse to use public key crypto.
These words occur in a JWT tutorial (I think, but am not sure, it’s auth0’s):
“For this reason encrypted JWTs are sometimes nested: an encrypted JWT serves as the container for a signed JWT. This way you get the benefits of both.”
There are implementations that default to compressing plaintext before encrypting.
There’s a reason crypto people table flip instead of writing detailed critiques of this protocol. It’s a bad protocol. You look at this and think, for what? To avoid the effort of encrypting a JSON blob with libsodium and base64ing the output? Burn it with fire.
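To make the negotiation point concrete, here is a stdlib-only Python sketch of the historic “alg: none” failure mode. No real JWT library is being exercised; the verifier below is deliberately naive. The problem it illustrates: the token itself names the algorithm, so a credulous verifier can be talked out of checking any signature at all.

```python
# Illustration only: forging a token a naive verifier will accept,
# because the attacker-controlled header says "don't check anything".
import base64
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def forge_unsigned_token(claims: dict) -> str:
    header = {"alg": "none", "typ": "JWT"}
    return ".".join([
        b64url(json.dumps(header).encode()),
        b64url(json.dumps(claims).encode()),
        "",  # empty signature segment
    ])

def naive_verify(token: str) -> dict:
    """A broken verifier that honors the in-band algorithm field."""
    head_b64, body_b64, _sig = token.split(".")
    pad = lambda s: s + "=" * (-len(s) % 4)
    header = json.loads(base64.urlsafe_b64decode(pad(head_b64)))
    if header["alg"] == "none":  # the own-goal: trusting negotiation
        return json.loads(base64.urlsafe_b64decode(pad(body_b64)))
    raise ValueError("real signature check would go here")

# The forged, signature-free token sails through.
token = forge_unsigned_token({"sub": "admin"})
assert naive_verify(token) == {"sub": "admin"}
```

A statically instantiated design like Noise simply has no such header for a verifier to honor.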
I have a related but somewhat OT question. In one of the articles linked to by the article [1], they say this:
32 bytes of entropy from /dev/urandom hashed with sha256 is sufficient for generating session identifiers.
What purpose does the hash serve here besides transforming the original random number into a different random number? Surely the only reason to use hashing in session ID generation is if there’s no good RNG available in which case one might do something like hash(IP, username, user_agent, server_secret) to generate a unique token? (And in the presence of server-side session storage there’d be no point to including the secret in the hash because its presence in the session table would prove its validity.)
[1] https://paragonie.com/blog/2015/04/fast-track-safe-and-secure-php-sessions
Yeah, if urandom is actually good, then hashing it serves no real purpose. (In fact if you want to get mathematical, it can only decrease the randomness, but luckily by an absolutely negligible amount). Certain kinds of less-than-great randomness can be improved by hashing (as a form of whitening), but no good urandom deserves to be treated that way.
The reason for that is PHP is weird. PHP hashes session entropy with MD5 by default. Setting it to SHA256 just minimizes the entropy reduction by this step. There is no “don’t hash, just use urandom” configuration directive possible (unless you’re rolling your own session management code, in which case, please just use random_bytes()).
This is no longer the case in PHP 7.1.0, but that blog post is nearly two years old.
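The upshot of this subthread, sketched in Python (function names here are illustrative, not any real framework’s API): if the bytes already come from the OS CSPRNG, hashing them just maps one unpredictable string onto another.

```python
# Two ways to generate a session ID. The first is sufficient on its
# own; the second mimics what older PHP effectively did (CSPRNG
# output piped through a hash), where the hash adds nothing.
import hashlib
import secrets

def session_id() -> str:
    # 32 bytes from the OS CSPRNG, hex-encoded.
    return secrets.token_bytes(32).hex()

def php_style_session_id() -> str:
    # urandom -> SHA-256: same unpredictability, one pointless step.
    return hashlib.sha256(secrets.token_bytes(32)).hexdigest()
```

Both return 64 hex characters backed by 32 bytes of OS entropy; the hash in the second version exists only because PHP’s session code always hashed its entropy source.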
Thanks for that very thorough dissection of JWT. Are there web app frameworks/stacks that do have helpfully secure and well-engineered defaults that you’d recommend?
The author refers to Fernet as a JWT alternative. https://github.com/fernet/spec/blob/master/Spec.md
However, Fernet is not nearly as comprehensive as JOSE and does not appear to be a suitable alternative.
And comments on https://datatracker.ietf.org/wg/cose/documents/ ?
I think he underestimates the new lifeblood that was injected into C++ with the new standards. C++11, C++14, and C++17 have transformed the language into something completely modern and fairly safe. As a developer of programs that take safety seriously, I believe there is more to software than safety. Safety is a necessary evil the same way insurance companies are. Nobody likes insurance, nobody likes safety.
No, modern C++ isn’t safe. Dangling references are still possible. Using an object after you have std::move()d it is still possible. Mutating a shared object without coordinating with other threads is still possible. Using uninitialized memory is still possible.
I did say ‘fairly safe’. While you are right that all those are possible, in modern practice it’s not really a concern because with move semantics you can write more code with value semantics vs references.
My point is it’s fairly safe for its expressive power.
I did say ‘fairly safe’.
Unfortunately memory safety is more like pregnancy. Either you’re safe full-stop, or you just temporarily don’t know what your CVEs are.
That’s a great point if you’re a mathematician, but not so compelling if you’re writing software for a living. The reality of modern C++ is that you don’t lose a lot of time dealing with problems which are due to a lack of memory safety. But Rust costs you a lot of time in dealing with its tricky memory management. You may not ever have null pointer segfaults in Rust, but you’re still programming slower.
Move semantics is a good thing. The problem is that C++ doesn’t get it right. For move semantics to work correctly, the language must statically forbid reusing objects that have been moved elsewhere, otherwise double cleanup of the same resource could still happen.
This is a common misunderstanding of move semantics. Moving from an object should leave it in a valid but unspecified state. Meaning that after the move you can still use the object safely, though it obviously won’t have the same value as before.
IMO, the way you have phrased it requires too much mental gymnastics, and a far simpler way to view this is that a std::thread doesn’t always correspond to an actual thread.
The problem is that if a std::thread isn’t always an actual thread, then the member functions std::thread::join and std::thread::detach are unsafe to call. Pick your poison: either moves are unsafe, or all member functions of any movable-but-not-copyable class are unsafe.
What does “unsafe” mean in this context? If std::thread::join on a moved-from thread throws an exception, why does that make it unsafe? When I access an array in Rust with an invalid index, it panics. Does that make it unsafe to index arrays?
Do you realize Gödel’s incompleteness theorems prove that you either have an unsafe language or you’ll suffer from a lack of expressiveness? For whatever definition of “safe”: if you can prove that your program has that property, your proof language (i.e. your type system) is either inconsistent or incomplete.
Yes, and I prefer incomplete.
Edit: Also see Gödel’s completeness theorem, which states that, given a first-order theory, a statement is provable iff it holds in all models. Contrapositively, a statement isn’t provable iff there is a model in which it doesn’t hold.
Now, see:
For every program you can’t prove correct, I can come up with a conforming language implementation in which your program breaks.
The most wondrously remarkable result I have seen in this domain is….
The 8000th Busy Beaver number eludes ZF set theory
Utterly fascinating.
Conversely, an example of an explicitly incomplete but very useful language is eBPF.
Suffer how? Idris won’t let me write code that maybe works but I don’t actually know why it works and under what conditions it would stop working (or rather, it will force me to be explicit about the fact that I don’t know why it works and under what conditions it would stop working). I don’t see that as a downside.
But then you also can’t write an Idris compiler in Idris! We sure hope the current compiler we have works.
Because you can’t prove the correctness of a consistent type system using the type system itself. Of course, if you take all the features of Idris into account, you can implement an Idris compiler in Idris, but that’s because Idris, with all its features, has an inconsistent type system (f x = f x is a proof of anything). If that’s your point, then I’d say going into an infinite loop isn’t any better than crashing with a segfault, so your choice of using Idris is a subjective trade-off.
Besides, can you really write performant code in a total dependently-typed programming language? If you have a function that accepts two vectors of the same length, it’s very hard to avoid constructing the proof of the equality of their lengths at runtime! You can’t justify spending time on constructing proofs in the world of high performance computing.
Because you can’t prove the correctness of a consistent type system using the type system itself.
Doesn’t that only apply to systems that contain PA? Or you could use the traditional large cardinal hack.
f x = f x is a proof of anything
Yes but it’s explicitly non-total.
I’d say going into an infinite loop isn’t any better than crashing with a segfault
It’s less likely to lead to security vulnerabilities, though that’s not my main argument.
Besides, can you really write performant code in a total dependently-typed programming language? If you have a function that accepts two vectors of the same length, it’s very hard to avoid constructing the proof of the equality of their lengths at runtime! You can’t justify spending time on constructing proofs in the world of high performance computing.
Eh maybe. Of course if you define high performance computing to be computing where you need the maximum possible performance then your statement is tautologically true. But I think the niche where you need absolutely as much performance as possible shrinks every day. I’ve done heavy number crunching work (machine-learning-like stuff) where we needed to use a cluster but we were still always happy to sacrifice a little performance for a stronger correctness guarantee. And I find there is rarely a significant runtime performance impact in the first place. Why do you believe those two vectors are the same length? Maybe they’ve come from the same source, in that case you can probably pass them around in a single datatype (using a value type, so zero overhead compared to storing both) that retains the fact that they’re the same length.
Doesn’t that only apply to systems that contain PA? Or you could use the traditional large cardinal hack.
Well, this branch of the discussion is happily leaving the boundaries of my knowledge, as I’m not a mathematician. I borrowed the explanation of why Idris’s type system couldn’t prove the correctness of an Idris compiler from the Idris documentation, so I really can’t reconstruct the argument from scratch. Now this is very embarrassing, since I can’t find a link to where the documentation says that. I embrace my status as the town idiot in this instance :/
It’s less likely to lead to security vulnerabilities, though that’s not my main argument.
Well, a null pointer access is also unlikely to lead to security vulnerabilities, but a dangling pointer is a disaster of course.
But I think the niche where you need absolutely as much performance as possible shrinks every day.
Can you really approach anywhere near “as much performance as possible” with Idris? I don’t mean within 10%-20%, I mean 1000%. Even without having to prove myself in C++, I find it very hard to express high-level concepts in a way that avoids heap allocation. Is that really possible with Idris? “Here’s some result along with its proof of correctness, and by the way, both the result and the proof are allocated on the stack!”
Maybe they’ve come from the same source
I understand the argument here, but I’m really not sure if this will scale to real problems. If the proof is that simple, you don’t need as sophisticated a type system as that of Idris anyway, and if it’s more complicated, can you still avoid proof construction at runtime? Or even if the proof disappears at runtime, you might still have to pessimize your code in order to prove it. An example here would be how in C you’d use a T * and an integer of correct size to store a vector of values, but in idris you’d have to define it inductively (like how Vec is defined).
Can you really approach anywhere near “as much performance as possible” with Idris? I don’t mean within 10%-20%, I mean 1000%. Even without having to prove myself in C++, I find it very hard to express high-level concepts in a way that avoids heap allocation. Is that really possible with Idris? “Here’s some result along with its proof of correctness, and by the way, both the result and the proof are allocated on the stack!”
I don’t know. I would expect performance to be Haskell-like (i.e. 10%-20% behind C, not orders of magnitude) and most of the types to go away at runtime. Sometimes you as the programmer have to fiddle a little - code rarely needs to do lots of different things at top performance, there is always some structure to what you’re doing (if only because no human could express enough unique operations to occupy a processor for more than a few seconds), e.g. a loop, and you hoist the proof of correctness out of the loop. If you’re doing HPC you’re already in the business of hoisting constants out of hot loops and doing it with your proof witnesses is not really any different from doing it with any other value.
I understand the argument here, but I’m really not sure if this will scale to real problems. If the proof is that simple, you don’t need as sophisticated a type system as that of Idris anyway, and if it’s more complicated, can you still avoid proof construction at runtime?
In my experience complex problems are made up of simple problems, and having the computer do the bookkeeping for you in the simple cases lets you scale up to the complex cases more easily, to the point where it’s no bother to encode the bookkeeping in a way the computer can understand at that level as well.
Or even if the proof disappears at runtime, you might still have to pessimize your code in order to prove it. An example here would be how in C you’d use a T * and an integer of correct size to store a vector of values, but in idris you’d have to define it inductively (like how Vec is defined).
Maybe. I’m sure there are cases like that. At the same time in the cases where the compiler can figure out the optimized representation you get it “for free” everywhere, even in places you might not have thought of it when writing. It’s the classic developer time / CPU time tradeoff, and every day the latter gets cheaper - I was quite surprised to be doing serious matrix algebra on a cluster in Scala (even if delegating to Fortran for the actual optimized implementation), but in business terms it made complete sense - the cluster was expensive but manageable, whereas getting the case study that would show the business actually worked was a matter of the business’s survival, and we could buy more cluster capacity much more quickly than onboarding more developers.
I share your enthusiasm for C++ but Rust’s promises WRT performant memory safety are next-level. That said, I will be waiting for indisputable signs of maturity from Rust before I even look at it. I am thinking another 5 years at least. It takes much more than memory safety to make a language usable in practice. Just imagine how much work needs to be done for a 7-year-old language (which has been stable (v1.0) for only a little more than 1.5 years!) to catch up to one which is 34 years old and has been in heavy use for most of that time.
I also wonder how long it will be before the various sanitizer tools (address, memory, undefined behaviour, integer, etc.) in clang and gcc are able to catch the same errors that the Rust compiler can. Are they even that far off today? Surely there’s no reason they won’t get there? If that happens in the near future it could hurt Rust badly.
Edit: I guess the sanitizers will never be able to do what Rust can because safety is designed into the language.
Edit2: But then again, many of the sanitizer checks are done at runtime which probably opens many language-independent possibilities, although it’s obviously inferior to having it all done at compile time and requires excellent test coverage to be effective.
Where you see “language-independent”, I see “runtime-environment-dependent”. Which is easier to give a formal semantics (so that somebody can establish, once and for all, the way in which bugs can ruled out): a programming language or a runtime environment?
I wasn’t trying to say that the “language-independent possibilities” were better than Rust’s built-in checks, just that it allows you to do more than you can in static analysis of C++. Probably not the best choice of words.
I would think that proving that a language runs correctly on a runtime with particular semantics, and that a specific runtime has those semantics, would generally be easier than proving that the language runs correctly on a specific runtime “directly” - indeed if I had to do the latter I would probably approach it by doing the former. (And I would probably approach both halves with a similar “matryoshka” strategy - prove that high-level semantics hold provided that slightly lower level semantics hold, then that those hold provided some slightly lower level semantics hold, and so on down to the machine semantics.)
I’ve switched back and forth between macOS and various Linuxes for about a decade, and there are a few warnings mostly specific to Ubuntu:
If you want a Mac-like experience then you need a top-of-the-line computer. Just because Ubuntu can be installed on a cheaper PC doesn’t mean you’ll get the same snappiness.
Unity, despite so many improvements, is still really slow. That search you see in the linked gif is about as fast as search gets and the graphics are really heavyweight.
This will go away in maybe 5 years (Snap packages and the Ubuntu Software Centre), but mixing libraries is still an issue on Linux. You may be surprised to know that by downloading a single application you may also be downloading all of KDE and some weirdness can happen with that.
Checking the various Linux sites like Phoronix and OMG Ubuntu is very helpful in knowing about updates on Ubuntu down the pipeline and making sure an update doesn’t crash your particular laptop.
Buy your computer from System76. Don’t even bother with anyone else. System 76 is the only company I trust to provide real, long-term support for computers with Ubuntu pre-installed.
Buy your computer from System76. Don’t even bother with anyone else. System 76 is the only company I trust to provide real, long-term support for computers with Ubuntu pre-installed.
You get rebranded Clevo machines, sometimes not ideal for Linux. You’re also limited to the selection of Clevo machines they have, and they don’t have a good thin-and-light.
From my (somewhat limited understanding) Dell has been providing some real support for the developer branded XPS line and ensuring it continues to support Ubuntu, better than System76 has.
But I’ve also only heard either aggressively average reviews of System76 which basically say “yeah it’s fine, but you could get a laptop from any major manufacturer and slap linux on it and have the same or better experience”, or absolute horror stories.
But also I’ve never really encountered a laptop that I couldn’t “just” install Ubuntu, Debian, or Arch on, and have it function perfectly with minimal issue (minimal being the same amount of tweaking, setup, or modification that an OEM Windows install would require). So I don’t entirely understand the concern everyone has about laptops and linux support.
For the record, I just buy ThinkPads and they “just work” with pretty much every OS I’ve tried; even Windows has no driver trouble.
This will go away in maybe 5 years (Snap packages and the Ubuntu Software Centre), but mixing libraries is still an issue on Linux. You may be surprised to know that by downloading a single application you may also be downloading all of KDE and some weirdness can happen with that.
Perhaps I am missing something fundamental, but if anything snap will make the problem worse because every application bundles its dependencies so you’ll be downloading “all of KDE” every time instead of once. If a package is currently erroneously pulling in too many dependencies then the package maintainer simply made a mistake and would’ve made the same mistake with a snap package.
I guess you will at least avoid the conflicts between the different versions of «all of KDE» that different packages prefer. Maybe adding enough filesystem deduplication and file-by-file content-addressed retrieval could solve the storage problem. Using Nix package manager convinces me that deduplication is not the hard part of the problem.
I guess you will at least avoid the conflicts between the different versions of «all of KDE» that different packages prefer.
I have only used RPM- and dpkg-based distros, but those at least have supported the side-by-side installation of multiple versions of the same package for a very long time.
All the RPM-based and dpkg-based distros I have ever seen explicitly support installing multiple versions of some packages side-by-side, but there is usually a way to get a version dependency conflict with less-polished packages (that are still in the official repository).
I enjoyed the author’s previous series of articles on C++, but I found this one pretty vacuous. I think my only advice to readers of this article would be to make up your own mind about which languages to learn and use, or find some other source to help you make up your mind. You very well might wind up agreeing with the OP:
But it is not true for a lot of people writing Rust, myself included. Don’t take the above as a fact that must be true. Cognitive overheads come in many shapes and sizes, and not all of them are equal for all people.
A better version of this article might have gone out and collected evidence, such as examples of actual work done or experience reports or a real comparison of something. It would have been a lot more work, but it wouldn’t have been vacuous and might have actually helped someone answer the question posed by the OP.
Rust did not special case its “map implementation.” Rust, the language, doesn’t have a map.
Hi burntsushi - sorry you did not like it. I spent months before this article asking Rust developers about their experiences, concentrating on people actually shipping code. I found a lot of frustration among the production programmers, less so among the people who enjoy challenging puzzles. The latter mostly like the constraints and in fact find it rewarding to fit their code within them. I did not write this sentence without making sure it at least reflected the experience of a lot of people.
I would expect an article on the experience reports of production users to have quite a bit of nuance, but your article is mostly written in a binary style without much room for nuance at all. This does not reflect my understanding of reality at all—not just with Rust but with anything. So it’s kind of hard for me to trust that your characterizations are actually useful.
I realize we’re probably at an impasse here and there’s nothing to be done. Personally, I think the style of article you were trying to write is incredibly hard to do so successfully. But there are some pretty glaring errors here, of which lack of nuance and actual evidence are the biggest ones. There’s a lot of certainty expressed in this article on your behalf, which makes me extremely skeptical by nature.
(FWIW, I like Rust. I ship Rust code in production, at both my job and in open source. And I am not a huge fan of puzzles, much to the frustration of my wife, who loves them.)
I just wanted to say I thought your article was excellent and well reasoned. A lot of people here seem to find your points controversial but as someone who programs C++ for food, Go for fun and Rust out of interest I thought your assessment was fair.
Lobsters (and Hacker News) seem to be very favourable to Rust at the moment and that’s fine. Rust has a lot to offer. However my experience has been similar to yours: the Rust community can sometimes be tiresome and Rust itself can involve a lot of “wrestling with the compiler” as Jonathan Turner himself said. Rust also provides some amazing memory safety features which I think are a great contribution so there are pluses and minuses.
Language design is all about trade-offs and I think it’s up to us all to decide what we value in a language. The “one language fits all” evangelists seem to be ignoring that every language has strong points and weak points. There’s no one true language and there never can be since each of the hundreds of language design decisions involved in designing a language sacrifices one benefit in favour of another. It’s all about the trade-offs, and that’s why each language has its place in the world.
I found the article unreasonable because I disagree on two facts: that you can write safe C (and C++), and that you can’t write Rust with fun. Interpreted reasonably (so for example, excluding formally verified C in seL4, etc.), it seems to me people are demonstrably incapable of writing safe C (and C++), and people are demonstrably capable of writing Rust with fun. I am curious about your opinion of these two statements.
I think you’re making a straw man argument here: he never said you can’t have fun with Rust. By changing his statement into an absolute you’ve changed the meaning. What he said was “Rust is not a particularly fun language to use (unless you like puzzles).” That’s obviously a subjective statement of his personal experience so it’s not something you can falsify. And he did say up front “I am very biased towards C++” so it’s not like he was pretending to be impartial or express anything other than his opinion here.
Your other point, “people are demonstrably incapable of writing safe C,” is similarly plagued by absolute phrasing. People have demonstrably used unsafe constructs in Rust and created memory safety bugs, so if we’re living in a world of such absolute statements then you’d have to admit that the exact same statement applies to Rust.
A much more moderate reality is that Rust helps somewhat with one particular class of bugs - which is great. It doesn’t entirely fix the problem because unsafe access is still needed for some things. C++ from C++11 onwards also solves quite a lot (but not all) of the same memory safety issues as long as you choose to avoid the unsafe constructs, just like in Rust.
An alternative statement of “people can choose to write safe Rust by avoiding unsafe constructs” is probably matched these days with “people can choose to write safe C++17 by avoiding unsafe constructs”… And that’s pretty much what any decent C++ shop is doing these days.
It helps with several types of bugs that often lead to crashes or code injections in C. We call the collective result of addressing them “memory safety.” The additional ability to prevent classes of temporal errors… easy-to-create, hard-to-find errors in other languages… without a GC was a major development. Saying “one class” makes it seem like Rust is knocking out one type of bug instead of piles of them that regularly hit C programs written by experienced coders.
Maybe. I’m not familiar with C++17 enough to know. I know C++ was built on top of unsafe language with Rust designed ground-up to be safe-as-possible by default. I caution people to look very carefully for ways to do C++17 unsafely before thinking it’s equivalent to what safe Rust is doing.
I agree wholeheartedly. Not sure who the target survey group was for Rust but I’d be interested to better understand the questions posed.
Having written a pretty large amount of Rust that now runs in production on some pretty big systems, I don’t find I’m “fighting” the compiler. You might fight it a bit at the beginning in the sense that you’re learning a new language and a new way of thinking. This is much like learning to use Haskell. It isn’t a good or bad thing, it’s simply a different thing.
For context for the author - I’ve got 10 years of professional C++ experience at a large software engineering company. Unless you have a considerable amount of legacy C++ to integrate with or an esoteric platform to support, I really don’t see a reason to start a new project in C++. The number of times Rust has saved my bacon in catching a subtle cross-thread variable sharing issue or enforcing some strong requirements around the borrow checker have saved me many hours of debugging.
Here’s one: there’s simply not enough lines of Rust code running in production to convince me to write a big project in it right now. v1.0 was released 3 or 4 years ago; C++ in 1983 or something. I believe you when you tell me Rust solves most memory-safety issues, but there’s a lot more to a language than that. Rust has a lot to prove (and I truly hope that it will, one day).
I got convinced when Rust in Firefox shipped. My use case is Windows GUI application, and if Firefox is okay with Rust, so is my use case. I agree I too would be uncertain if I am doing, say, embedded development.
That’s fair. To flip that, there’s more than enough lines of C++ running in production and plenty I’ve had to debug that convinces me to never write another line again.
People have different levels of comfort for sure. I’m just done with C++.