1. 6

    As someone who doesn’t use Macs anymore, I’m happy to see more cross-platform apps and fewer Mac-native ones :P

    But seriously: why are these people saying that just because there was a flaw in the sandbox, the sandbox should be abandoned completely and all desktops should go back to what’s basically the X11 “security” “model”?!

    the Mac needs the power for apps to shoot the user in the foot

    You can do everything that could shoot the user in the foot without actually shooting them in the foot. It’s not hard to provide access-controlled APIs for all the things. And the Mac is already good at this. They’ve had the Accessibility API for years – manipulating windows and stuff requires the user to grant permission. XPC (if I remember the name correctly) for opening files and stuff is a great idea.

    The unix world is finally catching up, with freedesktop D-Bus Portals (currently used by Flatpak). And D-Bus can also be used… you guessed it – to request screenshots. The window system can display its own UI that makes it clear to the user that a screenshot is being taken.

    1. 3

      I agree. I found this post hard to follow. I don’t know the details of the Mac sandbox technology, but it’s quite a jump from needing to be able to do interesting, complicated things to abandoning a proper security model altogether. I think Android (or at least OnePlus’ OxygenOS version) actually has a good example of this: apps are isolated and sandboxed (I think), but things like the global sharing menu make interaction between apps very easy and seamless. Of course, there are some apps (like Facebook) that don’t use the system menus, but there are going to be some bad actors on any platform.

      1. 1

        I think the core point the author is making is not that sandboxes should be dropped because of this security issue, but because the overlap of “things the sandbox allows apps to do” and “things I want my apps to do” is really small, probably too small to cover many features people want to use.

      1. 6

        I think the faulty assumption is that the happiness of users and developers is more important to the corporate bottom line than full control over the ecosystem.

        Linux distributions have shown for a decade that providing a system for reliable software distribution while retaining full user control works very well.

        Both Microsoft and Apple kept the first part, but dropped the second part. Allowing users to install software not sanctioned by them is a legacy feature that is being removed – slowly, so as not to cause too much uproar from users.

        Compare it to the time when Windows started “phoning home” with XP … today it’s completely accepted that it happens. The same thing will happen with software distributed outside of Microsoft’s/Apple’s sanctioned channels. (It indeed has already happened on their mobile OSes.)

        1. 8

          As a long-time Linux user and believer in the four freedoms, I find it hard to accept that Linux distributions demonstrate “providing a system for reliable software distribution while retaining full user control works very well”. Linux distros seem to work well for enthusiasts and places with dedicated support staff, but we are still at least a century away from the year of Linux on the desktop. Even many developers (who probably have some overlap with the enthusiast community) have chosen Macs with unreliable software distribution like Homebrew and incomplete user control.

          1. 2

            I agree with you that Linux is still far away from the year of Linux on the desktop, but I think it is not related to the way Linux deals with software distribution.

            There are other, bigger issues with Linux that need to be addressed.

            In the end, the biggest impact on adoption would be some game studios releasing their AAA title as a Linux-exclusive. That’s highly unlikely, but I think it illustrates well that much of Linux’s success on the desktop hinges on external factors which are outside the control of users and contributors.

            1. 2

              All the devs I know who use Macs run Linux in some virtualisation setup instead of Homebrew for work. Obviously that’s not a scientific study by any means.

              1. 8

                I’ll be your counter example. Homebrew is a great system, it’s not unreliable at all. I run everything on my Mac when I can, which is pretty much everything except commercial Linux-only vendor software. It all works just as well, and sometimes better, so why bother with the overhead and inconvenience of a VM? Seriously, why would you do that? It’s nonsense.

                1. 4

                  Maybe a VM makes sense if you have very specific wishes. But really, macOS is an excellent UNIX and for most development you won’t notice much difference. Think Go, Java, Python, Ruby work. Millions of developers probably write on macOS and deploy on Linux. I’ve been doing this for a long time and ‘oh this needs a Linux specific exception’ is a rarity.

                  1. 4

                    you won’t notice much difference.

                    Some time ago I was very surprised to find that HFS+ is not case-sensitive (by default). Due to bad letter casing in an import, my script would fail on Linux (production) but worked on the Mac. Took me about 30 minutes to figure this out :)
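
                    Here’s a minimal sketch of the trap (Scala/JVM just because it runs unchanged on both systems; the file names are made up):

                        import java.nio.file.{Files, Paths}

                        // Create a file named "data.csv", then look it up with the wrong casing.
                        Files.createFile(Paths.get("data.csv"))

                        // On macOS' default case-insensitive filesystem the lookup resolves just fine;
                        // on a typical case-sensitive Linux filesystem it does not.
                        println(Files.exists(Paths.get("Data.CSV"))) // true on default macOS, false on Linux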

                    1. 3

                      You can make a case-sensitive partition for your code. And now with APFS, volumes share space dynamically, so you won’t have to deal with choosing how much goes to code vs system.

                      1. 1

                        A case sensitive HFS+ slice on a disk image file is a good solution too.

                      2. 2

                        Have fun checking out a git repo that has Foo and foo in it :)

                        1. 2

                          It was bad when microsoft did it in VB, and it’s bad when apple does it in their filesystem lol.

                      3. 2

                        Yeah definitely. And I’ve found that accommodating two platforms where necessary makes my projects more robust and forces me to hard code less stuff. E.g. using pkg-config instead of yolocoding path literals into the build. When we switched Linux distros at work, all the packages that worked on MacOS and Linux worked great, and the Linux only ones all had to be fixed for the new distro. 🙄

                      4. 2

                        I did it for a while because I dislike the Mac UI a lot but needed to run it for some work things. Running in a full screen VM wasn’t that bad. Running native is better, but virtualization is pretty first class at this point. It was actually convenient in a few ways too. I had to send my Mac in for repair at one point, so I just copied the VM to a new machine and I was ready to run in minutes.

                        1. 3

                          I use an Apple computer as my home machine, and the native Mac app I use is Terminal. That’s it. All other apps are non-Apple and cross-platform.

                          That said, MacOS does a lot of nice things. For example, if you try to unmount a drive, it will tell you which application is still using it, so you can quit that app and unmount. Windows (10) still can’t do that; you have to look in the Event Viewer(!) to find the error message.

                          1. 3

                            In case it’s unclear, non-Native means webapps, not software that doesn’t come preinstalled on your Mac.

                            1. 3

                              It is actually pretty unclear what non-Native here really means. The original HN post is about sandboxed apps (distributed through the App Store) vs non-sandboxed apps distributed via a developer’s own website.

                              Even Gruber doesn’t mention actual non-Native apps until the very last sentence. He just talks/quotes about sandboxing.

                              1. 3

                                The second sentence of the quoted paragraph says:

                                Cocoa-based Mac apps are rapidly being eaten by web apps and Electron pseudo-desktop apps.

                          2. 1

                            full-screen VM high-five

                          3. 1

                            To have an environment closer to production, I guess (or maybe ease of installation – dunno, never used Homebrew). I don’t have to use a Mac anymore so I run a pure distro, but everyone else I know uses virtualisation or containers on their Macs.

                            1. 3

                              Homebrew is really really really easy. I actually like it over a lot of Linux package managers because it supports building the software with different flags as a first-class feature. And it has binaries for the default flag set for fast installs. Installing a package on Linux with alternate build flags sucks hard in anything except portage (Gentoo), and portage is way less usable than brew. It also supports having multiple versions of packages installed, kind of halfway to what nix does. And unlike Debian/CentOS it doesn’t have opinions about what should be “in the distro,” it just has up-to-date packages for everything and lets you pick your own philosophy.

                              The only thing that sucks is OpenSSL ever since Apple removed it from MacOS. Brew packages handle it just fine, but the python package system is blatantly garbage and doesn’t handle it well at all. You sometimes have to pip install with CFLAGS set, or with a package specific env var because python is trash and doesn’t standardize any of this.

                              But even on Linux using python sucks ass, so it’s not a huge disadvantage.

                              1. 1

                                Installing a package on Linux with alternate build flags sucks hard in anything except portage

                                You mention nix in the following sentence, but installing packages with different flags is also something nix does well!

                                1. 1

                                  Yes true, but I don’t want to use NixOS even a little bit. I’m thinking more vs mainstream distro package managers.

                                2. 1

                                  For all its ease, homebrew only works properly if used by a single user who is also an administrator and who only ever installs software through homebrew. And then “works properly” means “installs software in a global location as the current user”.

                                  1. 1

                                    by a single user who is also an administrator

                                    So like a laptop owner?

                                    1. 1

                                      A laptop owner who hasn’t heard that it’s good practice to not have admin privileges on their regular account, maybe.

                                  2. 1

                                    But even on Linux using python sucks ass, so it’s not a huge disadvantage.

                                    Can you elaborate more on this? You create a virtualenv and go from there, everything works.

                                    1. 2

                                      It used to be worse, when mainstream distros would have either 2.4 or 2.6/2.7 and there wasn’t a lot you could do about it. Now if you’re on python 2, pretty much everyone is 2.6/2.7. Because python 2 isn’t being updated. Joy. Ruby has rvm and other tools to install different ruby versions. Java has a tarball distribution that’s easy to run in place. But with python you’re stuck with whatever your distro has pretty much.

                                      And virtualenvs suck ass. Bundler, maven / gradle, etc. all install packages globally and let you exec against arbitrary environments directly (bundle exec, mvn exec, gradle run), without messing with activating and deactivating virtualenvs. Node installs all its modules locally to a directory by default, but at least it automatically picks those up. I know there are janky shell hacks to make virtualenvs automatically activate and deactivate with your current working directory, but come on. Janky shell hacks.

                                      That and pip just sucks. Whenever I have python dependency issues, I just blow away my venv and rebuild it from scratch. The virtualenv melting pot of files that pip dumps into one directory just blatantly breaks a lot of the time. They’re basically write-once. Meanwhile every gem version has its own directory so you can cleanly add, update, and remove gems.

                                      Basically ruby, java, node, etc. all have tooling actually designed to author and deploy real applications. Python never got there for some reason, and still has a ton of second-rate trash. The scientific community doesn’t even bother, they use distributions like Anaconda. And Linux distros that depend on python packages handle the dependencies independently in their native package formats. Ruby gets that too, but the native packages are just… gems. And again, since gems are version-binned, you can still install different versions of that gem for your own use without breaking anything. With Python there is no way to avoid fucking up the system packages without using virtualenvs exclusively.

                                      1. 1

                                        But with python you’re stuck with whatever your distro has pretty much.

                                        I’m afraid you are mistaken: not only do distros ship with 2.7 and 3.5 at the same time (and have for years), it is usually trivial to install a newer version.

                                        let you exec against arbitrary environments directly (bundle exec, mvn exec, gradle run), without messing with activating and deactivating virtualenvs

                                        You can also execute from virtualenvs directly.

                                        Whenever I have python dependency issues, I just blow away my venv and rebuild it from scratch.

                                        I’m not sure how to comment on that :-)

                                        1. 1

                                          it is usually trivial to install newer version

                                          Not my experience? How?

                                          1. 1

                                            Usually you have packages for all python versions available in some repository.

                            2. 2

                              Have they chosen Macs or have they been issued Macs? If I were setting up my development environment today I’d love to go back to Linux, but my employers keep giving me Macs.

                              1. 3

                                Ask for a Linux laptop. We provide both.

                                I personally keep going Mac because I want things like wifi, decent power management, and not having to carefully construct a house of cards special snowflake desktop environment to get a useable workspace.

                                If I used a desktop computer with statically affixed monitors and an Ethernet connection, I’d consider Linux. But Macs are still the premier Linux laptop.

                                1. 1

                                  At my workplace every employee is given a Linux desktop, and they have to make a special request to get a Mac or Windows laptop (which would be in addition to their Linux desktop).

                              2. 3

                                Let’s be clear though, what this author is advocating is much much worse from an individual liberty perspective than what Microsoft does today.

                                1. 4

                                  Do you remember when we all thought Microsoft were evil for bundling their browser and media player? Those were good times.

                              1. 7

                                Now that we’ve passed $1K - can we beat the $5K?

                                1. 5

                                  If we’re trying to shoot for 5k we should at least let Maine know.

                                  1. 5

                                    As a European, I honestly prefer us to have it :)

                                    1. 4

                                         I’d say we go for it, and offer to let them take over the gold spot for a 10k donation to Unicode instead. :-)

                                  1. 4

                                    I think there are a few different things at play:

                                    • Some people consider working on open source some kind of social happening, others work on technical stuff precisely because they are uninterested in it.
                                     • In many places, what you leave out matters much more than what you put in. Some people tend to take it personally if their feature gets rejected.
                                     • The people who are capable of and interested in ensuring that contributions live up to the requirements are few and far between, and it’s generally a thankless job.
                                    • It takes a lot of effort to get things right, but no effort to get things wrong.
                                    • More often than not the person who wants to add a feature is nowhere to be found when it comes to maintaining it.
                                     • Many maintainers who make contributors redo their patches over and over are not interested in wielding power, but believe that not inflicting pain on actual users (if a feature is shipped in a broken state) is more important than the hurt feelings of a contributor (who felt that the proposed feature was good enough for their purposes).
                                     • While there are topics–like performance–where changes can easily be measured and accepted/rejected based on that, there are equally important topics–like user experience–where it is hard to quantify the impact of a change. “This change makes the lives of every user worse”–even if perfectly accurate–tends to make the person who cares about user experience look bad, not the person proposing the change.

                                     That’s my experience from working more than half a decade on Scala. It might provide some insight into why the project is hemorrhaging contributors who cared about quality and user experience (apart from the harassment issues).

                                    1. 2

                                       It is unclear which project you are referring to when speaking about hemorrhaging contributors – is it Linux or Scala?

                                      1. 1

                                        Scala.

                                    1. 1

                                       Does anyone know whether the changes are sufficient to get vertical tabs into a usable state?

                                      1. 3

                                        I’m fine with those links, but it should be the responsibility of the submitter to supply the necessary context.

                                        1. 3

                                           The real problem usually starts when languages that want to establish null safety have to interoperate with an ecosystem where the distinction between nullable and non-nullable doesn’t exist.

                                          Newer languages often have the approach of separating values that can be null from values that cannot be null, but this fails to work when those languages have to interoperate with code where it is unknown whether something can be null or not.

                                           This is how languages would like to deal with null:

                                                 Nullability
                                                /           \
                                               /             \
                                          Not Nullable    Nullable
                                          

                                          But given an ecosystem that doesn’t make this distinction, you end up with something like:

                                                            Nullability
                                                           /          \
                                                          /            \
                                              Known Nullability     Unknown Nullability
                                                /           \
                                               /             \
                                          Not Nullable    Nullable
                                          

                                           It looks enticing for language designers to try to merge the values with unknown nullability into one of the existing categories, but both approaches – treating values of unknown nullability as nullable, or as non-nullable – have substantial problems.
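
                                           A minimal Scala sketch of the dilemma – LegacyJava is a made-up stand-in for un-annotated legacy code, not a real API:

                                               // Stand-in for legacy code without nullability information: may or may not return null.
                                               object LegacyJava {
                                                 def lookup(key: String): String = if (key == "key") "value" else null
                                               }

                                               // Option 1: treat unknown nullability as nullable.
                                               // Sound, but every caller pays the Option tax, even for calls that never return null.
                                               val cautious: Option[String] = Option(LegacyJava.lookup("key"))

                                               // Option 2: treat unknown nullability as non-nullable.
                                               // The type system stays quiet, and a stray null only surfaces later
                                               // as a NullPointerException far away from the call site.
                                               val trusting: String = LegacyJava.lookup("missing")
                                               val boom = trusting.length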

                                          1. 1

                                            Isn’t this the same problem as existentials in general, which any Java-interop language already needs to solve? Unknown nullability is just N forSome { type N[X] <: Nullability[X] } or however you want to write it.

                                          1. 4

                                            it’s much easier to add features slowly based on need than it is to determine all of the useful behaviors before starting development, and it’s much easier to test a small number of base features before building on top of them than to test a large complex language all at once.

                                            I don’t agree that adding features one by one improves anything about the language design. One huge challenge is that each feature interacts with most of the other ones and you need to consider each combination with the full set anyways.

                                             Let us assume you start with a small number of base features A, B, and C. You determine all useful behaviors, test the features and their interactions, and you tune them to a sweet spot of language design. Now you add feature D. Assume it interacts with B and C. To find the new sweet spot, you have to tune B and C. This means all the time spent tuning for the sweet spot A-B-C was wasted once you introduced D. Ok, maybe not all of it, because you probably learned something on the way. Still, the overall process does not look efficient to me.

                                             These thoughts are the reason why I think it is a bad idea that the Go designers avoid adding Generics. I believe it is unavoidable to add them at some point, and then a lot of tuning is invalidated.

                                             On the other hand, it seems unavoidable. I do not know of any language that did not grow further after 1.0. Simple languages (Lisp, SQL, SML) get extended when they get in contact with the real world.

                                            1. 1

                                              In my experience having a slightly larger language, and trimming down features after you have gained some insights into what has worked and what hasn’t is one of the best approaches to adopt.

                                               This of course requires the ability to carefully manage deprecation and migration, and not having made crazy promises about compatibility like some languages of the ’90s did. Looking at many more recent languages, this has substantially changed though, with a stronger focus on adoption over existing users.

                                            1. 5

                                              I am not going to buy intel s̵t̵u̵f̵f̵ shit again. I have been talking to my friends and relatives about it too. This is just too bad for any company to prevail.

                                              1. 2

                                                That’s a poor attitude to have. Every CPU has bugs. No modern fast CPU is going to be fully immune to all of these bugs. We made mistakes in the design of modern CPUs because we wanted speed.

                                                1. 5

                                                   Poor attitude? My comment, much like the post, refers to the company. This is a big corp with a highly hierarchical structure and a thorough process for getting their products to market. Intel was doing it on purpose, make no mistake. Judge the poor attitude where it actually lies.

                                                  “Some VW executives probably wish a problem with their brake controller software has been discovered at the same time,”

                                                2. 1

                                                  Me too. Intel really needs a credible competitor.

                                                  I delayed all my hardware purchases in the hope that the next CPU designs address these and similar bugs, that the mitigations for Rowhammer work reliably in RAM chips and that the availability of GPUs improves. After that AMD will get my money.

                                                1. 22

                                                  Virtually all of them

                                                  Dumping stuff in $HOME. Really?

                                                   There is a dedicated folder, .cache (or $XDG_CACHE_HOME), and pretty much all tools choose to ignore it:

                                                  Maven, Ivy, Cabal, SBT, Cargo, Gem, Bundler (congrats on adding two dirs to my $HOME), …

                                                   It’s come to the point where half of my dot folders are things dumped there by various build tools.

                                                  As a consequence, I have made my $HOME dir read-only, and aggressively file bugs against any applications that fail running.
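
                                                   For what it’s worth, honoring the spec takes only a handful of lines – a minimal sketch in Scala (the tool name is made up):

                                                       import java.nio.file.{Path, Paths}

                                                       // Resolve the cache directory per the XDG Base Directory spec:
                                                       // use $XDG_CACHE_HOME if set and non-empty, otherwise fall back to ~/.cache.
                                                       def cacheDir(toolName: String): Path = {
                                                         val base = sys.env.get("XDG_CACHE_HOME")
                                                           .filter(_.nonEmpty)
                                                           .map(Paths.get(_))
                                                           .getOrElse(Paths.get(sys.props("user.home"), ".cache"))
                                                         base.resolve(toolName)
                                                       }

                                                       println(cacheDir("my-build-tool")) // e.g. /home/me/.cache/my-build-tool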

                                                  1. 3

                                                    Cargo is working on this! I agree that it’s annoying that they don’t follow XDG. However, I don’t think that XDG is really much of a thing outside of freedesktop-esque operating systems - pretty much anything that isn’t Linux or Mach/Hurd. Obviously Windows doesn’t fall into this, but I can’t comment on macOS or any of the BSDs because I don’t use them.

                                                    1. 3

                                                      The point is to follow the applicable standards of the respective operating system, not roll your own thing that is wrong on all of them.

                                                      There are well-published standards for Windows, macOS and Linux that need to be followed by library and application authors.

                                                      Heck, I wrote a Rust library that provides the right paths to libraries/applications in a day or two. It’s not hard, developers just have to care about it and respect their users. Sadly, more often than not, their program is the most important thing, special, and doesn’t need to follow any platform rules.

                                                    2. 2

                                                      I think the java tools are older than the XDG spec, so they cannot be blamed for not respecting something that was invented later. Now they have to be backwards compatible of course..

                                                      1. 3

                                                        They can be blamed at the point when XDG came into existence, and much earlier for their lack of respect of Windows and macOS rules.

                                                        It’s not hard to implement a solution that seamlessly migrates to the standard without disturbing existing usages.

                                                      2. 1

                                                        ls -a ~/ on my work machine = instant anxiety pangs. a terrible practice that unfortunately seems to be sustained out of pure habit.

                                                        1. 2

                                                          That’s why I made it read-only. There will never be more dot files in my $HOME than I have now, only fewer.

                                                          This month I deleted .nano and .coursier. Another two gone. :-)

                                                      1. 33

                                                         Side topic, but this may explain an odd story from two weeks ago about how the Intel CEO sold all the shares he could. If he doesn’t have rock-solid documentation that the trade was planned before he learned this, that’s probably insider trading. (Hat tip to @goodger for prompting me to look up the SEC rule in the chat.)

                                                        ETA: here’s the form 4 he filed. I’ve got to step out the door, but if anyone can figure out if this was reported to Intel before Nov 29 that would be interesting.

                                                        1. 18

                                                          From the project zero blog post:

                                                          We reported this issue to Intel, AMD and ARM on 2017-06-01

                                                          1. 6

                                                            Good find. Looks like some press has it now, too. And a yc news commenter notes it’s not in their 10-Q, so that’s probably a couple counts in an indictment and a shareholder lawsuit.

                                                          2. 1

                                                            Even if he knew, I think it matters whether this is a recurring event. If he always sells his shares at the end of the year, it would be insane to demand that he doesn’t do it.

                                                            Otherwise people could just start shorting as soon as they see an executive not selling stock, because they can infer now that there is some bad news incoming.

                                                            1. 9

                                                              It’s public information, there’s no need to speculate. He doesn’t.

                                                              1. 1

                                                                Matt Levine is relaying Intel comments that are the opposite of what you’re saying.

                                                                1. 3

                                                                  That article is pretty misleading. It’s true that the November sale was “pursuant to a pre-arranged stock sale plan with an automated sale schedule,” but that stock sale plan was pre-arranged only in October, months after Google had notified Intel of these vulnerabilities.

                                                                  1. 3

                                                                    I thought these all had to be disclosed on Form 4s. Maybe there’s another reporting vehicle I’m unaware of, but “Krzanich’s plan seems to involve getting stock grants at the beginning of each year and then selling as much as he can in the fourth quarter, which he has done consistently for a few years.” is not an accurate description of the record in the linked form 4s. His sales happen in every quarter and this is the only time he’s sold down to Intel’s minimum (eyeballing rather than making a running total, but it seems clear).

                                                            1. 3

                                                              Happy new year! My resolution is largely a single thing:

                                                              Quit work on open-source projects.

                                                              1. 2

                                                                What is your reasoning for this?

                                                                1. 2

                                                                  One shouldn’t have to consider the possibility of getting swatted for contributing to an open-source project.

                                                                  1. 2

                                                                    Struggling to understand how it is that you believe this is a likely outcome.

                                                                    1. 3

                                                                       I came to the conclusion that it is unlikely, but given that individuals from a community

                                                                      • harassed my university
                                                                      • tried to sabotage my thesis
                                                                      • doxxed me
                                                                      • published slanderous claims about my person with my full name attached to it on the internet

                                                                      it’s reasonable to think about what the next escalation steps will be.

                                                                      In the end, I found myself thinking about steps to maintain my safety. That’s probably a good time to leave abusive and harassing environments.

                                                                      1. 1

                                                                        That’s horrible! What caused such a psychotic escalation?

                                                                        1. 2
                                                                          • I offered to work on improving the documentation/website of a programming language, especially for people new to the language.
                                                                          • I got the response from leadership that it’s not needed, because “there are no beginners out there, everyone already knows Scala.”
                                                                          • I then announced that I would spend no further effort on it, based on this response, and similar ones.
                                                                          • Leadership then acted like they never said it.

                                                                          That’s basically when the harassment started.

                                                                          1. 2

                                                                            I got the response from leadership that it’s not needed, because “there are no beginners out there, everyone already knows Scala.”

                                                                            With an attitude like that, there certainly will be no beginners.

                                                                             I believe you’re making a mistake in conflating this with all open source projects, but I can understand how the PTSD can cause that.

                                                                            1. 1

                                                                               Yes, that’s my impression as well.

                                                                              Documentation shapes a community just as much as a community shapes documentation:

                                                                              If your documentation is poor, you will only attract users (and contributors) which are fine with poor documentation. And with users (and contributors) that are fine with poor documentation, improving documentation will never be a priority.

                                                              1. 1

                                                                Published a library that transparently handles standard directories for applications (config dir, cache dir, etc.) across Linux, Windows and macOS. Implemented the library in Java and Rust.

                                                                Wrote some articles on language design, and lessons learned from mistakes.

                                                                1. 3

                                                                  What are they doing differently compared to Microsoft, which tried to do the same thing and failed?

                                                                  1. 3

                                                                    Basically, they make developer tools and a programming language is the ultimate lock-in in that market.

                                                                     I don’t buy the argument that languages are complementary to IDEs. People don’t pay directly for languages. They pay for services around the language, which is what JetBrains sells already. OTOH, hardware and OS are complementary products. If people are tempted to switch to Macs and develop using Swift instead of Java, that might take some money away from IDE vendors. Notice how Apple gives you a “free” IDE and is pushing Swift on the server.

                                                                     He argues that developers might think Kotlin is beneficial but JetBrains are smarter than that. But I think from JetBrains’ point of view, it doesn’t matter if Kotlin (or its competitors) have real advantages over Java or not. All that matters is what their customers think.

                                                                     So, yes, they were worried that their IDE business would go down the drain if everyone switched away from Java. They realized how vulnerable they are to shifts in programming language popularity. Their options were:

                                                                    1. Bet on one of the existing languages and be at the mercy of a young and small player
                                                                     2. Support all of the languages and remove your dependency on a specific language (which they might eventually choose to do)
                                                                    3. Create your own language and try to have more control over your destiny (which is what they did)
                                                                    1. 1

                                                                      They have been doing number 2 for basically half a decade already. There is pretty much no popular language they don’t support: C, C++, C#, F#, Groovy, Go, Java, JavaScript, Objective-C, PHP, Python, Ruby, Rust, Scala, SQL, TypeScript, VisualBasic, … plus dozens of language plugins made by language communities.

                                                                      1. 1

                                                                         Different languages need different approaches, but IDEA was initially designed for Java. It will work for similar languages (C#), but its killer feature — autocomplete after typing . — will not work for most dynamic languages (js, python, ruby). I tried PyCharm and it works mostly like a dumb editor (autocomplete sometimes works, but very unreliably), yet it was quite slow for a dumb editor (recent versions are probably much faster, especially compared to Electron-based IDEs).

                                                                         That’s also the reason why Microsoft created TypeScript — not for the type checker to catch your bugs, but for autocomplete in Visual Studio. JetBrains designed Kotlin to be statically analyzable.

                                                                         Many other languages are not especially statically analyzable, but there are opportunities for other IDE features for them. For example, Java completely lacks a REPL, but for Clojure the REPL is the killer feature. It’s very convenient to write code while the editor is connected to an instance of the program, updating its code on the fly and evaluating expressions. CIDER, the Emacs tooling for Clojure, even has autocomplete based on runtime information from the live process, as opposed to static analysis of source code. Smalltalk IDEs use both static analysis and runtime information AFAIK (I don’t know the details).

                                                                        And only Java (maybe C++ too) needs “code generation” feature (i.e. creation of getters, setters and hashCode).

                                                                        So, properly supporting multiple languages might be hard. One-size-fits-all approach might be “support java-like languages fully but only syntax highlighting for others”. Microsoft created Language Server Protocol which is cool, but again it’s designed for Java-like languages (C#, Typescript).

                                                                    1. 26

                                                                       Another item for the list of stupid, self-sabotaging ideas from Mozilla.

                                                                      • Pocket
                                                                      • Cliqz
                                                                      • Looking Glass
                                                                      • (Anything else I missed?)

                                                                       That said, I’m still a Firefox user, because after all, I still trust the Mozilla Foundation and Community more than the other browser vendors.

                                                                      1. 10

                                                                         Mozilla has its missteps; on the other hand, they are still better than the other browser vendors out there, and I haven’t seen a viable Firefox fork that works for me. Plus it seems the Looking Glass addon was inert unless specifically enabled by the user, so I don’t see the harm tbh.

                                                                         “At least [they are] the prettiest pile of shit.” ~ Some quote I heard somewhere

                                                                        1. 3

                                                                          I would add Mozilla Persona to this list, which was a great idea, but was mismanaged and shut down by Mozilla before it could do anything good.

                                                                          I pretty much lost my faith in Mozilla having any idea what it is doing at that point.

                                                                          1. 5

                                                                            Original Pocket introduction was mishandled, but since Mozilla owns and operates it now, integration with Firefox makes sense.

                                                                            1. 7

                                                                              is it open source now?

                                                                              1. 6

                                                                                My understanding is, it’s not yet. It’s being worked on. I have no idea what kind of work it takes, but the intention is that it will be fully open sourced.

                                                                            2. 4

                                                                              You missed ‘Quantum.’ (The one where they broke their extension API for the sake of alleged performance).

                                                                              1. 45

                                                                                That one I actually like; the performance is much better, and the memory leaks much fewer. Pre-quantum I was on the verge of switching to Chrome because of the performance gap and leaks.

                                                                                1. 11

                                                                                  I agree. The browser engine is noticeably better - if only the software around it were also on the same level. Some lightweight browser like surf or midori should adopt it, instead of WebKit.

                                                                                  1. 1

                                                                                    WebKit is easy to adopt because WebKitGTK and QtWebKit (or whatever it’s called) are well supported and easy to use. And Chromium has CEF. (IIRC Servo is also implementing CEF.)

                                                                                    I don’t think current Gecko is easily embeddable into whatever.

                                                                                    Back in the day Camino on Mac OS was a Gecko browser with a custom Cocoa UI, but doing that today would be way too hard.

                                                                                    1. 2

                                                                                       I should clarify, I was talking about Servo. I don’t really think there would be a point in using Gecko, since it will probably turn into a legacy project.

                                                                                      1. 2

                                                                                        It seems the other way to me? What they’re doing instead is slowly retrofitting pieces of Servo into Gecko piecemeal. (or at least, some kind of Rust equivalent to the C/C++/JS code being replaced) Servo would then be dead or explicitly turned into some staging ground for Gecko.

                                                                                2. 20

                                                                                   I will go beyond alleging a performance improvement, I will attest to it. Surprisingly enough, the improvement includes Google properties such as Gmail and YouTube. They are both more responsive in Firefox now than in Chromium or Chrome.
                                                                                   On the extension side, I do not use a large number. Those which I do use, however, still function.

                                                                                  I freely admit that the plural of anecdote is not “data”, but I would feel remiss not to share how impressed I am with Quantum. Pocket has always annoyed me, so I certainly do not see Mozilla’s actions as unimpeachable and am only giving them credit for Quantum because I feel they deserve it.

                                                                                  1. 8

                                                                                     Based on this, Quantum was a balanced update where the team had to sacrifice the old extension API. Also, it’s not that they’ve removed extensions completely. (And no, I’m not talking about you, Looking Glass.)

                                                                                  2. 8

                                                                                    Quantum is great. uBlock Origin and uMatrix now work on Firefox for Android just as well as on desktop.

                                                                                    1. 3

                                                                                       uBlock Origin worked on Firefox for Android before Quantum, no?

                                                                                      1. 1

                                                                                        IIRC it worked but the UI was somewhat different. Now uMatrix is also available, and both extensions have UI that looks practically identical to the desktop versions.

                                                                                1. 20

                                                                                   A bit unrelated to the linked content, but “Low Hanging Fruit of Programming Language Design” really makes me wish for a place/collection/book that documents the lessons (both good and bad) learned in the last few decades of language design.

                                                                                  Language design currently feels like it has plateaued and isn’t really moving forward anymore except in a few specific attributes the language author really cares about.

                                                                                   Having some exhaustive documentation that holds potential design approaches for individual features, possibly with a verdict based on past experiences, would help prevent language designers from repeating the same mistakes over and over.

                                                                                  Entries could include:

                                                                                  • The various, bad approaches to operators and operator “overloading”.
                                                                                  • How not to mess up equality and identity.
                                                                                  • Things that don’t belong on a language’s top type.
                                                                                   • Implicit numeric conversions will never work in a satisfying fashion (see the sketch after this list).
                                                                                  • Picking <> for Generics is wrong and broken, as evidenced by all languages that tried it.
                                                                                  • Why ident: Type is way better than Type ident.
                                                                                  • Typeclass coherency is fundamentally anti-modular and doesn’t scale.
                                                                                  • Upper and lower bounds are a – rather uninteresting – subset of context bounds/typeclasses.
                                                                                   • There is no reason to require “builtin/primitive” types to be written differently from user-defined types.
                                                                                  • How to name things in a predictable way.
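
                                                                                   To make the point about implicit numeric conversions concrete, here is a small Scala sketch (the values are arbitrary; Scala simply because it inherits Java-style widening):

                                                                                       // Int -> Float is an implicit "widening" conversion, but Float has only 24 bits
                                                                                       // of mantissa, so large integers get silently rounded on the way.
                                                                                       val i: Int   = 123456789
                                                                                       val f: Float = i
                                                                                       println(f)            // 1.23456792E8 – not the value we started with
                                                                                       println(f.toInt == i) // false
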
                                                                                  1. 13

                                                                                    Another good source of lessons would be what Gilad Bracha calls shadow languages. Essentially, any time you find a simple mechanism getting extended with more and more features, until it essentially becomes a (crappy) programming language in its own right. The obvious thing to try in these situations is to throw out that standalone system and instead provide some mechanism in the ‘real’ language for programs to calculate these things in a “first class” way.

                                                                                    Bracha gives examples of ML modules, where e.g. functors are just (crappy) functions, Polymer (which I don’t know anything about) and imports (e.g. conditional imports, renaming, etc.).

                                                                                    Some more examples I can think of:

                                                                                     • Haskell’s typeclass resolution mechanism is basically a crappy logic language, which can be replaced by normal function arguments (a small sketch follows this list). Maybe a better generalisation would be implicit arguments (as found in Agda, for example)? One way to programmatically search for appropriate arguments is to use “proof tactics”.
                                                                                    • Haskell types have gained datakinds, type families, etc. which are basically just a (crappy) functional programming language. When types are first-class values (like in Coq, Agda, Idris, etc.) we can use normal datatypes, functions, etc. instead.
                                                                                    • Build/packaging systems. Things like Cabal, NPM, PyPI, RPM, Deb, etc. These are usually declarative, with elaborate dependency solvers. As well as being rather limited in what we can express, some systems require all participants to maintain a certain level of vigilence, to prevent things like ‘malicious solutions’ (e.g. malicious packages claiming to implement veryCommonDependency version 99999999999). I’ve seen a couple of nice approaches to this problem: one is that of Nix/Guix, which provide full functional programming languages for calculating packages and their dependencies (I’ve even written a Nix function which calls out to Tinc and Cabal for solving Haskell package dependencies!); the other is Racket’s old PLaneT packaging system, where programs write their dependencies in-line, and they’re fetched as needed. Unfortunately this latter system is now deprecated in favour of raco, which is just another NPM-alike :(
                                                                                    • Multi-stage programming seems like a language-level concept that could subsume a bunch of existing pain points, like optimisation, configuration, or even packaging and deployment. Why bother with a mountain of flaky bash scripts to orchestrate compilers, build tools, test suites, etc. when we can selectively compile or interpret sub-expressions from the same language? The recent talk “programming should eat itself” looks like the start of some really exciting possibilities in this area!
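
                                                                                     A small Scala sketch of that first point – the same “instance”, once resolved by the compiler and once passed as a plain argument (standard library only, nothing project-specific):

                                                                                         // The compiler "solves" for an Ordering[Int] behind the scenes…
                                                                                         def sortedImplicitly[A](xs: List[A])(implicit ord: Ordering[A]): List[A] = xs.sorted

                                                                                         // …or the caller just hands one over like any other value.
                                                                                         def sortedExplicitly[A](xs: List[A], ord: Ordering[A]): List[A] = xs.sorted(ord)

                                                                                         sortedImplicitly(List(3, 1, 2))                        // List(1, 2, 3)
                                                                                         sortedExplicitly(List(3, 1, 2), Ordering[Int].reverse) // List(3, 2, 1)
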
                                                                                    1. 4

                                                                                      I tried to write something like that, but this is so subjective.

                                                                                      This is probably not a big problem in practice. If you design and publish a language, many helpful trolls will come and tell you all mistakes you made. 😉

                                                                                      1. 6

                                                                                        FWIW, I compiled a list of articles on some of the topics I mentioned above:

                                                                                        Maybe it is interesting for you.

                                                                                       The articles’ conclusions are based on an exhaustive survey of more than a dozen popular languages as well as many minor, but influential ones.

                                                                                        1. 2

                                                                                          Thanks, a few of my writings:

                                                                                          Is there a way to get a feed of your articles? https://soc.github.io/feed/ is empty.

                                                                                          1. 1

                                                                                            Looks like we agree on pretty much everything in the first two articles. :-)

                                                                                            I think there are only bad options for dealing with operators (while operator overloading is pretty much broken altogether), but some approaches are less bad than others.

                                                                                            My preference is to pick whatever is simplest to document, specify, and implement, and to emphasize to users how unimportant operators are, so they don’t go overboard with them.

                                                                                            I believe they get way too much attention given how unimportant they are in the grand scheme of things.

                                                                                            Is there a way to get a feed of your articles? https://soc.github.io/feed/ is empty.

                                                                                            I’ll look into that, wasn’t even aware that I had a feed. :-)

                                                                                          2. 2

                                                                                            For equality, I’m not sure if there should be an equals method in Object (or Any or AnyRef).

                                                                                            Equality is not a member of a type. (Unfortunately, in Java everything must live inside a class.) Equality often depends on context. For example, when are two URLs equal? Sometimes you want to compare the strings. Sometimes the strings without the fragment identifier. Sometimes you want to make a request and compare what gets returned for each URL.

                                                                                            Sometimes we might prefer not to provide equals at all. For example, does it even make sense for two locks to be equal?

                                                                                            The argument in favour of Object.equals is convenience: for many types there is a reasonable default, and manually specifying equality for every hash map instantiation is tedious.
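
                                                                                            To make the “context” point concrete, here is a small sketch of my own in Rust (the thread is about Java/Scala, but the idea is language-independent; the helper and type names below are made up for illustration):

                                                                                            fn strip_fragment(u: &str) -> &str {
                                                                                                u.split('#').next().unwrap()
                                                                                            }

                                                                                            // One possible "context": URLs compared while ignoring the fragment.
                                                                                            // Nothing forces this to be *the* equality of a URL type.
                                                                                            fn eq_ignoring_fragment(a: &str, b: &str) -> bool {
                                                                                                strip_fragment(a) == strip_fragment(b)
                                                                                            }

                                                                                            // For a hash map, the chosen context can live in the key type
                                                                                            // instead of in a single global equals:
                                                                                            #[derive(PartialEq, Eq, Hash)]
                                                                                            struct UrlIgnoringFragment(String);

                                                                                            impl UrlIgnoringFragment {
                                                                                                fn new(url: &str) -> Self {
                                                                                                    UrlIgnoringFragment(strip_fragment(url).to_owned())
                                                                                                }
                                                                                            }

                                                                                            fn main() {
                                                                                                use std::collections::HashMap;
                                                                                                assert!(eq_ignoring_fragment("https://example.org/a#top", "https://example.org/a"));
                                                                                                let mut hits: HashMap<UrlIgnoringFragment, u32> = HashMap::new();
                                                                                                *hits.entry(UrlIgnoringFragment::new("https://example.org/a#top")).or_insert(0) += 1;
                                                                                                *hits.entry(UrlIgnoringFragment::new("https://example.org/a")).or_insert(0) += 1;
                                                                                                assert_eq!(hits.len(), 1); // both spellings land in the same bucket
                                                                                            }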

                                                                                            1. 1

                                                                                              For equality, I’m not sure if there should be an equals method in Object (or Any or AnyRef).

                                                                                              I agree. The considerations don’t rely on it, except as a tool for demonstration. If a language isn’t forced (e.g. by the runtime, host language, or library ecosystem) to have it on its top type, it makes sense to leave it out.

                                                                                            2. 2

                                                                                              Why are [] better than <> for generics

                                                                                              I feel like this should be a two-part argument:

                                                                                              • why re-using <> for generics is bad
                                                                                              • why [] are better used for generics than for indexing

                                                                                              Your article is pretty convincing on the first part, but curiously silent on the second. What does Scala use for indexing into ordered collections? Or does it avoid them altogether?

                                                                                              1. 4

                                                                                                Scala uses () for indexing into ordered collections, à la VBScript.

                                                                                                As a language developer, I’ve not implemented generics, so I’ve yet to develop strong feelings about <> in that sense.
                                                                                                As a language user, <> for generics has never tripped me up. That has mostly been in C# and Java, however, and I think both languages keep the places where <> (generics) and < or > (comparisons) show up mostly distinct. I’d hardly call it a disastrous choice on this side of things, even if it took some extra work on the part of the language teams.

                                                                                                1. 2

                                                                                                  I do know that <> ends up looking painfully ugly, at least in Rust. It also makes it harder to find a nice syntax for const generics, and it is responsible for the ugly turbofish operator.
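
                                                                                                  For readers who haven’t met it: the “turbofish” is Rust’s ::<> syntax for supplying generic arguments in expression position, needed precisely because bare <> there would parse as comparison operators. A tiny example of my own:

                                                                                                  fn main() {
                                                                                                      let words = ["1", "2", "3"];
                                                                                                      // In expression position the generic argument needs `::<...>`,
                                                                                                      // because `collect<Vec<i32>>(...)` would parse as comparisons.
                                                                                                      let numbers = words.iter()
                                                                                                          .map(|w| w.parse::<i32>().unwrap())
                                                                                                          .collect::<Vec<i32>>();
                                                                                                      assert_eq!(numbers, vec![1, 2, 3]);
                                                                                                  }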

                                                                                                  1. 1

                                                                                                    I would suppose that is a bit more of a matter of taste, but I’m unsure that [] would be any better on that front, unless ::<> would be replaced by something other than ::[]. Which might be possible if Rust didn’t use [] for indexing. Given the tiny bit of Rust I’ve written, giving up [] for indexing would almost make sense, since you’re often using methods for collection access as it is. I’d have to sit down with the Rust grammar to be sure.

                                                                                                    1. 2

                                                                                                      The important thing to keep in mind is that <> weren’t chosen for any kind of reason except “these are literally the only symbols on the keyboard which kind of look like braces that we can retrofit into an existing language.”

                                                                                                      If you start from a blank slate and ask “what is the best we can do, making the rules of the language simple and easy to understand” the result will be very different.

                                                                                                      Consider two approaches:

                                                                                                      Approach A

                                                                                                      • () brackets: used for method calls, except where they are not: Array construction, array indexing, etc.
                                                                                                      • [] brackets: used for array construction and array indexing
                                                                                                      • {} brackets: used for array construction, method bodies, initializers, …
                                                                                                      • <> “brackets”: used as an operator for comparisons, used for generics

                                                                                                      Approach B

                                                                                                      • () brackets: used for terms, grouping, marks a single expression
                                                                                                      • [] brackets: used for types
                                                                                                      • {} brackets: used for refining a term/type, marks a sequence of statements
                                                                                                      • <> “brackets”: not used, because they are not brackets

                                                                                                      I think no one would say “let’s mix things up and assign brackets to various use-cases randomly” and pick approach A over approach B.

                                                                                                      And yes, Rust would be simpler and easier to read if it had kept [] for generics instead of migrating to <>.

                                                                                                      1. 2

                                                                                                        unless ::<> would be replaced by something other than ::[].

                                                                                                        That’s exactly what I’m thinking. It’s subjective, but I also find that <> makes polymorphic types look claustrophobic, whereas [] feels more ‘airy’ and open, due to their shapes.

                                                                                                        Here’s an example from today:

                                                                                                        fn struct_parser<N>(fields: &[Field<N, Rc<binary::Type<N>>>]) -> Rc<ParseExpr<N>> {
                                                                                                        

                                                                                                        vs.

                                                                                                        fn struct_parser[N](fields: &Slice[Field[N, Rc[binary::Type[N]]]]) -> Rc[ParseExpr[N]] {
                                                                                                        

                                                                                                        Ideally I would prefer that the same syntax be used for both value level and type level abstraction and application, but I’ll save that debate for another time…

                                                                                              2. 3

                                                                                                Even a list of how different language designs approach the same problem, with respect and without comparing them, would be a huge improvement over what we have now. Should be easier to compile than a “here are the lessons” document since it’s less subjective.

                                                                                                1. 2

                                                                                                  To compare languages, I’ve used:

                                                                                                   And although I haven’t used it very much, Rosetta Code does what you want:

                                                                                                  Of course these are superficial, but surprisingly useful. I draw mostly on the languages I really know, but it’s nice to have an awareness of others. I know about 5 languages really well (written thousands of lines of code in each), 5 more languages a little bit (Lua, Go, etc.), and then there are maybe 10 languages that I don’t know which are “interesting” (Swift, Clojure, etc.)

                                                                                                   I think that posting to lobste.rs or /r/ProgrammingLanguages can help with getting feedback on those. Here is one thread I posted, summarizing a thread from HN:

                                                                                                  https://www.reddit.com/r/ProgrammingLanguages/comments/7e32j8/span_slices_string_view_stringpiece_etc/

                                                                                                   I don’t think there is much hope of getting all the information you want in one place, because there is so much information out there, and some languages, like Swift, are new and rapidly developing.

                                                                                                   Personally, I maintain a wiki that is my personal “delicious” (bookmark manager), although I have started to move some of it to the Oil wiki [1].

                                                                                                  [1] https://github.com/oilshell/oil/wiki

                                                                                                  1. 2

                                                                                                    FWIW, I compiled a list of articles on some of the topics I mentioned above:

                                                                                                    Maybe it is interesting for you.

                                                                                                     The articles’ conclusions are based on an exhaustive survey of more than a dozen popular languages, as well as many minor but influential ones.

                                                                                                    (Sorry for the double post.)

                                                                                                    1. 1

                                                                                                       I can also recommend /r/Compilers. At least, I had a nice discussion there recently.

                                                                                                2. 2

                                                                                                   The best things I’ve found on this are interviews with language designers, but they are scattered.

                                                                                                  1. 2

                                                                                                    That would be nice, but I see several problems:

                                                                                                    • Language design depends on the domain. There’s no right answer for every domain. For any language that someone claims is “general purpose”, I will name a domain where virtually no programs in it are written (for good reasons).
                                                                                                     • Almost all language features interact, so what is right for one language is wrong for another.
                                                                                                    • Some things are subjective, like the two syntax rules you propose. They’re also dependent on the language.
                                                                                                    1. 3

                                                                                                       Kind of agree with your points, but I believe there is a reasonable subset of topics where one can provide a conclusive verdict based on decades of languages trying various approaches; for instance, that abusing <> for generics is a mistake, or that ident: Type is the better declaration syntax.

                                                                                                  1. 24

                                                                                                    If your score is above your post count you’re doing fantastic. I don’t think anyone in this community treats them as a popularity contest.

                                                                                                    1. 7

                                                                                                       I also don’t have that impression. There are a couple of highly active users here, and they have a lot of karma, but I can also tell that just by seeing their names all the time.

                                                                                                       Personally, I rarely look up the total score of others, and I mostly look at mine as a generic “people took interest in my comments this week” signal. I don’t care that much about the long-term sum, but I do somewhat care about the uptake after investing a lot of time in the platform.

                                                                                                      I sometimes look up the average score of others, but only out of curiosity. That happens maybe once every three months.

                                                                                                       It was useful info for me, though, when I was new to this community. Am I talking to a regular? A newbie? A lurker? This is all useful context.

                                                                                                      1. 3

                                                                                                         It was useful info for me, though, when I was new to this community. Am I talking to a regular? A newbie? A lurker? This is all useful context.

                                                                                                         This is the important part for me. It would be fine to hide the specific score and only show a classification like “newbie”, “link poster”, “active commenter”, or “senior”, which could also take more data into account, like account age and rank.

                                                                                                      2. 2

                                                                                                         This is actually a brilliant idea! I think showing the average would be way more helpful and would guide participants toward writing fewer, higher-quality comments.

                                                                                                         I think it’s way more useful:

                                                                                                         Consider that someone who has written 10 comments with a total score of 100 (an average of 10 per comment) adds much more to a debate than a user with 1000 comments and a score of 1000 (an average of 1).

                                                                                                         Currently, the user with the better contributions looks worse than the person with the lower-quality contributions.

                                                                                                        1. 5

                                                                                                           would guide participants toward writing fewer, higher-quality comments.

                                                                                                           That assumes that comment votes equal quality. They really don’t. A vote means it’s what one or more people in that thread, in that context, wanted to see, what a pile of people didn’t, or something in between. The metric is inconsistent: many comments with real information in them get either no votes or just one, and on hot-button topics taking a certain position always gets a vote boost.

                                                                                                           There are enough problems connecting comment votes to any objective metric of quality that I don’t use averages for it. My guess is that anything over 2 is probably OK. People’s responses to individual comments, or private messages, have been a more reliable indicator for me.

                                                                                                          1. 4

                                                                                                             I think showing the average would be way more helpful and would guide participants toward writing fewer, higher-quality comments.

                                                                                                             I don’t think so; a high average score often only shows who’s expressing popular opinions, because those receive a lot of upvotes.

                                                                                                            edit: s/get/receive

                                                                                                          2. 1

                                                                                                            Thanks for the reply. I will meditate on that.

                                                                                                          1. 6

                                                                                                             Interestingly, my argument for why I don’t like macros goes along those lines: if people reach for macros to work around something repetitive, it’s probably a flaw in the host language.

                                                                                                            I’m not saying they are bad, just that I don’t like them.

                                                                                                             Languages that are fundamentally built on macros are obviously excluded.

                                                                                                            1. 4

                                                                                                               Ok, so it’s a flaw in the host language. Isn’t it nice to have macros to work around it?

                                                                                                              1. 7

                                                                                                                 Possible outcome: every project works around the problem in its own slightly incompatible way, and no-one bothers fixing the problem in the host language because it’s easy enough to work around.

                                                                                                                I like macros as a way to cheaply prototype proposed language changes. I don’t want to see them in production code; debugging from the output of a (nonstandardised) code generator is awful but still easier than debugging from the input, which is effectively what the choice between code generation and macros boils down to.

                                                                                                                1. 8

                                                                                                                  I like macros as a way to cheaply prototype proposed language changes. I don’t want to see them in production code; debugging from the output of a (nonstandardised) code generator is awful but still easier than debugging from the input, which is effectively what the choice between code generation and macros boils down to.

                                                                                                                   This has, by the way, happened with Rust’s “try!()” (which, after some modifications, became the “?” operator).
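
                                                                                                                   From memory, the pattern looked roughly like this (a sketch with made-up function names; the real try! also performed an error-type conversion via From, which I’m glossing over here):

                                                                                                                   use std::num::ParseIntError;

                                                                                                                   // What a use of the old try! macro effectively expanded to:
                                                                                                                   fn double_with_match(s: &str) -> Result<i32, ParseIntError> {
                                                                                                                       let n = match s.parse::<i32>() {
                                                                                                                           Ok(v) => v,
                                                                                                                           Err(e) => return Err(e),
                                                                                                                       };
                                                                                                                       Ok(n * 2)
                                                                                                                   }

                                                                                                                   // The same early return, now built into the language as `?`:
                                                                                                                   fn double_with_question_mark(s: &str) -> Result<i32, ParseIntError> {
                                                                                                                       let n = s.parse::<i32>()?;
                                                                                                                       Ok(n * 2)
                                                                                                                   }

                                                                                                                   fn main() {
                                                                                                                       assert_eq!(double_with_match("21"), Ok(42));
                                                                                                                       assert_eq!(double_with_question_mark("21"), Ok(42));
                                                                                                                       assert!(double_with_question_mark("nope").is_err());
                                                                                                                   }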

                                                                                                                  1. 2

                                                                                                                    Reminds me of Rust’s primary use of macros: emulating the varargs that the language lacks.
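
                                                                                                                     For example (a minimal sketch of my own, with a made-up sum! macro): a macro can accept any number of arguments by matching a repeated pattern, which a plain Rust function cannot.

                                                                                                                     // Fakes variadic arguments by matching zero or more expressions.
                                                                                                                     macro_rules! sum {
                                                                                                                         ( $( $x:expr ),* ) => {{
                                                                                                                             let mut total = 0;
                                                                                                                             $( total += $x; )*
                                                                                                                             total
                                                                                                                         }};
                                                                                                                     }

                                                                                                                     fn main() {
                                                                                                                         assert_eq!(sum!(5), 5);
                                                                                                                         assert_eq!(sum!(1, 2, 3, 4), 10);
                                                                                                                         println!("{} {}", sum!(1, 2), sum!(1, 2, 3)); // println! itself is variadic the same way
                                                                                                                     }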

                                                                                                                    My cardinal rule about macros is that if I have to know that something is a macro, then the macro is broken and the author is to blame.

                                                                                                                    Rust also messed up in that regard by giving macro invocations special syntax, which acted as an encouragement to macro authors to go overboard with them because “the user immediately sees that it is a macro” – violating the cardinal rule about macros.

                                                                                                                  2. 2

                                                                                                                     Yup, the alternatives are duplication/boilerplate or external codegen until the language catches up. Macros are a decent way to make problems more tractable in the short term (unless you’re in a wondrous language like Racket), or even to prototype features before they are implemented in the full language. I’d love to see more metaprogramming with a basis in dependent types, but alas, there’s still lots of work to be done before that is workable for the average programmer.

                                                                                                                    1. 1

                                                                                                                      Sure, that’s why I said they aren’t bad, I just don’t like them.

                                                                                                                       On the other hand, I also don’t have any problem with codegen over macros; it’s basically the same thing at a different phase.

                                                                                                                    2. 1

                                                                                                                       Say this flaw only becomes obvious a couple of years after the language’s release. In that case the fix may break some subset of existing code, which is arguably worse than including macros in the language. I don’t know where I want to go with this strawman-like argument, other than to say that language design is hard and macros let the users of the language make up for the designers’ deficiencies.

                                                                                                                      1. 1

                                                                                                                         I totally appreciate that. I just don’t see “does the language have macros” as the issue people make it out to be. For example, languages with very expressive metaprogramming systems, like Ruby, have purposefully not included macros and are doing fine.

                                                                                                                        Macros are often an incredibly complex and problematic fix for this, though. Just the patterns list of the Rust macros book is huge and advanced: https://danielkeep.github.io/tlborm/book/pat-README.html

                                                                                                                        (Other languages have nicer systems, I know, but the issue persists: textual replacement is a mess)

                                                                                                                         I totally see their place; for example, we couldn’t define a type-checked println!("{:?}", myvalue) in Rust proper without adding a lot of additional things to the language.

                                                                                                                    1. 14

                                                                                                                           It seems the entire premise of this post rests on the fact that some package managers always use the latest version by default. The go dep tool you mentioned in the footnotes will, in my experience, use the newest version but then pin to it. Additionally, the Go community relies on several tools like gopherpit or gopkg.in to pin major versions via branches.

                                                                                                                           Semantic Versioning isn’t broken; it’s misused, yes, but applied correctly it’s a good methodology for managing breakage in machine-visible APIs.

                                                                                                                      Using digest hashes or renaming when behaviour changes is not a method that can be easily understood by both humans and machines; digest hashes have no meaning to a human and renaming a method will require pulling up the changelog and/or documentation.

                                                                                                                           I think it’s a bit unfair to only observe default behaviour when proper usage can have much more power. If I bother to pin versions in RubyGems or NPM, then the entire argument kinda collapses. That includes the part where you merge the minor and revision numbers because “[…] the distinction between minor versions and patch levels is moot”.

                                                                                                                      1. 4

                                                                                                                        I think it’s a bit unfair to only observe default behaviour when proper usage can have much more power. If I bother to pin versions in RubyGems or NPM, then the entire argument kinda collapses.

                                                                                                                             That’s not what I’m saying at all. We are in agreement about what “proper usage” is in today’s world. However, this “proper usage”, with everyone narrowly protecting just themselves, leads to the issues observed by Steve Losh and Rich Hickey: trying to provide short-term flexibility ends up polluting the well for everyone. I’m just trying to flesh out a couple of details on how an alternative might work.

                                                                                                                        1. 1

                                                                                                                               Agree. Package managers should use the exact versions specified in the build; otherwise they are not package managers but “random bits from the internet” downloaders.

                                                                                                                               Something that is often overlooked is that there are a lot of things library developers can do to provide smooth migration paths:

                                                                                                                          Deprecation

                                                                                                                               Deprecations should not only be versioned to tell people when things will change or be removed, but should also describe precisely what is being deprecated.

                                                                                                                          Migration

                                                                                                                               There is no reason why library authors shouldn’t be able to ship a description of how existing code needs to be changed, which tools can read and apply automatically.

                                                                                                                          Taking these two things together, you end up with something like

                                                                                                                               @willChange(what: Change, when: Version, why: String, how: Tree => Tree)
                                                                                                                               sealed trait Change
                                                                                                                               object Change {
                                                                                                                                 case object Removal       extends Change
                                                                                                                                 case object NoInheritance extends Change
                                                                                                                                 case object NoOverriding  extends Change
                                                                                                                                 case object Behavior      extends Change
                                                                                                                                 // ...
                                                                                                                               }

                                                                                                                               This will not work in every case, but even just automating away 60% of the changes would have a profound effect on how people deal with dependency updates and changes.