1. 36

    I think this will not succeed for the same reason that RSS feeds have not (or REST). The problem with “just providing the data” is that businesses don’t want to just be data services.

    They want to advertise to you, watch what you’re doing, funnel you through their sales paths, etc. This is why banks have never ever (until recently, UK is slowly developing this) provided open APIs for viewing your bank statement.

    This is why businesses LOVE apps and hate web sites, always bothering you to install their app. It’s like being in their office. When I click a link from the reddit app, it opens a temporary view of the link. When I’m done reading, it takes me back to the app. I remain engaged in their experience. On the web your business page is one click away from being forgotten. The desire to couple display mechanism with model is strong.

    The UK government is an exception, they don’t gain monetary value from your visits. As a UK citizen and resident, I can say that their web site is a fantastically lucid and refreshing experience. That’s because their goal is, above all, to inform you. They don’t need to “funnel” me to pay my taxes, because I have to do that by law anyway. It’s like reading Wikipedia.

    I would love web services to all provide a semantic interface with automatically understandable schemas. (And also terminal applications, for that matter). But I can’t see it happening until a radical new business model is developed.

    1. 5

      This is why banks have never ever (until recently, UK is slowly developing this) provided open APIs for viewing your bank statement.

      This has happened in all EU/EEA countries after the Payment Services Directive was updated in 2016 (PSD2). It went into effect in September 2019, as far as I remember. It’s been great to see how this open banking has made it possible for new companies to create apps that can e.g. gather your account details across different banks instead of having to rely on the banks’ own (often terrible) apps.

      1. 6

        The problem with PSD2, to my knowledge, is that it forces banks to create an API and open access for Account Information Service Providers and Payment Initiation Service Providers, but not an API for you, the customer. So this seems to be a regulation that opens up your bank account to other companies (if you want), but not to the one person who should get API access. Registration as such a provider costs quite some money (I think 5 digits of Euros), so it’s not really an option to register yourself as a provider.

        In Germany, we already seem to have lots of apps for managing multiple bank accounts, because a protocol called HBCI seems to be common for accessing your own account. But now people who use this are afraid that banks could stop this service when they implement PSD2 APIs. And then multi-account banking would only be possible through third-party services, which probably live from collecting and selling your data.

        Sorry if something is wrong. I do not use HBCI, but that’s what I heard from other people.

        1. 1

          I work on Open Banking APIs for a UK credit card provider.

          A large reason I see that the data isn’t made directly available to the customer is because if the customer were to accidentally leak / lose their own data, the provider (HSBC, Barclays etc) would be liable, not you. That means lots of hefty fines.

          You’d also likely be touching some PCI data, so you’d need to be cleared / set up to handle that safely (or having some way to filter it before you received it).

          Also, it requires a fair bit of extra setup, and the use of certificate-based authentication (MTLS + signing request objects) means that, as it currently sits, you’d need one of those, which aren’t cheap as they’re all EV certs.

          It’s a shame, because the customer should get their data. But you may be able to work with intermediaries who provide an interface for that data and can do the hard work for you, e.g. https://www.openwrks.com/

          (originally posted at https://www.jvt.me/mf2/2019/12/7o91a/)

      2. 4

        Yes, this does seem like a naive view of why the web is what it is. It’s not always about content and data. For a government, this makes sense. They don’t need to track you or view your other browsing habits in order to offer you something else they’re selling. Other entities do not have the incentive to make their data easier to access or more widely available.

        1. 6

          That’s a very business-centric view of the web; there’s a lot more to the internet than businesses peddling things to you. As an example, take a look at the ecosystem around ActivityPub. There are millions of users on services like Mastodon, Pleroma, Pixelfed, PeerTube, and so on. All of them rely on being able to share data with one another to create a federation. All these projects directly benefit from exposing the data because the overall community grows, and it’s a cooperative effort as opposed to a competitive one.

          1. 3

            It’s a realistic view of the web. Sure, people who are generating things like blogs or tweets may want to share their content without monetizing you, but it’s not going to fundamentally change a business like a bank. What incentive is there for a bank to make their APIs open to you? Or an advertiser? Or a magazine? Or literally any business?

            There’s nothing stopping these other avenues (like the peer-based services you are referring to) from trying to be as open as possible, but it doesn’t mean the mainstream businesses are ever going to follow suit.

            I think it’s also noteworthy that there is very little interesting content on any of those distributed systems, which is why so many people end up going back to Twitter, Instagram, etc.

            1. 1

              My point is that I don’t see business as the primary value of the internet. I think there’s far more value in the internet providing a communication platform for regular people to connect, and that doesn’t need to be commercialized in any way. Businesses are just one niche, and it gets disproportionate focus in my opinion.

        2. 3

          Aye, currently there is little motivation for companies to share data outside silos.

          That mind-set isn’t really sustainable in the long term though, as it limits opportunity. Data likes to date, and there are huge opportunities once that becomes possible.

          The business models to make that worth pursuing are being worked on at high levels.

          1. 1

            Ruben Verborgh, one of the folks behind the Solid initiative 1, has a pretty good essay 2 that details a world in which storage providers compete to provide storage, and application providers compete on offering different views to data that you already own.

            Without getting into Solid any more in this post, I will say that there are a ton of websites run by governments, non-profits, personal blogs, or other situations where semantically available data would be a huge boon. I was looking through a page of NSF-funded research groups at McMurdo Station the other day 3, and finding out what each professor researched took several mouse clicks per professor. If this data were available semantically, a simple query would be enough to list the areas of research of every group and every professor.

            One can think of a world where brick-and-mortar businesses serve their data semantically on their website, and aggregators (such as Google Maps, Yelp, and TripAdvisor) can aggregate them, and enable others to use the data for these businesses without creating their own scrapers or asking a business to create their own API. Think about a world where government agencies and bureaucracies publish data and documents in an easy to query manner. Yes, the world of web applications is hard to bring the semantic web to due to existing incentives for keeping data siloed, but there are many applications today that could be tagged semantically but aren’t.

            1. 1

              The web has been always used mostly for fluff since day 1, and “web assembly” is going to make it more bloated, like the old browser-side java.

              The world needs user-centric alternatives once again.

            1. 9

              There are some clarifications and technical details in the comments on this issue: https://github.com/ninenines/cowboy/issues/1410

              1. 2

                Rust and Erlang are not the only programming languages with a good concurrency model. C is not dead or dying. Type systems do not objectively make it easier to program. Dynamically typed languages are not on the way out, obsolete or in any way bad. People do not need parametric polymorphism in their fucking text editor extension language. Emacs does not need a fucking web browser to ‘keep up with the kids’ or whatever ridiculous reasoning is being given. And Visual Studio Code isn’t even remotely close to Emacs in any respect at all, it’s a text editor you can write extensions for, just like virtually every other popular text editor in history. Taking lessons from Visual Studio Code is like taking lessons from Sublime Text or any other momentarily popular editor that is and will be forgotten about within a couple of years.

                What Emacs should be taking note from is not Visual Studio Code, it’s vim. Vim is everything Emacs is not: fast, ergonomic and with a language that’s PERFECT for writing very small snippets in, but an awful pain to write extensions with. Emacs Lisp is what you get if you write commands to your text editor in a proper programming language (100s of characters just to swap two shortcuts) while Vimscript is what you get if you write extensions to your text editor in a command language (sometimes harder to understand than TECO).

                Vim is also evidence that trying to fix extensions with bindings to extra languages is a terrible idea. Vimscript needs to be improved, not replaced. Emacs Lisp is the same: improvements are necessary, not its replacement with something else. It’s not just bad to replace Emacs Lisp entirely, adding a new language beside it and making Emacs Lisp ‘legacy’ also means that in order to access new parts of the Emacs infrastructure, extensions/modes need to be rewritten anyway.

                The contention made that C needs to go to make contributing to Emacs more accessible is, frankly, insane. C is one of the most widely known languages in the world. It’s very easy and accessible to learn. Rewriting core parts of Emacs in an obscure, esoteric language like Rust is not going to make contributing to Emacs easier, it’s going to make it much harder. It should also be made quite clear that Rust is not a ‘guaranteed safe language’ as is claimed here. Not even close. It’s not at all safe. It literally has a keyword called unsafe that escapes all the safety that the language provides. Under certain conditions that are very easy to check, Rust is totally safe, which is what it provides over C, and even outside those conditions, any spots that introduce unsafety are enumerated with grep -Re unsafe, but it’s absolutely untrue that Rust is safe, and any gradual move from C to Rust in Emacs would involve HUGE swathes of unsafe Rust to the point where it’s probably safer to do it in C++ than in Rust, simply because the infrastructure for checking C++ programs for safety is stronger than the infrastructure for checking Rust-programs-that-are-full-of-unsafe for safety.

                A very statically typed language like Rust makes no sense for a text editor like Emacs. The strength of Emacs is that it’s insanely dynamic. It has a relatively small bit written in C, and most of it is actually written in Elisp.

                1. 4

                  Yeah, I very much disagree with his desire to get away from a real Lisp. The value proposition of Emacs is very much about having a dynamic, dynamically-typed, dynamically-scoped (yes, I am aware of the optional lexical scoping) Lisp extension language.

                  You are right that vim makes simple things simple, but man-oh-man is VimScript a hideous misfeature of an extension language. I do think that Emacs could probably stand to have some more sugared ways to do some basic config out of the box — use-package is an example of improved keybinding, for example.

                  I wouldn’t mind a Rust-based runtime, but what I would really love to see is a Lisp-based runtime: a core in Common Lisp, with a full Elisp engine to run all the code from the last forty-three years, and with full programmability in Common Lisp.

                  But then, I’d also like to see an updated Common Lisp which fixes things like case, pathnames, argument ordering and a few other small bits. And I want a pony.

                  1. 3

                    Lisp is probably one of the few things that makes emacs actually interesting, and I’m not even an emacs user or a lisp user…

                  2. 3

                    It literally has a keyword called unsafe that escapes all the safety that the language provides

                    This is not true, and this (very common) misunderstanding highlights the fact that one should think about – for lack of a better term – marketing, even when creating the syntax for a programming language. unsafe sure makes it sound like all the safety features are turned off, when all it does is allow you to dereference raw pointers and read/write to mutable static variables – all under the auspices of the borrow checker, which is still active.
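
                    A tiny illustrative snippet (mine, not from the thread) showing both halves of that point: the raw-pointer dereference needs unsafe, but borrow checking is not disabled inside the block:

                    ```rust
                    fn main() {
                        let x: i32 = 42;
                        let p: *const i32 = &x;   // creating a raw pointer is ordinary safe code
                        let y = unsafe { *p };    // only the dereference requires unsafe
                        assert_eq!(y, 42);

                        let v = vec![1, 2, 3];
                        let first = &v[0];
                        // Still rejected by the borrow checker, unsafe block or not:
                        // unsafe { drop(v); }    // error: cannot move out of `v` while borrowed
                        println!("{} {}", y, first);
                    }
                    ```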

                    1. 2

                      It is true. As soon as unsafe appears in a program, all safety guarantees disappear. That’s literally why it’s called ‘unsafe’. The Rust meme that it doesn’t take away all the guarantees ignores that.

                      It turns off enough safety guarantees that you no longer can guarantee safety…

                  1. 14

                    Great talk, just one note:

                    Because, barring magic, you cannot send a function.

                    This is trivial in Erlang, even between nodes in a cluster, and it’s commonly used, not some obscure language feature. So there we go: I officially declare Erlang to be Dark Magic.

                    1. 4

                      I suppose it’s not that “you cannot send a function”, but more like “you cannot send a closure, if the language allows references which may not resolve from other machines”. Common examples are mutable data (if we want both parties to see all edits), pointers, file handles, process handles, etc.

                      I’m not too familiar with Erlang, but I imagine many of these can be made resolvable if we’re able to forward requests to the originating machine.

                      1. 7

                        It’s possible to implement this in Haskell, with precise control over how the closure gets de/serialized and what kinds of things can enter it. See the transient package for an example. This task is a great example of the things you can easily implement in a pure language, but very dangerous in impure ones.

                      2. 1

                        I don’t know Erlang, but… I can speculate that it doesn’t actually send the functions. It sends their bytecode representation. Or a pointer to the relevant address, if the two computers are guaranteed to share code. I mean, the function has to be transformed into a piece of data somehow.

                        1. 13

                          it doesn’t actually send the functions. It sends their bytecode representation

                          How is that different from not actually sending an integer, but sending its representation?

                          In Erlang any term can be serialized, including functions, so sending a function to another process/node isn’t different from sending any other term. The nodes don’t need to share code.

                          1> term_to_binary(fun (X) -> X + 1 end).
                          <<131,112,0,0,2,241,1,174,37,189,114,105,121,227,76,88,
                            139,139,101,146,181,186,175,0,0,0,7,0,0,...>>
                          
                          1. 1

                            Would that also work if the function contains free variables?

                            That is what’s the result of calling this function:

                            fun(Y) -> term_to_binary(fun (X) -> X + Y end) end.
                            

                            (Sorry for the pseudo-Erlang)

                            1. 5

                              Yep!

                              1> F1 = fun(Y) -> term_to_binary(fun (X) -> X + Y end) end.
                              #Fun<erl_eval.7.91303403>
                              2> F2 = binary_to_term(F1(2)).
                              #Fun<erl_eval.7.91303403>
                              3> F2(3).
                              5
                              

                              … or even

                              1> SerializedF1 = term_to_binary(fun(Y) -> term_to_binary(fun (X) -> X + Y end) end).
                              <<131,112,0,0,3,96,1,174,37,189,114,105,121,227,76,88,139,
                                139,101,146,181,186,175,0,0,0,7,0,0,...>>
                              2> F1 = binary_to_term(SerializedF1).
                              #Fun<erl_eval.7.91303403>
                              3> F2 = binary_to_term(F1(2)).
                              #Fun<erl_eval.7.91303403>
                              4> F2(3).
                              5
                              

                              The format is documented here: http://erlang.org/doc/apps/erts/erl_ext_dist.html

                            2. 0

                              How is that different from not actually sending an integer

                              Okay, it’s not. It’s just much more complicated. Sending an integer? Trivial. Sending a plain old data structure? Easy. Sending a whole graph? Doable. Sending code? Scary.

                              Sure, if you have a bytecode compiler and eval, sending functions is a piece of cake. Good luck doing that, however, without explicit language support. In C, for instance.

                              1. 6

                                You can do it, for instance, by sending a DLL file over a TCP connection and linking it into the application receiving it. It’s harder, it’s flakier, it’s platform-dependent, and it’s the sort of thing anyone sane will look at and say “okay but why though”. It’s just that Erlang is designed to be able to do that sort of thing easily, and accepts the tradeoffs necessary, and C/C++ are not.

                                1. 3

                                  The Morris Worm of 1988 used a similar method to infect new hosts.

                        1. 4

                          Response from Stack Overflow’s Architecture Lead: https://meta.stackoverflow.com/a/386499/459877

                          We are aware of it. We are not okay with it.

                          1. 1

                            And think of all the sites that are either not aware of it or completely okay with it. StackOverflow serves programmers, a group that is much more likely than the regular population to care or even just know about tracking. I doubt Facebook gives a damn about Google’s ads fingerprinting users–or even a small site like Bulbapedia (to pick a random example, no offense Bulbapedia.)

                          1. 5

                            I don’t think the author has enough experience using git to be dispensing advice on it.

                            You use reset if you want to locally drop one or more commits, moving the changes back into your working tree (--soft keeps them staged instead; --hard discards them entirely, which is, in my experience, rarely what you want). You use reset to change the local ‘history’, which is not a problem if you do it locally to e.g. create better commits before pushing to remote. You should never [1] use reset to drop a commit that has already been pushed to a remote repo shared with others.

                            You use revert if you need to ‘publish’ a revert of one or more commits that have already been pushed to a remote repo. revert is just a convenience command to append a new commit that exactly undoes one or more previous commits. It does not change the history.
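
                            To make the distinction concrete, here’s a sketch in a throwaway repo (assumes git is installed; the file and commit names are made up):

                            ```shell
                            # Throwaway repo comparing reset and revert.
                            set -e
                            git init -q demo && cd demo
                            git config user.email demo@example.com && git config user.name demo

                            echo one  > file.txt && git add file.txt && git commit -qm "add file"
                            echo two >> file.txt && git commit -qam "append line"

                            # reset drops the commit from local history; the edit stays in the working tree
                            git reset -q HEAD~1
                            git log --oneline           # only "add file" remains
                            git checkout -- file.txt    # discard the edit again for the next step

                            echo two >> file.txt && git commit -qam "append line"

                            # revert keeps history and appends a new commit that undoes the change
                            git revert --no-edit HEAD
                            git log --oneline           # Revert "append line" / append line / add file
                            ```

                            After the reset, a remote (if any) never learns the commit existed; after the revert, the undo is part of the public record.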

                            It’s recommended to use git revert instead of git reset in enterprise environment.

                            It’s recommended you try to make sure you don’t need to use git revert. Use git reset as much as you want. If you seem to need a push -f, you’re doing it wrong. Don’t ever [1] do that.

                            [1] For certain values of (n)ever. If you are absolutely sure you know what you’re doing, and your colleagues know as well, you can use push -f.

                            1. 5

                              You probably want the (unfortunately less ergonomic) --force-with-lease, not -f/--force, to avoid accidentally overwriting any changes that have happened after your last fetch, though.
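
                              A minimal sketch of the difference, using a local bare repo as a stand-in remote (all names here are made up):

                              ```shell
                              set -e
                              git init -q --bare remote.git
                              git clone -q remote.git work && cd work
                              git config user.email demo@example.com && git config user.name demo

                              echo a > f && git add f && git commit -qm "initial"
                              git push -q origin HEAD

                              git commit -q --amend -m "initial, reworded"   # rewrite local history

                              # --force would overwrite the remote unconditionally; --force-with-lease
                              # refuses if the remote branch has moved since our last fetch:
                              git push -q --force-with-lease origin HEAD
                              ```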

                              1. 1

                                I didn’t even know this existed until magit made it visible in its menus. After some research, --force-with-lease is almost always the right answer (though, YMMV)

                              2. 3

                                I’m comfortable resetting and force pushing to a WIP feature branch, but that’s a privilege of our workflow.

                                1. 4

                                  Another thing you can do is use git commit --fixup and then rebase autosquash at the end.

                                  I wonder if revert would be better understood if it were called “invert” instead.
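
                                  A sketch of that flow in a throwaway repo (made-up names; GIT_SEQUENCE_EDITOR=true just accepts the generated todo list so the rebase runs non-interactively):

                                  ```shell
                                  set -e
                                  git init -q demo2 && cd demo2
                                  git config user.email demo@example.com && git config user.name demo

                                  echo one > a && git add a && git commit -qm "add a"
                                  echo one > b && git add b && git commit -qm "add b"

                                  # Spot a mistake in "add a"; record the correction as a fixup commit:
                                  echo fixed > a
                                  git commit -qa --fixup HEAD~1

                                  # Later, fold every fixup! commit into its target in one pass:
                                  GIT_SEQUENCE_EDITOR=true git rebase -qi --autosquash --root
                                  git log --oneline   # back to two commits: "add b", "add a"
                                  ```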

                                  1. 2

                                    That looks useful! Thanks!

                                2. 2

                                  Reset without --hard is rarely what I want. :)

                                  I use it for example when I merged in master and made a few mistakes resolving conflicts. To try a new merge, I reset hard. Well, I usually reset, then notice I forgot --hard, and then checkout.

                                  To clean up my local history, I usually do rebase -i instead of reset.

                                  I also use reset to undo a rebase. Usually when resolving conflicts failed. That might be my general theme: Use reset to make another try resolving conflicts.

                                  1. 1

                                    rebase -i is the best way to clean up commits (squash/drop/reorder/change comment) if you know how to use it well. sometimes though a reset –hard is nice to just drop changes that you don’t want but still have commits you like from the branch.

                                1. 5

                                  We started using Buildkite in 2014, and I have only good things to say about it. I remember we jokingly called it Keith-as-a-service around the office in the beginning because Keith, their CTO, would always go above and beyond in helping us with any issues we had.

                                  1. 9

                                    This is exactly on the money. With Marzipan, the message from Apple can easily be interpreted as “we don’t want great Mac apps on the Mac, we want apps on the Mac”. The folks who wrote that Electron was a scourge should be considering this scourge alongside it.

                                    1. 14

                                      The folks who wrote that Electron was a scourge should be considering this scourge alongside it.

                                      Quote from the article that called Electron a scourge:

                                      Even Apple, of all companies, is shipping Mac apps with glaring un-Mac-like problems. The “Marzipan” apps on MacOS 10.14 Mojave — News, Home, Stocks, Voice Memos — are dreadfully bad apps. They’re functionally poor, and design-wise foreign-feeling. I honestly don’t understand how Apple decided it was OK to ship these apps.

                                    1. 4

                                      staged = diff --cached

                                      You can actually use --staged instead of --cached, which is much easier to remember. (You might still want to keep the alias, of course.)
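
                                      Quick check, in a throwaway repo, that the two spellings produce identical output:

                                      ```shell
                                      set -e
                                      git init -q demo3 && cd demo3
                                      git config user.email demo@example.com && git config user.name demo

                                      echo one > f && git add f && git commit -qm "initial"
                                      echo two >> f && git add f

                                      git diff --staged   # same bytes as...
                                      git diff --cached   # ...this
                                      ```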

                                      1. 5

                                        Terry’s was one of the more interesting interviews I did on usesthis.com.

                                        1. 2

                                          Do you have a link to it?

                                            1. 2

                                              What an interesting interview and a fascinating site! Thanks for sharing.

                                          1. 81

                                            I beg all my fellow crustaceans to please, please use Firefox. Not because you think it’s better, but because it needs our support. Technology only gets better with investment, and if we don’t invest in Firefox, we will lose the web to Chrome.

                                            1. 59

                                              Not because you think it’s better

                                              But that certainly helps too. It is a great browser.

                                              • privacy stuff — the cookie container API for things like Facebook Container, built-in tracker blocker, various anti-fingerprinting things they’re backporting from the Tor Browser
                                              • honestly just the UI and the visual design! I strongly dislike the latest Chrome redesign >_<
                                              • nice devtools things — e.g. the CSS Grid inspector
                                              • more WebExtension APIs (nice example: only on Firefox can Signed Pages actually prevent the page from even loading when the signature check fails)
                                              • the fastest (IIRC) WASM engine (+ now in Nightly behind a pref: even better codegen backend based on Cranelift)
                                              • ongoing but already usable Wayland implementation (directly in the official tree now, not as a fork)
                                              • WebRender!!!
                                              1. 7

                                                On the other hand, WebSocket debugging (mostly frame inspection) is impossible in Firefox without an extension. I try not to install any extensions that I don’t absolutely need and Chrome has been treating me just fine in this regard[1].

                                                Whether or not I agree with Google’s direction is now a moot point. I need Chrome to do what I do with extensions.

                                                As soon as Firefox supports WebSocket debugging natively, I will be perfectly happy to switch.

                                                [1] I mostly oppose extensions because of questionable maintenance cycles. I allow uBlock and aXe because they have large communities backing them.

                                                1. 3

                                                  Axe (https://www.deque.com/axe/) seems amazing. I know it wasn’t the focus of your post – but I somehow missed this when debugging an accessibility issue just recently, I wish I had stumbled onto it. Thanks!

                                                  1. 1

                                                    You’re welcome!

                                                    At $work, we used aXe and NVDA to make our webcomponents AA compliant with WCAG. aXe was invaluable for things like contrast and missing role attributes.

                                                  2. 3

                                                    WebSocket debugging (mostly frame inspection) is impossible in Firefox without an extension

                                                    Is it possible with an extension? I can’t seem to find one.

                                                    1. 1

                                                      I have never needed to debug WebSockets and see no reason for that functionality to bloat the basic browser for everybody. Too many extensions might not be a good thing but if you need specific functionality, there’s no reason to hold back. If it really bothers you, run separate profiles for web development and browsing. I have somewhat more than two extensions and haven’t had any problems.

                                                      1. 1

                                                        I do understand your sentiment, but the only extension that I see these days is marked “Experimental”.

                                                        On the other hand, I don’t see how it would “bloat” a browser very much. (Disclaimer: I have never written a browser or contributed to any. I am open to being proved wrong.) I have written a WebSockets library myself, and it’s not a complex protocol. It can’t be too expensive to update a UI element on every (websocket) frame.

                                                    2. 5

                                                      Yes! I don’t know about you, but I love the fact that Firefox uses so much less ram than chrome.

                                                      1. 2

                                                        This was one of the major reasons I stuck with FF for a long time. It is still a pronounced difference.

                                                      2. 3

                                                        honestly just the UI and the visual design! I strongly dislike the latest Chrome redesign >_<

                                                        Yeah, what’s the deal with the latest version of Chrome? All those bubbly menus feel very mid-2000s. Everything old is new again.

                                                        1. 3

                                                          I found a way to go back to the old ui from https://www.c0ffee.net/blog/openbsd-on-a-laptop/ (it was posted here a few weeks ago):

                                                          Also, set the following in chrome://flags:

                                                          • Smooth Scrolling: (personal preference)
                                                          • UI Layout for the browser’s top chrome: set to “Normal” to get the classic Chromium look back
                                                           • Identity consistency between browser and cookie jar: set to “Disabled” to keep Google from hijacking any Google login to sign you into Chrome
                                                          • SafeSearch URLs reporting: disabled

                                                          (emphasis mine)

                                                          1. 1

                                                            Aaaaaaaand they took out that option.

                                                        2. 1

                                                          The Wayland implementation is not usable quite yet, though, but it is close. I tried it under Sway, but it was crashy.

                                                        3. 16

                                                          I switched to Firefox last year, and I have to say I don’t miss Chrome in the slightest.

                                                          1. 13

                                                            And those with a little financial liberty, consider donating to Mozilla. They do a lot of important work for a free and open web.

                                                            1. 10

                                                              I recently came back to Firefox from Vivaldi. That’s another Chromium/WebKit-based browser, and it’s closed source to boot.

                                                              Firefox has improved greatly in speed as of late, and I feel like we’re back in the era of the mid-2000s, asking people to choose Firefox over Chrome this time instead of IE.

                                                              1. 2

                                                                I’d love to switch from Vivaldi, but it’s simply not an option given the current (terrible) state of vertical tab support in Firefox.

                                                                1. 2

                                                                  How is it terrible? Hiding the regular tab bar isn’t exposed through an API yet and you have to use CSS for that, sure, but there are some very good tree-style tab WebExtensions.
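                                                                  For reference, the “use CSS for that” part usually means a userChrome.css override. This is a commonly cited sketch, not an official API; the exact selector and the preference gating user stylesheets can vary between Firefox versions:

                                                                  ```css
                                                                  /* userChrome.css: put this file in the "chrome" folder inside your
                                                                     Firefox profile directory and restart the browser. On newer
                                                                     versions you may also need to set
                                                                     toolkit.legacyUserProfileCustomizations.stylesheets to true in
                                                                     about:config for it to be loaded at all. */
                                                                  #TabsToolbar {
                                                                    visibility: collapse !important; /* hide the horizontal tab strip */
                                                                  }
                                                                  ```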

                                                                  1. 2

                                                                    The extensions are all terrible – but what’s more important is that I lost the belief that any kind of vertical tab functionality has any chance of long-term survival. Even if support was added now, it would be a constant battle to keep it and I’m frankly not interested in such fights anymore.

                                                                    Mozilla is chasing their idealized “average user” and is determined to push everyone into their one-size-fits-all idea of user interface design – anyone not happy with that can screw off, if it were up to Mozilla.

                                                                    It’s 2018 – I don’t see why I even have to argue for vertical tabs and mouse gestures anymore. I just pick a browser vendor which hasn’t been asleep at the wheel for the last 5 years and ships with these features out of the box.

                                                                    And if the web in the future ends up as some proprietary API defined by whatever Google Chrome implements, because Firefox went down, Mozilla has only itself to blame.

                                                                    1. 2

                                                                      The extensions are all terrible – but what’s more important is that I lost the belief that any kind of vertical tab functionality has any chance of long-term survival. Even if support was added now, it would be a constant battle to keep it and I’m frankly not interested in such fights anymore.

                                                                      The whole point of moving to WebExtensions was long-term support. They couldn’t make significant changes without breaking a lot of the old extensions. The whole point was to unhook extensions from the internals so they can refactor around them and keep supporting them.

                                                                      1. 0

                                                                        That’s like a car manufacturer removing all electronics from a car – sure it makes the car easier to support … but now the car doesn’t even turn on anymore!

                                                                        Considering that cars are usually used for transportation, not for having them sit in the garage, you shouldn’t be surprised that customers buy other cars in the future.

                                                                        (And no, blaming “car enthusiasts” for having unrealistic expectations, like it happens in the case of browser users, doesn’t cut it.)

                                                                        1. 3

                                                                          So you’d rather they didn’t improve it at all? Or would you rather they broke most extensions every release?

                                                                          1. 3

                                                                            I’m not @soc, but I wish Firefox had delayed their disabling of old-style extensions in Firefox 57 until they had replicated more of the old functionality with the WebExtensions API – mainly functionality related to interface customization, tabs, and sessions.

                                                                            Yes, during the time of that delay, old-style extensions would continue to break with each release, but the maintainers of Tree Style Tabs and other powerful extensions had already been keeping up with each release by releasing fixed versions. They probably could have continued updating their extensions until WebExtensions supported their required functionality. And some users might prefer to run slightly-buggy older extensions for a bit instead of switching to the feature-lacking new extensions straight away – they should have that choice.

                                                                            1. 1

                                                                              What’s the improvement? The new API was so bad that they literally had to pull the plug on the existing API to force extension authors to migrate. That just doesn’t happen when an API is “good”; developers are usually eager to adopt it and migrate their code.

                                                                              Let’s not accuse people you disagree with of being “against improvements” – it’s just that the improvements have to actually exist, and in this case the API clearly wasn’t ready. This whole fiasco feels like another instance of CADT-driven development and the failure of management to rein it in.

                                                                              1. 3

                                                                                The old extension API provided direct access to the JavaScript context of both the chrome and the tab within a single thread, so installing an XUL extension was disabling multiprocess mode. Multiprocess mode seems like an improvement; in old Firefox, a misbehaving piece of JavaScript would lock up the browser for about a second before eventually popping up a dialog offering to kill it, whereas in a multiprocess browser, it should be possible to switch and close tabs no matter what the web page inside does. The fact that nobody notices when it works correctly seems to make it the opposite of Attention-Deficient-Driven-Design; it’s the “focus on quality of implementation, even at the expense of features” design that we should be encouraging.

                                                                                The logical alternative to “WebExtension For The Future(tm)” would’ve been to just expose all of the relevant threads of execution directly to the XUL extensions. run-this-in-the-chrome.xul and run-this-in-every-tab.xul and message pass between them. But at that point, we’re talking about having three different extension APIs in Firefox.

                                                                                Which isn’t to say that I think you’re against improvement. I am saying that you’re thinking too much like a developer, and not enough like the poor sod who has to do QA and Support triage.

                                                                                1. 2

                                                                                  Improving the actual core of Firefox. They’re basically ripping out and replacing large components every other release. This would have broken a large number of plugins constantly. Hell, plugins wouldn’t even work in Nightly. I do agree with @roryokane that they should have tried to improve it before cutting support. The new API is definitely missing many things, but it was the right decision to make for the long-term stability of Firefox.

                                                                                  1. 1

                                                                                    They could have made the decision to ax the old API after extension authors adopted the new one. That adoption failed so hard that they had to force developers onto the new API speaks for itself.

                                                                                    I’d rather have extensions that I have to fix from time to time than no working extensions at all.

                                                                          2. 1

                                                                            Why should Mozilla care that much about your niche use case? They already have a ton of stuff to deal with and barely enough funding.

                                                                            It’s open source, make your own VerticalTabFox fork :)

                                                                            1. 3

                                                                              Eh … WAT? Mozilla went the extra mile with their recent extension API changes to make things – that worked before – impossible to implement with a recent Firefox version. The current state of tab extensions is this terrible, because Mozilla explicitly made it this way.

                                                                              I used Firefox for more than 15 years – the only thing I wanted was to be left alone.

                                                                              It’s open source, make your own VerticalTabFox fork :)

                                                                              Feel free to read my comment above to understand why that doesn’t cut it.

                                                                              Also, Stuff that works >> open source. Sincerely, a happy Vivaldi user.

                                                                              1. 2

                                                                                It’s one of the laws of the internet at this point: Every thread about Firefox is always bound to attract someone complaining about WebExtensions not supporting their pet feature that was possible with the awful and insecure old extension system.

                                                                                If you care about “non terrible” (whatever that means — Tree Style Tab looks perfect to me) vertical tabs more than anything — sure, use a browser that has them.

                                                                                But you seem really convinced that Firefox could “go down” because of not supporting these relatively obscure power user features well?? The “average user” they’re “chasing” is not “idealized”. The actual vast majority of people do not choose browsers based on vertical tabs and mouse gestures. 50% of Firefox users do not have a single extension installed, according to telemetry. The majority of the other 50% probably only have an ad blocker.

                                                                                1. 3

                                                                                  If you care about “non terrible” (whatever that means — Tree Style Tab looks perfect to me) vertical tabs more than anything — sure, use a browser that has them.

                                                                                  If you compare the current state of the art of vertical tabs extensions, even Mozilla thinks they suck – just compare them to their own Tab Center experiment: https://testpilot.firefox.com/static/images/experiments/tab-center/details/tab-center-1.1957e169.jpg

                                                                                  Picking just one example: Having the navigation bar at a higher level of the visual hierarchy is just wrong – the tab panel isn’t owned by the navigation bar, the navigation bar belongs to a specific tab! Needless to say, all of the vertical tab extensions are forced to be wrong, because they lack the API to implement the UI correctly.

                                                                                  This is what my browser currently looks like, for comparison: https://i.imgur.com/5dTX8Do.png

                                                                                  But you seem really convinced that Firefox could “go down” because of not supporting these relatively obscure power user features well?? The “average user” they’re “chasing” is not “idealized”. The actual vast majority of people do not choose browsers based on vertical tabs and mouse gestures. 50% of Firefox users do not have a single extension installed, according to telemetry. The majority of the other 50% probably only have an ad blocker.

                                                                                  You can only go so far alienating the most loyal users who use Firefox for specific purposes before they stop installing/recommending it to their less technically-inclined friends and relatives.

                                                                                  Mozilla is so busy chasing after Chrome that it doesn’t even realize that most Chrome users will never switch. They use Chrome because “the internet” (www.google.com) told them so. As long as Mozilla can’t make Google recommend Firefox on their frontpage, this will not change.

                                                                                  Discarding their most loyal users while trying to get people to adopt Firefox who simply aren’t interested – this is a recipe for disaster.

                                                                              2. 1

                                                                                and barely enough funding

                                                                                Last I checked they pulled in half a billion in revenue (2016). Do you believe this is barely enough?

                                                                                1. 2

                                                                                  For hundreds of millions of users?

                                                                                  Yeah.

                                                                            2. 1

                                                                              At least with multi-row tabs done via CSS you can’t drag-and-drop tabs. That’s about as bad as it gets.

                                                                            3. 2

                                                                              Are vertical tabs so essential?

                                                                              1. 3

                                                                                Considering the change in screen ratios over the past ten years (displays get shorter and wider), yes, it absolutely is.

                                                                                With vertical tabs I can get almost 30 full-width tabs on screen, with horizontal tabs I can start fishing for the right tab after about 15, as the tab width gets increasingly smaller.

                                                                                Additionally, vertical tabs substantially reduce the distance the pointer has to travel when selecting a different tab.

                                                                                1. 1

                                                                                  I still miss them; losing them didn’t cripple me, but it really hurt. The other thing about Tree (not just vertical) tabs that FF used to have was that the subtree was contextual to the parent tree. So, when you opened a link in a background tab, it was opened in a new tab that was a child of your current tab. For things like documentation hunting and research it was amazing, and I still haven’t found its peer.

                                                                              2. 1

                                                                                It’s at least partially open source. They provide tarballs.

                                                                                1. 4

                                                                                  https://help.vivaldi.com/article/is-vivaldi-open-source/

                                                                                    The Chromium part is legally required to be open; the rest of their code is more like readable source. Don’t get me wrong, that’s way better than unreadable source, but it’s also very wut.

                                                                                  1. 2

                                                                                    Very wut. It’s a weird uneasy mix.

                                                                                    1. 2

                                                                                      that’s way better than unreadable source but it’s also very wut.

                                                                                      I wouldn’t be sure of that. It makes it auditable, but has legal ramifications should you want to build something like vivaldi, but free.

                                                                                2. 8

                                                                                  firefox does not get better with investment, it gets worse.

                                                                                  the real solution is to use netsurf or dillo or mothra, so that webmasters have to come to us and write websites that work with browsers that are simple enough to be independently maintained.

                                                                                  1. 9

                                                                                    Good luck getting more than 1‰ adoption 😉

                                                                                    1. 5

                                                                                      good luck achieving independence from Google by using a browser funded by Google

                                                                                      1. 1

                                                                                        I can achieve independence from Google without using netsurf, dillo, or mothra; to be quite honest, those will never catch on.

                                                                                        1. 2

                                                                                          can you achieve independence from google in a way that will catch on?

                                                                                          1. 1

                                                                                            I don’t think we’ll ever get the majority of browser share back into the hands of a (relatively) sane organization like Mozilla—but we can at least get enough people to make supporting alternative browsers a priority. On the other hand, the chances that web devs will ever feel pressured to support the browsers you mentioned, is close to nil. (No pun intended.)

                                                                                            1. 1

                                                                                              what is the value of having an alternative, if that alternative is funded by google and sends data to google by default?

                                                                                              1. 1

                                                                                                what is the value of having an alternative

                                                                                                What would you like me to say, that Firefox’s existence is worthless? This is an absurd thing to insinuate.

                                                                                                funded by google

                                                                                                No. I’m not sure whether you’re speaking in hyperbole, misunderstood what I was saying, and/or altogether skipped reading what I wrote. But this is just not correct. If Google really had Mozilla by the balls as you suggest, they would coerce them to stop adding privacy features to their browser that, e.g., block Google Analytics on all sites.

                                                                                                sends data to google by default

                                                                                                Yes, though it seems they’ve been as careful as one could be about this. Also to be fair, if you’re browsing with DNT off, you’re likely to get tracked by Google at some point anyway. But the fact that extensions can’t block this does have me worried.

                                                                                                1. 1

                                                                                                  i’m sorry if i misread something you wrote. i’m just curious what benefit you expect to gain if more people start using firefox. if everyone switched to firefox, google could simply tighten their control over mozilla (continuing the trend of the past 10 years), and they would still have control over how people access the web.

                                                                                                  1. 1

                                                                                                    It seems you’re using “control” in a very abstract sense, and I’m having trouble following. Maybe I’m just missing some context, but what concrete actions have Google taken over the past decade to control the whole of Mozilla?

                                                                                                    1. 1

                                                                                                      Google has pushed through complex standards such as HTTP/2 and new rendering behaviors, which Mozilla implements in order to not “fall behind.” They are able to implement and maintain such complexity due to funding they receive from Google, including their deal to make Google the default search engine in Firefox (as I said earlier, I couldn’t find any breakdown of what % of Mozilla’s funding comes from Google).

                                                                                                      For evidence of the influence this funding has, compare the existence of Mozilla’s Facebook Container to the non-existence of a Google Container.

                                                                                                      1. 1

                                                                                                        what % of Mozilla’s funding comes from Google

                                                                                                        No word on the exact breakdown. Visit their 2017 report and scroll all the way to the bottom, and you’ll get a couple of helpful links. One of them is to a wiki page that describes exactly what each search engine gets in return for their investment.

                                                                                                        I would also like to know the exact breakdown, but I’d expect all those companies would get a little testy if the exact amount were disclosed. And anyway, we know what the lump sum is (around half a billion), and we can assume that most of it comes from Google.

                                                                                                        the non-existence of a Google Container

                                                                                                        They certainly haven’t made one themselves, but there’s nothing stopping others from forking one off! And anyway, I think it’s more so fear on Mozilla’s part than any concrete warning from Google against doing so.

                                                                                                        Perhaps this is naïveté on my part, but I really do think Google just want their search engine to be the default for Firefox. In any case, if they really wanted to exert their dominance over the browser field, they could always just… you know… stop funding Mozilla. Remember: Google is in the “web market” first & the “software market” second. Having browser dominance is just one of many means to the same end. I believe their continued funding of Mozilla attests to that.

                                                                                                        1. 2

                                                                                                          It doesn’t have to be a direct threat from Google to make a difference. Direct threats are a very narrow way in which power operates and there’s no reason that should be the only type of control we care about.

                                                                                                          Yes Google’s goal of dominating the browser market is secondary to their goal of dominating the web. Then we agree that Google’s funding of Firefox is in keeping with their long-term goal of web dominance.

                                                                                                          if they really wanted to exert their dominance over the browser field, they could always just… you know… stop funding Mozilla.

                                                                                                          Likewise, if Firefox was a threat to their primary goal of web dominance, they could stop funding Mozilla. So doesn’t it stand to reason that using Firefox is not an effective way to resist Google’s web dominance? At least Google doesn’t think so.

                                                                                                          1. 1

                                                                                                            Likewise, if Firefox was a threat to their primary goal of web dominance, they could stop funding Mozilla. So doesn’t it stand to reason that using Firefox is not an effective way to resist Google’s web dominance?

                                                                                                            You make some good points, but you’re ultimately using the language of a “black or white” argument here. In my view, if Google were to stop funding Mozilla they would still have other sponsors. And that’s not to mention the huge wave this would make in the press—even if most people don’t use Firefox, they’re at least aware of it. In a strange sense, Google cannot afford to stop funding Mozilla. If they do, they lose their influence over the Firefox project and get huge backlash.

                                                                                                            I think this is something the Mozilla organization were well aware of when they made the decision to accept search engines as a funding source. They made themselves the center of attention, something to be competed over. And in so doing, they ensured their longevity, even as Google’s influence continued to grow.

                                                                                                            Of course this has negative side effects, such as companies like Google having influence over them. But in this day & age, the game is no longer to be free of influence from Google; that’s Round 2. Round 1 is to achieve enough usage to exert influence on what technologies are actually adopted. In that sense, Mozilla is at the discussion table, while netsurf, dillo, and mothra (as much as I’d love to love them) are not and likely never will be.

                                                                                      2. 3

                                                                                        Just switch to Gopher.

                                                                                        1. 5

                                                                                          Just switch to Gopher

                                                                                          I know you were joking, but I do feel like there is something to be said for the simplicity of systems like gopher. The web is so complicated nowadays that building a fully functional web browser requires software engineering on a grand scale.

                                                                                          1. 3

                                                                                            yeah. i miss when the web was simpler.

                                                                                            1. 1

                                                                                              I was partially joking. I know there are new ActivityPub tools like Pleroma that support Gopher, and I’ve thought about adding support to generate/serve gopher content for my own blog. I realize it’s still kinda a joke within the community, but you’re right about there being something simple about just having content without all the noise.

                                                                                        2. 1

                                                                                          Unless more than (rounded) 0% of people use it for Facebook, it won’t make a large enough blip for people to care. Also this is how IE was dominant, because so much only worked for them.

                                                                                          1. 1

                                                                                            yes, it would require masses of people. and yes it won’t happen, which is why the web is lost.

                                                                                        3. 2

                                                                                          I’ve relatively recently switched to FF, but still use Chrome for web dev. The dev tools still seem quite a bit more advanced, and the browser is much less likely to lock up completely if I have a JS issue that’s chewing CPU.

                                                                                          1. 2

                                                                                            I tried to use Firefox on my desktop. It was okay, not any better or worse than Chrome for casual browsing apart from private browsing Not Working The Way It Should relative to Chrome (certain cookies didn’t work across tabs in the same Firefox private window). I’d actually want to use Firefox if this was my entire Firefox experience.

                                                                                            I tried to use Firefox on my laptop. Site icons from bookmarks don’t sync for whatever reason (I looked up the ticket and it seems to be a policy problem where the perfect is the enemy of the kinda good enough), but it’s just a minor annoyance. The laptop is also pretty old and for that or whatever reason has hardware accelerated video decoding blacklisted in Firefox with no way to turn it back on (it used to work a few years ago with Firefox until it didn’t), so I can’t even play 720p YouTube videos at an acceptable framerate and noise level.

                                                                                            I tried to use Firefox on my Android phone. Bookmarks were completely useless with no way to organize them. I couldn’t even organize on a desktop Firefox and sync them over to the phone since they just came out in some random order with no way to sort them alphabetically. There was also something buggy with the history where clearing history didn’t quite clear history (pages didn’t show up in history, but links remained colored as visited if I opened the page again) unless I also exited the app, but I don’t remember the details exactly. At least I could use UBO.

                                                                                            This was all within the last month. I used to use Firefox before I used Chrome, but Chrome just works right now.

                                                                                            1. 6

                                                                                              I definitely understand that Chrome works better for many users, and you gave some good examples of where Firefox fails. My point was that people need to use and support Firefox despite it being worse than Chrome in many ways. I’m asking people to make sacrifices by taking a principled position. I also recognize most users might not do that, but certainly tech people might!? But maybe I’m wrong here; maybe the new kids don’t care about an open internet.

                                                                                          1. 14

                                                                                            My problem with make is not that it has a bad design. It is not THAT bad when you look at things like CMake (oops, I did not put a troll disclaimer, sorry :P).

                                                                                            But it only has very large implementations, each with a lot of non-POSIX extensions. So if you want a simple tool to build a simple project, you have to have a complex tool, in many cases with even more complexity than the project itself…

                                                                                            So a simple tool (redo), available as 2 implementations in shell script and 1 implementation in Python, does a lot of good!

                                                                                            There is also plan 9 mk(1), which supports evaluating the output of a script as mk input (with the <| command syntax), which removes the need for a configure script (build ./linux.c on Linux, ./bsd.c on BSD…).

                                                                                            But then again, while we are at re-designing things, let’s simply not limit ourselves to the shortcomings of existing software.

                                                                                            The interesting part is that you can entirely build redo as a tiny, tiny shell script (less than 4 kB) that you can then ship along with the project!

                                                                                            There could then be a Makefile with only

                                                                                            all:
                                                                                                ./redo
                                                                                            

                                                                                            So you would (1) have the simple build system you want, (2) have it portable, as it would be a simple portable shell script, and (3) still have make build the whole project.

                                                                                            You may make me switch to this… ;)

                                                                                              1. 1

                                                                                                Nice! So 2 shell, 1 python and 1 C implementation.

                                                                                                1. 5

                                                                                                  There is also an implementation in C++. That site also has a nice Introduction to redo.

                                                                                                  I haven’t used any redo implementation myself, but I’ve been wondering how they would perform on large code bases. They all seem to spawn several processes for each file just to check whether it should be remade. The performance cost of that (not a particularly fast operation) might be prohibitive on larger projects. Does anyone happen to have experience with that?
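
                                                                                                  To get a rough feel for that cost, one can time bare process spawns. A sketch (my own illustration, not tied to any particular redo implementation; it assumes a Unix-like system with a `true` binary on the PATH):

```python
import subprocess
import time

# Time N bare process spawns: roughly the per-file overhead a
# redo-style checker would pay if it forked a helper per target.
N = 200
start = time.perf_counter()
for _ in range(N):
    subprocess.run(["true"], check=True)
elapsed = time.perf_counter() - start
per_spawn_ms = elapsed / N * 1000
print(f"{per_spawn_ms:.2f} ms per spawn")
```

                                                                                                  Even at around a millisecond per spawn, a no-op check over a few thousand targets adds whole seconds, which is where the concern about larger projects comes from.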

                                                                                                  1. 1

                                                                                                    The performance cost of that (not a particularly fast operation) might be prohibitive on larger projects. Does anyone happen to have experience with that?

                                                                                                    No experience, but from the article:

                                                                                                    Dependencies are tracked in a persistent .redo database so that redo can check them later. If a file needs to be rebuilt, it re-executes the whatever.do script and regenerates the dependencies. If a file doesn’t need to be rebuilt, redo can calculate that just using its persistent .redo database, without re-running the script. And it can do that check just once right at the start of your project build.

                                                                                                    Since building the dependencies is usually done as part of building a target, I think this probably isn’t even a significant problem on the initial build (where the time is going to be dominated by actual building). OTOH I seem to recall that traditional make variants do some optimisation where they run commands directly, rather than passing them to a shell, if they can determine that the commands do not actually use shell built-ins (not 100% sure this is correct, memory is fallible etc.). The cost of just launching the shell might be significant if you have to do it a lot, I guess.
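
                                                                                                    The check the article describes can be sketched as a single pass over a saved database. A toy model (my own simplification: a plain mtime comparison against recorded values; real redo implementations store richer data in their .redo database):

```python
import os

# Toy redo-style dependency database:
# target -> {dependency path: mtime recorded at last build}
db = {
    "prog": {"prog.c": 0.0, "prog.h": 0.0},
}

def needs_rebuild(target, db):
    """Answer from the saved database alone, without re-running
    any .do script, whether a target is out of date."""
    deps = db.get(target)
    if deps is None or not os.path.exists(target):
        return True  # never built, or the output is missing
    return any(
        not os.path.exists(dep) or os.path.getmtime(dep) != saved
        for dep, saved in deps.items()
    )

# A single pass at the start of the build answers the question
# for every target; .do scripts only run for the stale ones.
stale = [t for t in db if needs_rebuild(t, db)]
```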

                                                                                                2. 3

                                                                                                  The biggest problem with Make (imo) is that it is almost impossible to write a large correct Makefile. It is too easy for a dependency to exist, but not be tracked by the Make rules, thus making stale artefacts a problem.

                                                                                                  1. 1

                                                                                                    I had given serious thought to using LD_PRELOAD hooks to detect all dependencies dynamically (and identify e.g. dependencies which hit the network), but never got around to trying it.

                                                                                                    Anyone heard of anything trying that approach?

                                                                                                  2. 2

                                                                                                    Why this obsession with “simple tools for simple projects” though? Why not have one scalable tool that works great for any project?

                                                                                                    (Yeah, CMake is not that tool. But Meson definitely is!)

                                                                                                    1. 3

                                                                                                      Because I wish all my projects to be kept simple. Then there is no need for a very powerful tool to build them.

                                                                                                      On the other hand, if you already need a complex tool to do some job, adding another simple tool sums up the complexity of both, as you will now have to understand and maintain both!

                                                                                                      If we aim for the simplest tool that can cover all the situations we face, we will end up with different tools according to what we expect.

                                                                                                      1. 3

                                                                                                        Meson isn’t a simple tool; it requires the whole Python runtime in order to even run --help.

                                                                                                        CMake is a lot more lightweight.

                                                                                                        1. 4

                                                                                                          Have you appreciated how huge CMake actually is? I know I had problems compiling it on an old machine, since it required something like a gigabyte of memory to build: a two-stage build that took its precious time.

                                                                                                          CMake is not lightweight, and that’s not its strong suit. On the contrary, it’s good at having everything but the kitchen sink and being considerably flexible (unlike Meson, which has simplicity/rigidity as a goal).

                                                                                                          1. 2

                                                                                                            CMake is incredibly heavyweight.

                                                                                                          2. 1

                                                                                                            I would like to see how it would work out with different implementations and how “stable” Meson is as a language.

                                                                                                            1. 1

                                                                                                              Meson is nice, but sadly not suitable for every project. It has limitations that prevent some from using it, limitations neither redo nor autotools have. Such as putting generated files in a subdirectory (sounds simple, right?).

                                                                                                          1. 12

                                                                                                            The two role models I think of immediately are Julia Evans and Simon Peyton Jones. They are both clearly extremely knowledgeable, really good at explaining things and – perhaps most importantly – readily admit ignorance. I aspire to do that myself, too, and I suspect part of the reason that Julia and Simon seem to know so much is that they’ll say “I don’t know!”.

                                                                                                            I saw a talk by Simon Peyton Jones last month about linear types in Haskell, and someone from the audience asked “why is it called linear types?” to which Simon – without hesitation – replied something to the effect of “I don’t know!”. For some reason it was very nice seeing such a knowledgeable person unabashedly admit ignorance on a topic close to his area of expertise.

                                                                                                            Some examples from Julia’s writing:

                                                                                                            I’m not going to go into how you read that info right now because frankly I don’t know.

                                                                                                            It’s not completely clear to me under what circumstances having swap on a computer at all even makes sense. It seems like swap has some role on desktop computers.

                                                                                                            I was going to say that this isn’t how it works on Linux. But! I went and looked at the docs and apparently there is a posix_spawn system call that does basically this. Shows what I know. Anyway, we’re not going to talk about that.

                                                                                                            1. 3

                                                                                                              I also look up to Simon, and exactly for those reasons. In that same talk he starts by saying “I usually don’t understand types, I’m very bad with them”. That straightforward admission of ignorance is something I strive to achieve.

                                                                                                            1. 6

                                                                                                              you have more opportunities to encounter a longer interval than to encounter a shorter interval. And so it makes sense that the average span of time experienced by riders will be longer than the average span of time between buses, because the longer spans are over-sampled.

                                                                                                              This is very obvious now that I read it, but I had never thought about it in that way before.
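
                                                                                                              The over-sampling is easy to reproduce numerically. A sketch (my choice of exponentially distributed gaps with a 10-minute mean; the argument doesn’t depend on the specific distribution):

```python
import random

random.seed(0)

# Gaps between consecutive buses, mean 10 minutes.
gaps = [random.expovariate(1 / 10) for _ in range(100_000)]
mean_gap = sum(gaps) / len(gaps)

# A rider arriving at a uniformly random time lands in a gap with
# probability proportional to its length, so the gap a rider
# experiences is the length-biased average E[X^2] / E[X].
rider_gap = sum(g * g for g in gaps) / sum(gaps)

print(f"average gap between buses: {mean_gap:.1f} min")
print(f"average gap a rider is in: {rider_gap:.1f} min")
```

                                                                                                              For exponential gaps the rider’s average comes out at roughly twice the schedule’s average, which is the over-sampling of long intervals in action.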

                                                                                                              1. 4

                                                                                                                Happy to see that single-user mode is now the default on macOS. This means I won’t have to de-multi-user my nix installs anymore: https://gist.github.com/ehamberg/68ff4615f95c1acec8e7b6d83196d2b2 :)

                                                                                                                1. 3

                                                                                                                  There are a number of benefits to using the daemon, even on a machine with one user. While I personally think it’s important that the default installation method is easy and straightforward for new users, anybody who’s serious about nix and uses it for more than just installing cached builds from nixpkgs should probably be using the daemon.

                                                                                                                1. 2

                                                                                                                  I have used iOS for 6–7 years now and I know about the feature, but on the few occasions I wanted to use it, I very carefully shook my phone from side to side, fearing that people around me would think I was crazy if I actually shook my phone.

                                                                                                                  Not sure I ever actually successfully triggered the undo action. :|

                                                                                                                  1. 4

                                                                                                                    What a curious way to announce this much awaited new Elm release. Does anyone here know more about the ideas behind that? I’d have expected some kind of public beta and a proper release announcement…

                                                                                                                    1. 4

                                                                                                                      Yeah, it’s a bit…different, but it looks like picking and highlighting one feature is what was done for previous releases as well: http://elm-lang.org/blog

                                                                                                                      1. 2

                                                                                                                        Especially given the “is Elm dead?” questions that have been popping up in the past few months. I guess it’s better to be head-down working on the next release, but I think just a little more communication or visibility into the project might have helped alleviate some of the concerns.

                                                                                                                        1. 3

                                                                                                                          This topic was addressed by Evan (creator of Elm) in his recent talk at Elm Europe 2018 titled: “What is success?”

                                                                                                                          1. 2

                                                                                                                            So I watched the video, and this is addressed around the 41-minute mark: “There’s pressure on me to always be saying everything that’s going on with Elm development, and the trouble is that it’s not always very interesting… it’s like… ‘still working’”.

                                                                                                                            I think “still working” would have been better, though. I don’t think anyone expected weekly updates. Every 2 months updating the Github readme with “still working” would have been fine. And the fear that saying you’re working on X and then it doesn’t pan out, so better to not say anything at all, seems like the worse option.

                                                                                                                            I also think the talk is a little dismissive of Javascript, and the community. Sure, the number of packages is by no means the be-all of a good language ecosystem, but it says something about the platform and its viability. If nothing else, it means there are alternatives within the ecosystem. People have limited time, and very limited time to invest in learning brand new things, so they naturally look for some way to compare the opportunities they have. Is looking at numbers the ideal behaviour? Maybe not, but if I want to sell Elm to my boss and she asks me when the last release was and I say “18 months ago” and she asks if I know when the next one will be and I say “no”… that’s how languages don’t get adopted and ecosystems don’t grow.

                                                                                                                            As a complete outsider, but also as someone who wants Elm to succeed, I think community management is something they need to take really seriously. It seems like Evan really doesn’t want to do it, so fine, have someone else do it. You can dislike that there are persistent questions about the future of your project, but they’re best addressed at the time, not left unanswered.

                                                                                                                            1. 3

                                                                                                                              Personally, I’m not really convinced by those arguments.

                                                                                                                              I especially don’t understand why 18 months since last release, and no known date of new release, are arguments against adoption of the language. Take C or C++ — they rarely have new releases. Is this an argument against adoption? I don’t think so; actually, more like for adoption in my opinion! Slow pace of releases can mean that the languages are mature and stable. I’d be really surprised and annoyed by a boss who would think otherwise.

                                                                                                                              It just occurred to me that maybe Lua is a good example of a language with a development mode similar to Elm’s. It has also evolved behind super tightly closed doors. And new versions are usually dumped on the community out of the blue, though typically with public betas & RCs. But those are published only for flushing out bugs; language design input is mostly not taken into account. AFAIK, the community is generally OK with this. And the language is totally used and relied upon in numerous niches in the industry (including a large one in game development)!

                                                                                                                              1. 5

                                                                                                                                “Elm” includes the language specification and the compiler.

                                                                                                                                The C language specification rarely has new releases, but the C compiler, gcc, has 4 releases per year. There would be major concern from the community and your boss if gcc activity was perceived as drying up.

                                                                                                                                1. 1

                                                                                                                                  Ah; good one, never thought of it this way; big thanks for pointing this out to me!

                                                                                                                                2. 2

                                                                                                                                  Take C or C++ — they rarely have new releases

                                                                                                                                  C and C++ have been mature and in very wide use for decades, where Elm is a very young language - just a few years old. Same with Lua, it’s been in widespread use for, what, 10 years or more? I think that’s the difference. Elm is still much more of an unknown quantity.

                                                                                                                                  Slow pace of releases can mean that the languages are mature and stable

                                                                                                                                  Sure - when the language is mature and stable. I don’t think anyone would consider Elm to be that way: this new release, if I understand correctly, breaks every package out there until they’re upgraded by their maintainer.

                                                                                                                                  1. 3

                                                                                                                                    Personally, after some initial usage, I actually have a surprising impression of Elm being in fact mature. It kind of feels to me like an island of sanity and stability in the ocean of the JS ecosystem… (Again, strictly personal opinion; please forgive me should you find this offensive.) I didn’t realize this sentiment so strongly until writing these words here, so I’m also sincerely curious whether this could be a sign of me not knowing Elm well enough to have stumbled upon some warts. Hmh, and for a somewhat more colourful angle, you know what they say: old doesn’t necessarily mean mature, and conversely ;P

                                                                                                                                    And — by the way — notably, new releases of Lua actually do also infamously tend to break more or less every package out there :P Newbies tend to be aggravated by this, veterans AFAIU tend to accept it as a cost that enables major improvements to the language.

                                                                                                                                    That said, I think I’m starting to grasp what you’re trying to tell me, especially the phrase about “unknown quantity”. Still, I think it’s rare for a language to become “corporate grade non-risky”. But then, as much as, say, C++ is a “known quantity”, to me it’s especially “known” for being… finicky.

                                                                                                                            2. 2

                                                                                                                              Yeah the last release was in Nov 2016.

                                                                                                                              1. 1

                                                                                                                                The devs are active on https://discourse.elm-lang.org/, which might help people see the project activity.

                                                                                                                              2. 1

                                                                                                                                Since they recently disallowed using JavaScript in Elm packages, it only makes sense that they’d lead with what that had won them, i.e. function-level dead code elimination.

                                                                                                                              1. 6

                                                                                                                                The Typeclassopedia is such a great resource!

                                                                                                                                If anyone wants a PDF or EPUB version, I maintain a Pandoc markdown version at https://github.com/ehamberg/typeclassopedia-md (go to releases to download a ready-made PDF or EPUB file).

                                                                                                                                1. 12

                                                                                                                                  I recommend the episode of DevOps Cafe with Kelsey Hightower discussing this complexity.

                                                                                                                                  The whole idea is that in the enterprise world, processes and workloads are different; no 2 companies have the same constraints, and k8s answers this with much complexity and flexibility.

                                                                                                                                  In this episode, John Willis wishes that something as simple as a docker-compose file would be enough to describe applications, a wish that Kelsey answers with a few examples of popular demands that cannot be expressed with a compose file at all.

                                                                                                                                  I strongly recommend the podcast and this episode.

                                                                                                                                  1. 13

                                                                                                                                    Link to the episode for the lazy: https://overcast.fm/+I_PQGD1c

                                                                                                                                  1. 4

                                                                                                                                    Isn’t this from 2002? If so that’s probably important context to be aware of. :)

                                                                                                                                    https://scholar.google.com/scholar?cluster=15142864505292191490

                                                                                                                                    There is a “what happened?” discussion on Reddit from five years ago here.