1. 0

    I use kitty on Manjaro; it’s pretty excellent.

    1. 5

      This article would make a whole lot more sense if its title claimed that the demise is not that of RSS, but rather of end-users’ direct consumption of syndicated feeds via a dedicated RSS reader program. RSS will probably never die for any kind of automated syndication because it’s a robust standard and is sufficiently extensible to accommodate virtually every use case within the context of sharing the title, summary, and metadata of a document.

      I don’t remember the last time I used an RSS reader, including Feedly or something sitting on a sleeping Heroku dyno. But just a couple of weeks ago I added a series of RSS feeds for a variety of services like GitHub, Travis, etc. to a Slack room meant to aggregate service status.

      1. 1

        I agree.

        While this article is an excellent recap of the Syndication Wars (in which I was active on the sidelines), it also tries to resurrect a vaguely Utopian ideal of widely decentralized blogs that all shared stuff via RSS and it would end hunger and cure cancer and give us ponies. The same hope was expressed for stuff like The Well back in the day.

        1. 1

          I follow RSS news feeds via Mastodon. I ended up writing a bot to watch feeds and post them in the timeline. I find it works pretty well; I just have a pinned news column that I browse through.
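
          In case it’s useful, the rough shape of such a bot looks like this (a minimal sketch, assuming the feedparser and Mastodon.py libraries; the token, instance URL, and feed URL are placeholders, and a real bot needs persistence and error handling):

          ```python
          # Minimal sketch: poll a feed and post new entries to the timeline.
          # Assumes feedparser and Mastodon.py; credentials/URLs are placeholders.
          import time
          import feedparser
          from mastodon import Mastodon

          masto = Mastodon(access_token="YOUR_TOKEN", api_base_url="https://example.social")
          FEED_URL = "https://example.com/news.xml"
          seen = set()  # a real bot should persist this between runs

          while True:
              for entry in feedparser.parse(FEED_URL).entries:
                  if entry.link not in seen:
                      seen.add(entry.link)
                      masto.status_post(f"{entry.title}\n{entry.link}")
              time.sleep(300)  # poll every five minutes
          ```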

        1. 1

          And it only takes 15 seconds for the answer to appear…

          I suppose this would be more appealing to hardcore lisp lovers, but looking at the problem I could solve this in about five lines of extremely straightforward lua and it would execute instantly.

          1. 2

            You’re confusing the startup time of the embedded ClojureScript compiler on the blog page with the actual performance of the library, which will execute instantly if you run it using your local runtime. For example, running it on the JVM I get: “Elapsed time: 0.045081 msecs”.

            1. 1

              Clojure intends users to serialize most data in maps, leading to this specific problem.

            1. 8

              I feel like these points support the Rich-as-a-great-man theory, as noted in the conclusion. His valuing stability seems to lead to those decisions. If those other languages could get a Rich, they’d have made Clojure. It seems fine to have different values. I agree that making different decisions around development model and static typing makes those things harder, and that those are interesting trade-offs, but I find the framing of the whole thing strange.

              1. 5

                Is it really inconceivable that an equally great “Rich” would create a curly brace language with static types that requires more tweaking?

                Edit: I think the whole “great man” discussion is a distraction. I think this language arose out of Rich’s personal vision and decisions, so it’s quite plausible to think he meets Carlyle’s definition–the primary explanation of what Clojure is like is Rich[0]. However, the point that Clojure’s stability is enabled by its status as a dynamic lisp is also right, and more important and interesting than the first point.

                [0] And this tells us nothing about the great man theory writ large! A programming language is not the fate of a continent.

                1. 1

                  Not at all, but they have different values and end up in different places. I hold Jonathan Blow in a similar regard, and Jai looks wildly different and cares about very different things. It is a product of a BDFL type though, and its success or failure will be on his shoulders.

                  I agree that it’s a distraction and basically what I meant by strange is that the post is largely about the “great man” idea and a weird plug for an ideology not well supported by the discussion of the design of Clojure.

                2. 5

                  I think Steve’s entire point is that it’s easy to attribute the success or failure of a particular project to the actions of an individual – but not only is that attribution useless in terms of folks learning from that success, it’s also usually inaccurate and completely subjective. (what is great? etc.) Even more insidious, I think it plays in to the delusions of grandeur I’ve found many engineers to have – the 10x developer myth, ‘mean geniuses’ trend, etc.

                  Even Rich himself never speaks about Clojure in this way (with him being some sort of enlightened genius keeping complexity at bay), so it’s been interesting to see the community run with this “meme”.

                  1. 2

                    It might be useless in terms of learning, since the learning is about concrete technical trade-offs, but I don’t think it’s inaccurate and it doesn’t seem particularly subjective. I also don’t think it’s a myth at all. Is someone like Fabrice Bellard (ffmpeg, qemu) not 10x more productive than average? I don’t really know what the counter argument against the idea of a 10x developer is except that it leads to arrogant people wanting to play the mean genius as you mentioned, which I agree is a bad thing.

                    No one would really speak of themselves that way since it’s extremely taboo but I don’t think that says anything about their impact.

                    1. 1

                      I agree partially; part of this is a semantic argument for me. A more accurate statement instead of 10x might be:

                      “There are certain engineers, who, because of their subject matter expertise and learning experience, are able to be more productive than others in certain areas given equal conditions.”

                      I don’t think anyone would disagree that the above is true. But that’s not what comes to mind (for me at least) when the terms “rockstar engineer” or “10x engineer” are used.

                      Preternatural giftedness in an area happens less than once a generation, and we have no idea how it occurs or how to measure it reliably for that matter. Most of the folks you or I might describe as a 10x engineer are just normal folks with particular interests, the right mentors and teachers, and the time to dedicate to their craft — the 10x theory never accounts for the gifts of knowledge that were given to those folks. What’s more, attributing that productivity to some sort of god given intellect hides the learning process that they had to go through, which I think is most harmful to present to people new to our profession — it engenders something similar to: “oh I could never be like X, they’re just naturally gifted.”

                      Alan Turing was a visionary and left more lasting marks on computing history than most of us will. But he didn’t become that way out of the womb. He worked, he learned, he struggled even, and that combined with being in the right place and working on the right problems at the right time (or the wrong time depending on your view of his persecution) is what brought him success. Casting the story otherwise does a disservice to the journey.

                      EDIT: something that I also didn’t go into is the fact that Clojure is an open source project. Rich hasn’t developed Clojure in isolation for a long time now — and I would strongly doubt that his ideas aren’t influenced by his colleagues, either in review or ideation.

                  2. 2

                    A number of other systems have BDFLs - that Clojure got what it got is due to the system that produced Rich: deep diving enterprise C++ coders in the 90s.

                    Arguably Linux has similar characteristics.

                    1. -3

                      It reads strange in isolation, but perhaps not so much knowing Klabnik’s political views. He’s a communist and Antifa supporter and/or member.

                      1. 2

                        That’s telegraphed when one mentions “a people’s history” :)

                        It’s nevertheless strange. If individuals are to be erased, I would guess that Steve (I mean “the people”) should sign his (actually “ours”, not “his”, right?) blog post with a moniker that avoids the filth of individuality.

                        which emphasizes the army over Napoleon, the worker over the CEO.

                        Early in his career Napoleon massacred a mob of people in Paris. I mean, he gave the order to do so. I don’t think he personally aimed a weapon, but anyways it was his decision, I place less blame on the nameless subordinates. And Napoleon gets more credit than his subordinates for any achievements as well.

                        1. 1

                          That’s the problem with the author’s struggles with great man theory. Leaders are generally more responsible than their followers for actions taken. Movements get blamed for poor actions by mobs of people, in the same light. Could another person have the same ideas and comparable success? Yes, absolutely. But this is one person who actually did, so we respect that.

                        2. 0

                          So what you’re saying is that Klabnik’s a decent human being.

                          1. -8

                            If you support pushing a political agenda under the threat of violence, you are a fascist. Klabnik is a fascist.

                            1. 10

                              this comment and your last are both wrong, pejorative, and off-topic.

                              1. 0

                                Pejorative? Maybe. It’s at least accurate. I’m not just name-calling. I’m not sure “pejorative” is even a bad thing — most users of this forum would be justifiably pejorative of anyone openly white-supremacist.

                                Off-topic? Yeah, my snipe at his Russia visit certainly was. Although his [extremist] political ideas do seep into his technical writing, as another commenter has pointed out.

                                Wrong? No. That I hold an unpopular opinion here is made clear by the ream of downvotes, but Klabnik doesn’t hide the fact that he supports an extremist ideology, and he has at least publicly hinted in the past at engaging in physical violence with those holding opposing political views.

                                Frankly, I think most people downvoting me here are afraid to hold everyone to the same standard.

                                1. 6

                                  A word of advice: you think you’re being downvoted for an unpopular opinion. In fact, you are being downvoted because your opinion is the political equivalent of complaining that “HTML is stupid because it’s based on Clojure”.

                                  No one who has any knowledge of fascism would ever suggest that everyone who “supports pushing a political agenda under the threat of violence” is thereby a fascist. There is some debate about the meaning of fascism, but the statement you made is not within the realm of reasonable debate.

                                  1. 0

                                    Thank you for your advice, but I do not need it. Downvoted or not, I believe I leave this with my dignity intact having called out political extremism regardless of flavour. That’s real equality.

                                    you think you’re being downvoted for…. In fact, you are being downvoted because …

                                    I don’t believe either of us can do more than speculate on the exact motivations for everyone else’s downvoting. One other commenter has made clear they disapprove because my comments are “pejorative”. Yes? So what? What is wrong exactly with disapproving of violent political extremism? I believe it to be ethically worse for anyone to not be disapproving of violent political extremism.

                                    your opinion is the political equivalent of complaining that “HTML is stupid because it’s based on Clojure”.

                                    I’m not sure I follow this analogy. In the context of what you wrote that follows, you might be saying that my argument is fundamentally invalid because we don’t agree on the definition of fascism.

                                    I’m sorry, but if you open up a dictionary, you can’t argue that what I have said earlier is “not within the realm of reasonable debate.”

                                    No one who has any knowledge of fascism would ever suggest that everyone who “supports pushing a political agenda under the threat of violence” is thereby a fascist.

                                    People in public discourse on this topic apparently disagree with you, as does Wikipedia and the dictionary.

                                    There is some debate about the meaning of fascism

                                    Wikipedia doesn’t say there is some debate. Wikipedia says it is highly disputed.


                                    Aside from arguing about the definitions of words — and to move this forward — exactly what term would you like me to use to describe the behaviour that I have observed? What word to you most accurately describes the activity that Antifa [though not exclusively] are engaged in, which is using physical violence to shut down political opponents?

                                    1. 3

                                      Fascism is nationalistic, communism is not. Fascism is anti-egalitarian, while communism appeals to the goal of a classless society. Fascism is anti-materialist, communism is materialist. Fascism identifies itself with tradition, communism does not.

                                      Yet communists encouraged political violence in the pursuit of communism.

                                      I acknowledged that there is not an exact, agreed definition of fascism[0]. My claim was that there is essentially universal agreement that what you said is not a definition, and that can be seen from the fact that communism is not fascism, but meets your definition. Where, in wikipedia or the dictionary, or any academic source, is fascism defined as “any use of violence for a political agenda?”


                                      If you want a term, “political violence” or “extremism” could work. Most any word will do better than “fascism”.

                                      [0] I do think that the definitions in that Wikipedia article are like Venn diagrams with 80% overlap–the precise boundaries are not agreed upon, but there is a lot of common ground.

                                      1. -1

                                        Where, in wikipedia or the dictionary, or any academic source, is fascism defined as “any use of violence for a political agenda?”

                                        Emphasis mine:

                                        A political philosophy, movement, or regime (such as that of the Fascisti) that exalts nation and often race above the individual and that stands for a centralized autocratic government headed by a dictatorial leader, severe economic and social regimentation, and forcible suppression of opposition

                                        https://www.merriam-webster.com/dictionary/fascism

                                        I won’t debate this further, and I’m happy to concede drawing the same distinctions you do. After all, it’s the ideals that I find more important than how we label them.

                                  2. 3

                                    Do I correctly understand you to be saying:

                                    1. Anti-fascists are the real fascists, and

                                    2. We’re all too afraid to treat anti-fascists as fascists?

                                    1. -1

                                      I don’t think that maps 1:1 with what I had written, no.

                                      1. Anti-fascists are the real fascists

                                      “Anti-fascists are the real fascists” seems to imply that I’m weighing up one wing of political extremism as worse than the other. I certainly don’t see that kind of comparison as constructive. Of course, I may just be interpreting the sentiment of your summary incorrectly. As I had written though, I believe everyone should be held to the same standard. Pushing a political ideology by threat of violence is never ok.

                                      1. We’re all too afraid to treat anti-fascists as fascists

                                      I didn’t say “all”. The tech community isn’t a total hive-mind, but there are a worrying number of allowances made for certain kinds of violent political extremism. We should be able to call a spade a spade.

                                      1. 3

                                        Okay, noted.

                                        1. Are you alleging that Steve threatens violence?

                                        2. If so, is that on the basis that Steve identifies as an anti-fascist (which I don’t know to be true), or is there some additional basis for it?

                                        1. -1

                                          Are you alleging that Steve threatens violence?

                                          I’m going to be hard-pressed to find a nice self-contained, damning sound bite from him that neatly exposes exactly this. They’re smarter than to do that, which is why they conceal their identities when they attack people. I at least take this as a pretty good hint of what he’s up to.

                                          Given that he openly supports Antifa, and given that Antifa’s modus operandi is to use physical violence to silence political opponents †, I don’t think it’s unfair to make the short logical leap that he is in support of physical violence for political ends.

                                          To put it somewhat more flippantly — which sometimes is illuminating — none(?) of us would accept someone today openly supporting Nazism, even if they didn’t openly say “I think we need to put some people in ovens.”

                                          If so, is that on the basis that Steve identifies as an anti-fascist (which I don’t know to be true), or is there some additional basis for it?

                                          I think I addressed this in the first part of this comment, but to summarise: yes.

                                          † Whether those political opponents are extremists or not is up for debate — to some ardent followers, having voted for Trump is damning enough. Personal full disclosure: I am not a US voter, so I have no dog in that fight. I would classify my own political leanings as left-wing liberal, and my [recent] ancestors have suffered through both Nazism and Communism.

                                          1. 4

                                            I appreciate your answering my questions. To the extent that your answers clarified, I appreciate that as well.

                                            If Steve were to sue you for defamation, under California law a jury would decide whether you were making any assertion of fact. Here on lobste.rs, it is my opinion that anyone reading the thread could reasonably have concluded you were, at the least, strongly suggesting that Steve endorses or advocates violence on the part of individuals.

                                            Of course, on close reading, another interpretation could be that you merely believe that opposition to fascism is fundamentally rooted in violence because the rule of law is fundamentally rooted in violence (by the state). However, I don’t believe this was your point and I don’t intend to entertain it.

                                            Your position is a very serious allegation, and the manner in which you raised it was completely inappropriate. In the future, if you are to remain on this forum, either refrain from making allegations of that nature, or make them directly as assertions of fact with clear meanings that don’t have to be teased out of you. Also, make sure that you are not negligent in determining the truth of such claims.

                                            1. 0
                            2. -9

                              I realise now that Klabnik visited Russia recently. On the Russian visa application form, it asks:

                              Have you ever by any means publicly expressed views that justify or glorify terrorist or extremist activities?

                              Does that mean he lied on his application? 🤔

                          1. 4

                            I agree that this is newsworthy, but the lobste.rs story I think we all (at least @friendlysock, @JordiGH and myself) want to see has the link to the repo (with the license) or at least details on how it’s being open sourced.

                            1. 4

                              It looks like they’re still in the process of releasing their IP; there are more details on the specifics of what’s being released on the MIPS Open site. I haven’t found the specifics on the actual license yet though, so I guess we have to wait until the code is released for that.

                              1. 2

                                Though it looks like it might be a non-copyleft license; from the FAQ:

                                If our company builds a MIPS Open implementation, is it required to release its source code for the MIPS core?

                                No, the company can develop and maintain proprietary source code for any core it developed using the MIPS Open architectures.

                                1. 2

                                  Yeah, I’m expecting something like MIT is more likely.

                            1. 3

                              The authors released the source (https://github.com/rtqichen/torchdiffeq), and there’s a follow-up paper applying this to generative density modeling (https://arxiv.org/abs/1810.01367), with code here: https://github.com/rtqichen/ffjord

                              1. 2

                                Yes, and I hope no one thinks that. Roughly, unit tests check if the implementation matches the intent and integration tests check if the different implementations work together to match the intent.

                                Your points 1 - 3 typically involve no implementation - they are product identification and development. When we iterate on a solution the boundaries blur a bit as we learn more about the technology as applied to our use case, but we should largely keep them separate.

                                There are ways to involve testing in 1-3 but these are typically very human-driven, though I think for 3 there are formal methods which require one to, basically, write code (TLA, for example: https://en.wikipedia.org/wiki/Formal_methods).
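
                                To make that distinction concrete, a toy sketch (invented example in Python, purely illustrative):

                                ```python
                                # Toy example: a unit test checks one piece against the intent;
                                # an integration test checks that the pieces work together.
                                def parse_price(text):
                                    # Parse a price given in cents, e.g. " 250 " -> 2.5.
                                    return int(text.strip()) / 100

                                def total(prices):
                                    # Sum already-parsed prices.
                                    return sum(prices)

                                def test_parse_price_unit():
                                    # Unit: does parse_price alone match the intent?
                                    assert parse_price(" 250 ") == 2.5

                                def test_checkout_integration():
                                    # Integration: do parsing and summing cooperate as intended?
                                    assert total(parse_price(p) for p in ["100", "250"]) == 3.5
                                ```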

                                1. 1

                                  Part of the point in my article is that neither unit tests nor integration tests actually validate that the code matches the intent.

                                  1. 2

                                    You can do specification testing [PDF] where you create a specification for the intent of the code and write tests against the specification.
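
                                    As a rough illustration of the idea (a sketch only, using Python and the hypothesis library for property-style checks; the linked paper may use a different formalism), the specification lives apart from the implementation and the tests check the implementation against it:

                                    ```python
                                    # Sketch: the "specification" is a set of properties, written
                                    # independently of the implementation, and the tests check the
                                    # implementation against those properties.
                                    from hypothesis import given, strategies as st

                                    def my_sort(xs):
                                        # Implementation under test (stand-in).
                                        return sorted(xs)

                                    @given(st.lists(st.integers()))
                                    def test_spec_output_is_ordered(xs):
                                        ys = my_sort(xs)
                                        assert all(a <= b for a, b in zip(ys, ys[1:]))

                                    @given(st.lists(st.integers()))
                                    def test_spec_output_is_permutation(xs):
                                        assert sorted(my_sort(xs)) == sorted(xs)
                                    ```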

                                    1. 1

                                      Perhaps we use “intent” in different ways. To me “compute this given function” is an intent. Whether this function is the correct solution to the problem of “avoid pedestrians at the crossing” is an engineering specifications issue.

                                      1. 1

                                        That’s maybe true, at least for code in the large (not for small functions, I’d argue), but let’s say that you write some new code, with good test coverage, and all of that is reviewed well, and both you and the reviewer agree that yes, the code is correct, and yes, the tests are correctly testing that the code is correct. Then you have QA test it, and Product sign off on the functionality. Everyone is happy. Aren’t you now in a situation where the tests do “actually validate that the code matches the intent”?

                                        1. 1

                                          Right, which is one of the reasons tests are useful.

                                          1. 1

                                            you and the reviewer agree that yes the code is correct

                                            If this were the case, is there any need for the test? The code was validated without tests already.

                                            yes the tests are correctly testing that the code is correct

                                            I don’t believe it. Maybe for a small subset of all code out there (e.g. simple, pure functions with bounded size & time) it is possible for a test to show that it is correct. For the rest.. show me a test, I show you subtle race conditions, resource exhaustion and undefined behavior.

                                            1. 1

                                              For the rest.. show me a test, I show you subtle race conditions, resource exhaustion and undefined behavior.

                                              Ok, that’s a very broad claim and I don’t really buy it, but allow me to qualify my statement: “yes the tests are correctly testing that the code is correct under any real-world scenarios we are likely to encounter, or within our tolerance for risk”. Good enough is, often times, good enough.

                                      1. 19

                                        The best thing about Electron is that Linux is finally becoming a first class platform for desktop apps. Slack, Git Kraken, Atom, Mailspring and so on likely would’ve never seen the light of day on Linux if not for Electron. Electron drastically lowers the barrier for writing and maintaining cross-platform applications, and I think that far outweighs its disadvantages. I don’t really see any insurmountable problems with Electron that can’t be addressed in the long run as the adoption grows.

                                        The reality is that maintaining multiple UIs for different platforms is incredibly expensive, and only a few companies have the resources to dedicate separate development teams for that. The value of having a common runtime that works reasonably well on all platforms can’t be overstated in my opinion. This is especially important for niche platforms like Linux that were traditionally overlooked by many companies.

                                        1. 24

                                          I think a more accurate description would be that Electron makes every platform second class.

                                          It is certainly more egalitarian and even an improvement for platforms previously overlooked, but better than before is not necessarily good.

                                          1. 2

                                            On the other hand, if the web stack becomes the standard then all the platforms improve together in the long run.

                                          2. 15

                                            In a better universe there would be no reason to maintain cross-platform apps. Ideally we would use independently maintained platform tailored apps talking to common protocols. Like, a hypothetical Ubuntu-native VoIP app that could talk to Skype on Windows. Protocols as the point of commonality is far more desirable than a UI toolkit, because a common UI toolkit means that every app works in its own peculiar way on every platform, which sucks.

                                            Unfortunately, we’re living in this universe…

                                            1. 4

                                              Most people prefer applications that are unconstrained by stagnant standards. For example, consider Slack or Discord versus IRC, or web forums versus newsgroups or mailing lists.

                                              At least when the applications are open-source and API-driven, there’s hope for alternative clients for those who want or need them.

                                              1. 2

                                                Most people prefer applications that are unconstrained by stagnant standards.

                                                That’s an interesting thought, thanks.

                                                Although I still think that a single entity evolving a standard is better than every app inventing their own UI conventions.

                                          1. 3

                                            Scheme sounds like it’s pretty close to what the author is describing. It’s a very small and simple core that you can build on, and even create mini languages on top of. The macro system allows you to pretty much express any idea without having to change the core language. As a bonus the runtime is interactive allowing you to explore ideas and see the results immediately.

                                              1. 7

                                                 I don’t think this kind of article is productive, since the two sides use different meanings of the same terms. That can be a good thing, but not in debates where one seeks agreement. My hypothesis is that there are at least two definitions of typed/untyped at play:

                                                1. The author and people on that side who are looking at it in a formal way based on academically-accepted definitions of the words.

                                                 2. Most of the people saying the programs are untyped probably mean they’re not “explicitly” typed: they don’t require extra type annotations, don’t limit the structure of their programs, and so on.

                                                 If so, then No. 2 is a popular definition of “untyped.” It’s academically incorrect, but the masses are on board with it. These articles might correct a misuse of the definition. They probably won’t convert the No. 2 crowd to wanting type systems, though, since the articles don’t address what those people really like about “untyped” languages. Examples that did are Strongtalk and Typed Racket, which tried to get the benefits of RAD-style languages and stronger typing.

                                                1. 9

                                                   I don’t think the article is trying to convince people to use static typing. What it says is that we should talk about different approaches to enforcing invariants about programs. These can come in the form of static types, contracts, tests, and so on. The dichotomy of static and dynamic typing is naive, and it distracts from the actual purpose of having a specification.

                                                  1. 5

                                                     I like that interpretation and goal. Plus, increasing awareness of the concept and importance of invariants is good in itself. Especially since people will build more reliable stuff. Maybe even upgrade into high assurance. :)

                                                  2. 2

                                                    They probably won’t convert people…

                                                    The author says:

                                                    When I’m hacking, I write in “untyped” languages. I write programs in Racket, scripts in bash, plugins and tools in JavaScript, papers in latex, build systems in Makefile, and so on.

                                                    So I don’t think “converting” people is the goal of this article.

                                                  1. 81

                                                     I beg all my fellow crustaceans to please, please use Firefox. Not because you think it’s better, but because it needs our support. Technology only gets better with investment, and if we don’t invest in Firefox, we will lose the web to Chrome.

                                                    1. 59

                                                      Not because you think it’s better

                                                      But that certainly helps too. It is a great browser.

                                                      • privacy stuff — the cookie container API for things like Facebook Container, built-in tracker blocker, various anti-fingerprinting things they’re backporting from the Tor Browser
                                                      • honestly just the UI and the visual design! I strongly dislike the latest Chrome redesign >_<
                                                      • nice devtools things — e.g. the CSS Grid inspector
                                                      • more WebExtension APIs (nice example: only on Firefox can Signed Pages actually prevent the page from even loading when the signature check fails)
                                                      • the fastest (IIRC) WASM engine (+ now in Nightly behind a pref: even better codegen backend based on Cranelift)
                                                      • ongoing but already usable Wayland implementation (directly in the official tree now, not as a fork)
                                                      • WebRender!!!
                                                      1. 7

                                                        On the other hand, WebSocket debugging (mostly frame inspection) is impossible in Firefox without an extension. I try not to install any extensions that I don’t absolutely need and Chrome has been treating me just fine in this regard[1].

                                                        Whether or not I agree with Google’s direction is now a moot point. I need Chrome to do what I do with extensions.

                                                        As soon as Firefox supports WebSocket debugging natively, I will be perfectly happy to switch.

                                                        [1] I mostly oppose extensions because of questionable maintenance cycles. I allow uBlock and aXe because they have large communities backing them.

                                                        1. 3

                                                          Axe (https://www.deque.com/axe/) seems amazing. I know it wasn’t the focus of your post – but I somehow missed this when debugging an accessibility issue just recently, I wish I had stumbled onto it. Thanks!

                                                          1. 1

                                                            You’re welcome!

                                                            At $work, we used aXe and NVDA to make our webcomponents AA compliant with WCAG. aXe was invaluable for things like contrast and missing role attributes.

                                                          2. 3

                                                            WebSocket debugging (mostly frame inspection) is impossible in Firefox without an extension

                                                            Is it possible with an extension? I can’t seem to find one.

                                                            1. 1

                                                              I have never needed to debug WebSockets and see no reason for that functionality to bloat the basic browser for everybody. Too many extensions might not be a good thing but if you need specific functionality, there’s no reason to hold back. If it really bothers you, run separate profiles for web development and browsing. I have somewhat more than two extensions and haven’t had any problems.

                                                              1. 1

                                                                I do understand your sentiment, but the only extension that I see these days is marked “Experimental”.

                                                                On the other hand, I don’t see how it would “bloat” a browser very much. (Disclaimer: I have never written a browser or contributed to any. I am open to being proved wrong.) I have written a WebSockets library myself, and it’s not a complex protocol. It can’t be too expensive to update a UI element on every (websocket) frame.

                                                            2. 5

                                                               Yes! I don’t know about you, but I love the fact that Firefox uses so much less RAM than Chrome.

                                                              1. 2

                                                                This was one of the major reasons I stuck with FF for a long time. It is still a pronounced difference.

                                                              2. 3

                                                                honestly just the UI and the visual design! I strongly dislike the latest Chrome redesign >_<

                                                                Yeah, what’s the deal with the latest version of Chrome? All those bubbly menus feel very mid-2000’s. Everything old is new again.

                                                                1. 3

                                                                  I found a way to go back to the old ui from https://www.c0ffee.net/blog/openbsd-on-a-laptop/ (it was posted here a few weeks ago):

                                                                  Also, set the following in chrome://flags:

                                                                  • Smooth Scrolling: (personal preference)
                                                                  • UI Layout for the browser’s top chrome: set to “Normal” to get the classic Chromium look back
                                                                   • Identity consistency between browser and cookie jar: set to “Disabled” to keep Google from hijacking any Google login to sign you into Chrome
                                                                  • SafeSearch URLs reporting: disabled

                                                                  (emphasis mine)

                                                                  1. 1

                                                                    Aaaaaaaand they took out that option.

                                                                2. 1

                                                                  The Wayland implementation is not usable quite yet, though, but it is close. I tried it under Sway, but it was crashy.

                                                                3. 16

                                                                  I switched to Firefox last year, and I have to say I don’t miss Chrome in the slightest.

                                                                  1. 13

                                                                       And those with a little financial liberty, consider donating to Mozilla. They do a lot of important work for a free and open web.

                                                                    1. 10

                                                                      I recently came back to Firefox from Vivaldi. That’s another Chromium/Webkit based browser and it’s closed source to boot.

                                                                      Firefox has improved greatly in speed as of late and I feel like we’re back in the era of the mid-2000s, asking people to chose Firefox over Chrome this time instead of IE.

                                                                      1. 2

                                                                        I’d love to switch from Vivaldi, but it’s simply not an option given the current (terrible) state of vertical tab support in Firefox.

                                                                        1. 2

                                                                          How is it terrible? The hiding of the regular tab bar is not an API yet and you have to use CSS for that, sure, but there are some very good tree style tab webextensions.

                                                                          1. 2

                                                                            The extensions are all terrible – but what’s more important is that I lost the belief that any kind of vertical tab functionality has any chance of long-term survival. Even if support was added now, it would be a constant battle to keep it and I’m frankly not interested in such fights anymore.

                                                                             Mozilla is chasing their idealized “average user” and is determined to push everyone into their one-size-fits-all idea of user interface design – anyone not happy with that can screw off, as far as Mozilla is concerned.

                                                                            It’s 2018 – I don’t see why I even have to argue for vertical tabs and mouse gestures anymore. I just pick a browser vendor which hasn’t been asleep on the wheel for the last 5 years and ships with these features out of the box.

                                                                            And if the web in the future ends up as some proprietary API defined by whatever Google Chrome implements, because Firefox went down, Mozilla has only itself to blame.

                                                                            1. 2

                                                                               The extensions are all terrible – but what’s more important is that I lost the belief that any kind of vertical tab functionality has any chance of long-term survival. Even if support was added now, it would be a constant battle to keep it and I’m frankly not interested in such fights anymore.

                                                                               The whole point of moving to WebExtensions was long-term support. They couldn’t make significant changes without breaking a lot of the old extensions. The whole point was to unhook extensions from the internals so they can refactor around them and keep supporting them.

                                                                              1. 0

                                                                                That’s like a car manufacturer removing all electronics from a car – sure it makes the car easier to support … but now the car doesn’t even turn on anymore!

                                                                                Considering that cars are usually used for transportation, not for having them sit in the garage, you shouldn’t be surprised that customers buy other cars in the future.

                                                                                (And no, blaming “car enthusiasts” for having unrealistic expectations, like it happens in the case of browser users, doesn’t cut it.)

                                                                                1. 3

                                                                                  So you’d rather they didn’t improve it at all? Or would you rather they broke most extensions every release?

                                                                                  1. 3

                                                                                    I’m not @soc, but I wish Firefox had delayed their disabling of old-style extensions in Firefox 57 until they had replicated more of the old functionality with the WebExtensions API – mainly functionality related to interface customization, tabs, and sessions.

                                                                                    Yes, during the time of that delay, old-style extensions would continue to break with each release, but the maintainers of Tree Style Tabs and other powerful extensions had already been keeping up with each release by releasing fixed versions. They probably could have continued updating their extensions until WebExtensions supported their required functionality. And some users might prefer to run slightly-buggy older extensions for a bit instead of switching to the feature-lacking new extensions straight away – they should have that choice.

                                                                                    1. 1

                                                                                       What’s the improvement? The new API was so bad that they literally had to pull the plug on the existing API to force extension authors to migrate. That just doesn’t happen in cases where the API is “good”; developers are usually eager to adopt good APIs and migrate their code.

                                                                                       Let’s not accuse people you disagree with of being “against improvements” – it’s just that the improvements have to actually exist, and in this case the API clearly wasn’t ready. This whole fiasco feels like another instance of CADT-driven development and the failure of management to rein it in.

                                                                                      1. 3

                                                                                        The old extension API provided direct access to the JavaScript context of both the chrome and the tab within a single thread, so installing an XUL extension was disabling multiprocess mode. Multiprocess mode seems like an improvement; in old Firefox, a misbehaving piece of JavaScript would lock up the browser for about a second before eventually popping up a dialog offering to kill it, whereas in a multiprocess browser, it should be possible to switch and close tabs no matter what the web page inside does. The fact that nobody notices when it works correctly seems to make it the opposite of Attention-Deficient-Driven-Design; it’s the “focus on quality of implementation, even at the expense of features” design that we should be encouraging.

                                                                                         The logical alternative to “WebExtension For The Future(tm)” would’ve been to just expose all of the relevant threads of execution directly to the XUL extensions: run-this-in-the-chrome.xul and run-this-in-every-tab.xul, with message passing between them. But at that point, we’re talking about having three different extension APIs in Firefox.

                                                                                        Which isn’t to say that I think you’re against improvement. I am saying that you’re thinking too much like a developer, and not enough like the poor sod who has to do QA and Support triage.

                                                                                        1. 2

                                                                                           Improving the actual core of Firefox. They’re basically ripping out and replacing large components every other release. This would constantly break a large number of plugins. Hell, plugins wouldn’t even work in Nightly. I do agree with @roryokane that they should have tried to improve it before cutting support. The new API is definitely missing many things, but it was the right decision to make for the long-term stability of Firefox.

                                                                                          1. 1

                                                                                             They could have made the decision to axe the old API after extension authors had adopted the new one. The fact that adoption failed so hard that they had to force developers to use the new API speaks for itself.

                                                                                             I’d rather have extensions that I have to fix from time to time than no working extensions at all.

                                                                                  2. 1

                                                                                    Why should Mozilla care that much about your niche use case? They already have a ton of stuff to deal with and barely enough funding.

                                                                                    It’s open source, make your own VerticalTabFox fork :)

                                                                                    1. 3

                                                                                      Eh … WAT? Mozilla went the extra mile with their recent extension API changes to make things – that worked before – impossible to implement with a recent Firefox version. The current state of tab extensions is this terrible, because Mozilla explicitly made it this way.

                                                                                      I used Firefox for more than 15 years – the only thing I wanted was to be left alone.

                                                                                      It’s open source, make your own VerticalTabFox fork :)

                                                                                      Feel free to read my comment above to understand why that doesn’t cut it.

                                                                                      Also, Stuff that works >> open source. Sincerely, a happy Vivaldi user.

                                                                                      1. 2

                                                                                        It’s one of the laws of the internet at this point: Every thread about Firefox is always bound to attract someone complaining about WebExtensions not supporting their pet feature that was possible with the awful and insecure old extension system.

                                                                                         If you care about “non terrible” (whatever that means — Tree Style Tab looks perfect to me) vertical tabs more than anything — sure, use a browser that has them.

                                                                                        But you seem really convinced that Firefox could “go down” because of not supporting these relatively obscure power user features well?? The “average user” they’re “chasing” is not “idealized”. The actual vast majority of people do not choose browsers based on vertical tabs and mouse gestures. 50% of Firefox users do not have a single extension installed, according to telemetry. The majority of the other 50% probably only have an ad blocker.

                                                                                        1. 3

                                                                                          If you’re care about “non terrible” (whatever that means — Tree Style Tab looks perfect to me) vertical tabs more than anything — sure, use a browser that has them.

                                                                                           If you compare the current state of the art of vertical tab extensions, even Mozilla thinks they suck – just compare them to their own Tab Center experiment: https://testpilot.firefox.com/static/images/experiments/tab-center/details/tab-center-1.1957e169.jpg

                                                                                           Picking just one example: Having the navigation bar at a higher level of the visual hierarchy is just wrong – the tab panel isn’t owned by the navigation bar, the navigation bar belongs to a specific tab! Needless to say, all of the vertical tab extensions are forced to be wrong, because they lack the API to implement the UI correctly.

                                                                                           This is what my browser currently looks like, for comparison: https://i.imgur.com/5dTX8Do.png

                                                                                          But you seem really convinced that Firefox could “go down” because of not supporting these relatively obscure power user features well?? The “average user” they’re “chasing” is not “idealized”. The actual vast majority of people do not choose browsers based on vertical tabs and mouse gestures. 50% of Firefox users do not have a single extension installed, according to telemetry. The majority of the other 50% probably only have an ad blocker.

                                                                                           You can only go so far alienating the most loyal users that use Firefox for specific purposes before they stop installing/recommending it to their less technically-inclined friends and relatives.

                                                                                          Mozilla is so busy chasing after Chrome that it doesn’t even realize that most Chrome users will never switch. They use Chrome because “the internet” (www.google.com) told them so. As long as Mozilla can’t make Google recommend Firefox on their frontpage, this will not change.

                                                                                          Discarding their most loyal users while trying to get people to adopt Firefox who simply aren’t interested – this is a recipe for disaster.

                                                                                      2. 1

                                                                                        and barely enough funding

                                                                                        Last I checked they pulled in half a billion in revenue (2016). Do you believe this is barely enough?

                                                                                        1. 2

                                                                                          For hundreds of millions users?

                                                                                          Yeah.

                                                                                    2. 1

                                                                                      At least with multi-row tabs in CSS you can’t dragndrop tabs. That’s about as bad as it gets.

                                                                                    3. 2

                                                                                      Are vertical tabs so essential?

                                                                                      1. 3

                                                                                        Considering the change in screen ratios over the past ten years (displays get shorter and wider), yes, it absolutely is.

                                                                                        With vertical tabs I can get almost 30 full-width tabs on screen, with horizontal tabs I can start fishing for the right tab after about 15, as the tab width gets increasingly smaller.

                                                                                         Additionally, vertical tabs substantially reduce how far the pointer has to travel when selecting a different tab.

                                                                                        1. 1

                                                                                           I still miss them; losing them didn’t cripple me, but it really hurt. The other thing about Tree (not just vertical) tabs that FF used to have was that the subtree was contextual to the parent tree. So, when you opened a link in a background tab, it was opened in a new tab that was a child of your current tab. For things like documentation hunting / research it was amazing and I still haven’t found its peer.

                                                                                      2. 1

                                                                                        It’s at least partially open source. They provide tarballs.

                                                                                        1. 4

                                                                                          https://help.vivaldi.com/article/is-vivaldi-open-source/

The Chromium part is legally required to be open; the rest of their code is more like readable source. Don’t get me wrong, that’s way better than unreadable source but it’s also very wut.

                                                                                          1. 2

                                                                                            Very wut. It’s a weird uneasy mix.

                                                                                            1. 2

                                                                                              that’s way better than unreadable source but it’s also very wut.

I wouldn’t be sure of that. It makes it auditable, but it has legal ramifications should you want to build something like Vivaldi that is free.

                                                                                        2. 8

                                                                                          firefox does not get better with investment, it gets worse.

                                                                                          the real solution is to use netsurf or dillo or mothra, so that webmasters have to come to us and write websites that work with browsers that are simple enough to be independently maintained.

                                                                                          1. 9

                                                                                            Good luck getting more than 1‰ adoption 😉

                                                                                            1. 5

                                                                                              good luck achieving independence from Google by using a browser funded by Google

                                                                                              1. 1

                                                                                                I can achieve independence from Google without using netsurf, dillo, or mothra; to be quite honest, those will never catch on.

                                                                                                1. 2

                                                                                                  can you achieve independence from google in a way that will catch on?

                                                                                                  1. 1

I don’t think we’ll ever get the majority of browser share back into the hands of a (relatively) sane organization like Mozilla, but we can at least get enough people to make supporting alternative browsers a priority. On the other hand, the chances that web devs will ever feel pressured to support the browsers you mentioned are close to nil. (No pun intended.)

                                                                                                    1. 1

                                                                                                      what is the value of having an alternative, if that alternative is funded by google and sends data to google by default?

                                                                                                      1. 1

                                                                                                        what is the value of having an alternative

                                                                                                        What would you like me to say, that Firefox’s existence is worthless? This is an absurd thing to insinuate.

                                                                                                        funded by google

                                                                                                        No. I’m not sure whether you’re speaking in hyperbole, misunderstood what I was saying, and/or altogether skipped reading what I wrote. But this is just not correct. If Google really had Mozilla by the balls as you suggest, they would coerce them to stop adding privacy features to their browser that, e.g., block Google Analytics on all sites.

                                                                                                        sends data to google by default

                                                                                                        Yes, though it seems they’ve been as careful as one could be about this. Also to be fair, if you’re browsing with DNT off, you’re likely to get tracked by Google at some point anyway. But the fact that extensions can’t block this does have me worried.

                                                                                                        1. 1

                                                                                                          i’m sorry if i misread something you wrote. i’m just curious what benefit you expect to gain if more people start using firefox. if everyone switched to firefox, google could simply tighten their control over mozilla (continuing the trend of the past 10 years), and they would still have control over how people access the web.

                                                                                                          1. 1

                                                                                                            It seems you’re using “control” in a very abstract sense, and I’m having trouble following. Maybe I’m just missing some context, but what concrete actions have Google taken over the past decade to control the whole of Mozilla?

                                                                                                            1. 1

Google has pushed through complex standards such as HTTP/2 and new rendering behaviors, which Mozilla implements in order to not “fall behind.” They are able to implement and maintain such complexity due to the funding they receive from Google, including their deal to make Google the default search engine in Firefox (as I said earlier, I couldn’t find any breakdown of what % of Mozilla’s funding comes from Google).

                                                                                                              For evidence of the influence this funding has, compare the existence of Mozilla’s Facebook Container to the non-existence of a Google Container.

                                                                                                              1. 1

                                                                                                                what % of Mozilla’s funding comes from Google

                                                                                                                No word on the exact breakdown. Visit their 2017 report and scroll all the way to the bottom, and you’ll get a couple of helpful links. One of them is to a wiki page that describes exactly what each search engine gets in return for their investment.

                                                                                                                I would also like to know the exact breakdown, but I’d expect all those companies would get a little testy if the exact amount were disclosed. And anyway, we know what the lump sum is (around half a billion), and we can assume that most of it comes from Google.

                                                                                                                the non-existence of a Google Container

                                                                                                                They certainly haven’t made one themselves, but there’s nothing stopping others from forking one off! And anyway, I think it’s more so fear on Mozilla’s part than any concrete warning from Google against doing so.

                                                                                                                Perhaps this is naïveté on my part, but I really do think Google just want their search engine to be the default for Firefox. In any case, if they really wanted to exert their dominance over the browser field, they could always just… you know… stop funding Mozilla. Remember: Google is in the “web market” first & the “software market” second. Having browser dominance is just one of many means to the same end. I believe their continued funding of Mozilla attests to that.

                                                                                                                1. 2

                                                                                                                  It doesn’t have to be a direct threat from Google to make a difference. Direct threats are a very narrow way in which power operates and there’s no reason that should be the only type of control we care about.

                                                                                                                  Yes Google’s goal of dominating the browser market is secondary to their goal of dominating the web. Then we agree that Google’s funding of Firefox is in keeping with their long-term goal of web dominance.

                                                                                                                  if they really wanted to exert their dominance over the browser field, they could always just… you know… stop funding Mozilla.

                                                                                                                  Likewise, if Firefox was a threat to their primary goal of web dominance, they could stop funding Mozilla. So doesn’t it stand to reason that using Firefox is not an effective way to resist Google’s web dominance? At least Google doesn’t think so.

                                                                                                                  1. 1

                                                                                                                    Likewise, if Firefox was a threat to their primary goal of web dominance, they could stop funding Mozilla. So doesn’t it stand to reason that using Firefox is not an effective way to resist Google’s web dominance?

                                                                                                                    You make some good points, but you’re ultimately using the language of a “black or white” argument here. In my view, if Google were to stop funding Mozilla they would still have other sponsors. And that’s not to mention the huge wave this would make in the press—even if most people don’t use Firefox, they’re at least aware of it. In a strange sense, Google cannot afford to stop funding Mozilla. If they do, they lose their influence over the Firefox project and get huge backlash.

                                                                                                                    I think this is something the Mozilla organization were well aware of when they made the decision to accept search engines as a funding source. They made themselves the center of attention, something to be competed over. And in so doing, they ensured their longevity, even as Google’s influence continued to grow.

                                                                                                                    Of course this has negative side effects, such as companies like Google having influence over them. But in this day & age, the game is no longer to be free of influence from Google; that’s Round 2. Round 1 is to achieve enough usage to exert influence on what technologies are actually adopted. In that sense, Mozilla is at the discussion table, while netsurf, dillo, and mothra (as much as I’d love to love them) are not and likely never will be.

                                                                                              2. 3

                                                                                                Just switch to Gopher.

                                                                                                1. 5

                                                                                                  Just switch to Gopher

                                                                                                  I know you were joking, but I do feel like there is something to be said for the simplicity of systems like gopher. The web is so complicated nowadays that building a fully functional web browser requires software engineering on a grand scale.

                                                                                                  1. 3

                                                                                                    yeah. i miss when the web was simpler.

                                                                                                    1. 1

I was partially joking. I know there are new ActivityPub tools like Pleroma that support Gopher, and I’ve thought about adding support to generate/serve Gopher content for my own blog. I realize it’s still kind of a joke within the community, but you’re right about there being something simple about just having content without all the noise.

                                                                                                2. 1

Unless more than (rounded) 0% of people use it for Facebook, it won’t make a large enough blip for people to care. Also, this is how IE was dominant: so much of the web only worked in it.

                                                                                                  1. 1

                                                                                                    yes, it would require masses of people. and yes it won’t happen, which is why the web is lost.

                                                                                                3. 2

I’ve relatively recently switched to FF, but still use Chrome for web dev. The dev tools still seem quite a bit more advanced, and the browser is much less likely to lock up completely if I have a JS issue that’s chewing CPU.

                                                                                                  1. 2

                                                                                                    I tried to use Firefox on my desktop. It was okay, not any better or worse than Chrome for casual browsing apart from private browsing Not Working The Way It Should relative to Chrome (certain cookies didn’t work across tabs in the same Firefox private window). I’d actually want to use Firefox if this was my entire Firefox experience.

                                                                                                    I tried to use Firefox on my laptop. Site icons from bookmarks don’t sync for whatever reason (I looked up the ticket and it seems to be a policy problem where the perfect is the enemy of the kinda good enough), but it’s just a minor annoyance. The laptop is also pretty old and for that or whatever reason has hardware accelerated video decoding blacklisted in Firefox with no way to turn it back on (it used to work a few years ago with Firefox until it didn’t), so I can’t even play 720p YouTube videos at an acceptable framerate and noise level.

I tried to use Firefox on my Android phone. Bookmarks were completely useless, with no way to organize them. I couldn’t even organize them on desktop Firefox and sync them over to the phone, since they just came out in some random order with no way to sort them alphabetically. There was also something buggy with history, where clearing it didn’t quite clear it (pages didn’t show up in history, but links remained colored as visited if I opened the page again) unless I also exited the app, but I don’t remember the details exactly. At least I could use UBO.

                                                                                                    This was all within the last month. I used to use Firefox before I used Chrome, but Chrome just works right now.

                                                                                                    1. 6

                                                                                                      I definitely understand that Chrome works better for many users and you gave some good examples of where firefox fails. My point was that people need to use and support firefox despite it being worse than chrome in many ways. I’m asking people to make sacrifices by taking a principled position. I also recognize most users might not do that, but certainly, tech people might!? But maybe I’m wrong here, maybe the new kids don’t care about an open internet.

                                                                                                  1. 8

No “generic” library or framework I’ve ever seen has been able to deliver 100% re-usability. Even string libraries aren’t entirely reusable; for example, constant-time comparison is required in many security applications, but non-security applications tend to favour raw speed. Of course you could add a flag to make it more generic and re-usable.
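
To make that concrete: a constant-time comparison has to touch every byte instead of returning at the first mismatch, which is exactly the behaviour a speed-oriented string library won’t give you. A minimal Clojure sketch (the function names are mine, not from any particular library; in practice you would reach for something like java.security.MessageDigest/isEqual):

    ;; Early-exit comparison: fast, but its running time leaks the position
    ;; of the first differing byte.
    (defn fast-equals? [^bytes a ^bytes b]
      (java.util.Arrays/equals a b))

    ;; Constant-time comparison: always visits every byte, folding the
    ;; differences together with bit-xor/bit-or so the running time does
    ;; not depend on where (or whether) the inputs differ.
    (defn constant-time-equals? [^bytes a ^bytes b]
      (and (= (alength a) (alength b))
           (zero? (reduce (fn [acc i]
                            (bit-or acc (bit-xor (aget a i) (aget b i))))
                          0
                          (range (alength a))))))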

If you keep adding flags like this for components that are large enough, you end up with so many flags for each different kind of sub-behaviour (or so many variants of the same component) that it becomes unwieldy to use and maintain, and performance will suffer too.

                                                                                                    That’s why “use the right tool for the job” is still great advice, and so is Fred Brooks’ old advice to “build one to throw away, you will anyway” when building something new.

                                                                                                    1. 4

                                                                                                      I’ve always worked toward the “guideline” that an abstraction should shoot to cover 80% of the problem, but should be very easy to “punch through” or “escape” for that last 20%

                                                                                                      If possible, I won’t “add a flag” to support a feature, but will instead try to write the library in a way that allows it to be disabled or skipped when needed. Suddenly the worry of the “perfect abstraction” goes away, and you are left with a library that handles most cases perfectly, and allows another lib or custom code to take over when needed.
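
As a rough illustration of what that can look like (a hypothetical sketch, not any specific library): instead of a boolean flag, the library takes an optional function for the step a caller might need to replace.

    ;; Hypothetical sketch: the library covers the common case with a
    ;; default, and the :format-fn option is the escape hatch.
    (require '[clojure.string :as str])

    (defn default-format [x]
      (str x))

    (defn report
      "Formats each item on its own line. Callers who need something the
       default can't do take over that one step instead of toggling flags."
      [items & {:keys [format-fn] :or {format-fn default-format}}]
      (str/join "\n" (map format-fn items)))

    ;; The 80% case uses the default:
    (report [1 2 3])                             ;; => "1\n2\n3"
    ;; The 20% case punches through it:
    (report [1 2 3] :format-fn #(format "item %03d" %))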

                                                                                                      1. 3

                                                                                                        That’s a very good approach. I also like the opposite approach, which is the “100% solution” to a narrowly (but clearly!) defined problem. The Scheme SRE notation is an example of this, as is the BPF packet filter virtual machine.

                                                                                                        This allows you to make a tradeoff to choose whether a tool fits your needs.

                                                                                                        1. 1

                                                                                                          I’ve always worked toward the “guideline” that an abstraction should shoot to cover 80% of the problem, but should be very easy to “punch through” or “escape” for that last 20%

                                                                                                          I always liked Python’s convention of exposing everything (almost; I’m not sure if the environment of a closure is easily exposed), and using underscores to indicate when something should be considered “private”.

                                                                                                          I emulate this in Haskell by writing everything in a Foo.Internal module, then having the actual Foo module only export the “public API”.
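
Clojure happens to have a similar convention, offered here only as a parallel: defn- marks a var private, but a determined caller can still punch through via the var itself.

    (ns mylib.impl)

    ;; "Private" by convention: other namespaces are told not to rely on it.
    (defn- helper [x]
      (* 2 x))

    ;; From another namespace:
    ;; (mylib.impl/helper 3)    ;; error: var mylib.impl/helper is not public
    ;; (#'mylib.impl/helper 3)  ;; => 6 -- the escape hatch, at your own risk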

                                                                                                        2. 1

This seems like something that should be solved outside the library that deals with string manipulation. For example, in Clojure I’d write a macro that ensured that its body evaluated in constant time. A naive example might look like:

(defmacro constant-time [name interval args & body]
  `(defn ~name ~args
     (let [t# (.getTime (java.util.Date.))
           result# (do ~@body)]
       ;; sleep for whatever is left of the interval; max 0 keeps
       ;; Thread/sleep from throwing if the body overruns the interval
       (Thread/sleep (max 0 (- ~interval (- (.getTime (java.util.Date.)) t#))))
       result#)))
                                                                                                          

                                                                                                          with that I could define a function that would evaluate the body and sleep for the remainder of the interval using it:

                                                                                                          (constant-time compare 1000 [& args]
                                                                                                             (apply = args))
                                                                                                          

                                                                                                          I think that decoupling concerns and creating composable building blocks is key to having reusable code. You end up with lots of Lego blocks that you can put together in different ways to solve problems.

                                                                                                          1. 6

To me that smells like a brittle hack. You might end up overestimating the time it will take, making it slower than necessary, or underestimating it, which means you’d still have the vulnerability.

Also, if the process or system load can be observed at a high enough granularity, it might be easy to distinguish between the time spent actually comparing and the time spent sleeping.

                                                                                                            1. 1

I specifically noted that this is a naive example. That is my whole point, though: you don’t know what the specific requirements might be for a particular situation. A library that deals with string manipulation should not be making any assumptions about timing. It’s much better to have a separate library that deals with providing constant timing and wraps the string manipulation code using it.

                                                                                                              1. 6

                                                                                                                Except in this case constant time is much more restrictive than wall-clock time. It’s actually important to touch the same number of bits and cache lines – you truly can’t do that by just adding another layer on top; it needs to be integral.

                                                                                                                1. 1

In an extreme case like this I have to agree. However, I don’t think this is representative of the general case. The majority of the time it is possible to split up concerns, and you should do that if you’re able.

                                                                                                                  1. 5

                                                                                                                    But that’s the thing. The temptation of having full generality “just around the corner” is exactly the kind of lure that draws people in (“just one more flag, we’re really almost there!”) and causes them to end up with a total mess on their hands. And this was just using a trivial text-book example you could give any freshman!

I have a hunch that this is also the same thing that makes ORMs so alluring. Everybody thinks they can beat the impedance mismatch, but in truth nobody can.

                                                                                                                    I guess the only way to truly drive this home is when you implement some frameworks yourself and hit your head against the wall a few times when you truly need to stretch the limitations of the given framework you wrote.

                                                                                                                    1. 2

My whole argument is that you shouldn’t make things in a monolithic fashion, though. Instead of doing the one-more-flag thing, separate concerns where possible and create composable components.

Incidentally, that’s pretty much how the entire Clojure ecosystem works. Everything is based around small, focused libraries that solve a specific problem. I also happen to maintain a micro-framework for Clojure. The approach I take there is to make wiring explicit and let the user manage it the way that makes sense for their project.

                                                                                                                      1. 3

                                                                                                                        Monolithic or not, code re-use is certainly a factor in the “software bloat” that everyone complains about. Software is getting larger (in bytes) and slower all around – I claim a huge portion of this is the power of abstraction and re-using components. It just isn’t possible to take the one tiny piece you care about: pull a thread long enough and almost everything comes with it.

                                                                                                                        Note that I’m not really making a value judgement here, just saying there are high costs to writing everything as generically as possible.

                                                                                                                        1. 1

You definitely have a point here. On my first job I was tasked with implementing a feature in a legacy WinAPI app. It involved downloading some data via HTTP (iirc it downloaded information on available updates for the program). Anyway, I was young and inexperienced, especially on the Windows platform. The software was mainly pure C, but a few extensions had been coded in C++.

So when I wrote my download code, I just used STL iostreams for the convenience of the stream operators. Thing is, mine was the first C++ code in the code base to use a template library; all the other C++ code was template-free, C-with-classes style. The size of the binary doubled for a tiny feature.

I rewrote the piece in C, and the results were as expected: no significant change in size for the EXE. Looking back, it makes me shudder to compare what I was tasked to implement with what I actually implemented. However, I am also not happy with the slimmed-down version of my code.

Nowadays the STL is just not a big culprit anymore, when you look at deployment strategies that deploy statically linked Go microservices inside fat Docker images onto some host.

                                                                                                            2. 1

                                                                                                              That constant time comparison doesn’t work, because you can still measure throughput. Send enough requests that you’re CPU bound, and you can see how far above the sleep time your average goes.

                                                                                                          1. 3

So these are essentially nano-scale vacuum tubes without the need for the vacuum? Cool!

                                                                                                            1. 2

                                                                                                              Yeah that’s the impression I got as well. :)

                                                                                                            1. 13

Rich has been railing on types for the last few keynotes, but it looks to me like he’s only tried Haskell and Kotlin and hasn’t used them a whole lot, because some of his complaints look like complete strawmen to anyone with a good understanding of, and experience with, a type system as sophisticated as Haskell’s, while others are better addressed in languages with different type systems than Haskell’s, such as TypeScript.

I think he makes lots of good points; I’m just puzzled as to why he’s seemingly ignoring a lot of research in type theory while designing his own type system (clojure.spec), and if he’s not, why he thinks other models don’t work either.

                                                                                                              1. 14

                                                                                                                One nit: spec is a contract system, not a type system. The former is often used to patch up a lack of the latter, but it’s a distinct concept you can do very different things with.

                                                                                                                EDIT: to see how they can diverge, you’re probably better off looking at what Racket does than what Clojure does. Racket is the main “research language” for contracts and does some pretty fun stuff with them.
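
For readers who haven’t used it, here is a rough sketch of the contract flavour in clojure.spec (the names are illustrative): specs describe runtime properties of values and of a function’s arguments and return, and instrumentation checks calls against them.

    (require '[clojure.spec.alpha :as s]
             '[clojure.spec.test.alpha :as stest])

    ;; A property of a value, not just its shape: an int in a range.
    (s/def ::port (s/and int? #(< 0 % 65536)))

    (defn clamp-port [n]
      (-> n (max 1) (min 65535)))

    ;; A contract on the function: argument spec, return spec, and a
    ;; relation between them.
    (s/fdef clamp-port
      :args (s/cat :n int?)
      :ret  ::port
      :fn   #(<= 1 (:ret %) 65535))

    (stest/instrument `clamp-port)  ;; checks :args on every call
    (clamp-port "oops")             ;; throws: argument contract violated
    ;; :ret and :fn are exercised generatively by (stest/check `clamp-port)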

                                                                                                                1. 4

It’s all fuzzy to me. They’re both formal specifications, and they overlap in a lot of ways. Many of the types people describe could be expressed as pre/post conditions and invariants in contract form on specific data or the functions over them. And a contract system extended to handle all kinds of things beyond Booleans will use enough logic to be able to do what advanced type systems do.

Short of Pierce or someone formally defining it, I don’t know, as a formal-methods non-expert, that contract and type systems in their general forms are fundamentally that different, since they’re used the same way in a lot of cases. Interchangeably, it would appear, if each uses equally powerful and/or automated logics.

                                                                                                                  1. 13

                                                                                                                    It’s fuzzy but there are differences in practice. I’m going to assume we’re using non-FM-level type systems, so no refinement types or dependent types for full proofs, because once you get there all of our intuition about types and contracts breaks down. Also, I’m coming from a contract background, not a type background. So take everything I say about type systems with a grain of salt.

In general, static types verify a program’s structure, while contracts verify its properties. Like, super roughly, static types are about whether a program is sense or nonsense, while contracts are about whether it’s correct or incorrect. Consider how we normally think of tail in Haskell vs, like, Dafny:

                                                                                                                    tail :: [a] -> [a]
                                                                                                                    
method tail<T>(s: seq<T>) returns (o: seq<T>)
  requires |s| > 0
  ensures [s[0]] + o == s
                                                                                                                    

                                                                                                                    The tradeoff is that verifying structure automatically is a lot easier than verifying semantics. That’s why historically static typing has been compile-time while contracts have been runtime. Often advances in typechecking subsumed use cases for contracts. See, for example, how Eiffel used contracts to ensure “void-free programming” (no nulls), which is subsumed by optionals. However, there are still a lot of places where they don’t overlap, such as in loop invariants, separation logic, (possibly existential contracts?), arguably smart-fuzzing, etc.

                                                                                                                    Another overlap is refinement types, but I’d argue that refinement types are “types that act like contracts” versus contracts being “runtime refinement types”, as most successful uses of refinement types came out of research in contracts (like SPARK) and/or are more ‘contracty’ in their formulations.

                                                                                                                    1. 3

Is there anything contracts do that dependent types cannot?

                                                                                                                      1. 2

                                                                                                                        Fundamentally? Not really, nor vice versa. Both let you say arbitrary things about a function.

                                                                                                                        In practice contracts are more popular for industrial work because they so far seem to map better to imperative languages than dependent types do.

                                                                                                                        1. 1

                                                                                                                          That makes sense, thanks! I’ve never heard of them. I mean I’ve probably seen people throw the concept around but I never took it for an actual thing

                                                                                                                  2. 1

I see the distinction when we talk about pure values and sum and product types. I wonder if the IO monad, for example, isn’t kind of more on the contract side of things. Sure, it works as a type and type inference algorithms work with it, but the side-effect thing makes it seem more like a pattern.

                                                                                                                  3. 17

                                                                                                                    I’m just puzzled as to why he’s seemingly ignoring a lot of research in type theory

                                                                                                                    Isn’t that his thing? He’s made proud statements about his disinterest in theory. And it shows. His jubilation about transducers overlooked that they are just a less generic form of ad-hoc polymorphism, invented to abstract over operations on collections.
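
For context on what that abstraction is (whether or not one accepts the characterisation above), a transducer is a transformation defined independently of the collection or process it is eventually run against:

    ;; One transformation, defined with no collection in sight...
    (def xf (comp (map inc) (filter even?)))

    ;; ...reused against different "processes":
    (into [] xf (range 10))        ;; => [2 4 6 8 10]
    (sequence xf (range 10))       ;; lazy seq (2 4 6 8 10)
    (transduce xf + 0 (range 10))  ;; => 30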

                                                                                                                    1. 1

                                                                                                                      wow, thanks for that, never really saw it that way but it totally makes sense. not a regular clojure user, but love lisp, and love the ML family of languages.

                                                                                                                      1. 1

                                                                                                                        So? Theory is useless without usable and easy implementation

                                                                                                                      2. 6

                                                                                                                        seemingly ignoring a lot of research in type theory

                                                                                                                        I’ve come to translate this utterance as “it’s not Haskell”. Are there languages that have been hurt by “ignoring type theory research”? Some (Go, for instance) have clearly benefited from it.

                                                                                                                        1. 12

I don’t think Rich is nearly as ignorant of Haskell’s type system as everyone seems to think. You can understand this stuff and not find it valuable, and it seems pretty clear to me that this is the case. He’s obviously a skilled programmer whose perspective warrants real consideration; people who are enamored with type systems shouldn’t be quick to write him off even if they disagree.

                                                                                                                          I don’t like dynamic languages fwiw.

                                                                                                                          1. 3

I don’t think we can assume anything about what he knows. Even Haskellers here are always learning about its type system or new uses. He spends most of his time in a LISP. It’s safe to assume he knows more LISP benefits than Haskell benefits until we see otherwise in examples he gives.

Best thing to do is probably come up with a lot of examples to run by him at various times/places. See what he says for/against them.

                                                                                                                            1. 9

I guess I would want to hear what people think he’s ignorant of, because he clearly knows the basics of the type system, sum types, typeclasses, etc. The clojure reducers docs mention requiring associative monoids. I would be extremely surprised if he didn’t know what monads were. I don’t know how far he has to go for people to believe he really doesn’t think it’s worthwhile. I heard Edward Kmett say he didn’t think dependent types were worth the overhead, saying that the power-to-weight ratio simply wasn’t there. I believe the same about Haskell as a whole. I don’t think it’s insane to believe that about most type systems, and I don’t think Hickey’s position stems from ignorance.

                                                                                                                              1. 2

Good examples supporting the idea that he might know the stuff. Now we just need more detail to further test the claims on each aspect of language design.

                                                                                                                                1. 2

                                                                                                                                  From the discussions I see, it’s pretty clear to me that Rich has a better understanding of static typing and its trade offs than most Haskell fans.

                                                                                                                          2. 10

                                                                                                                            I’d love to hear in a detailed fashion how Go has clearly benefited from “ignoring type theory research”.

                                                                                                                            1. 5

                                                                                                                              Rust dropped GC by following that research. Several languages had race freedom with theirs. A few had contracts or type systems with similar benefits. Go’s developers ignored that to do a simpler, Oberon-2- and C-like language.

There were two reasons. dmpk2k already gave the first, which Rob Pike has said himself: it was designed for anyone from any background to pick up easily right after Google hired them. Also, simplicity and consistency make it easy for them to immediately go to work on codebases they’ve never seen. This fits both Google’s needs and companies that want developers to be replaceable cogs.

The other is that the three developers had to agree on every feature. One came from C. One liked stuff like Oberon-2. I don’t recall the other. Their consensus is unlikely to be an OCaml, Haskell, Rust, Pony, and so on. It was something closer to what they liked and understood well.

If anything, I thought at the time they should’ve done something like Julia with a mix of productivity features, high C/Python integration, a usable subset people stick to, and macros for just when needed. Much better. I think a Noogler could probably handle a slightly more advanced language than Go. That team wanted otherwise…

                                                                                                                              1. 2

                                                                                                                                I have a hard time with a number of these statements:

“Rust dropped GC by following that research”? So did C++ also follow research to “drop GC”? What about “C”? I’ve seen plenty of type system conversation related to Rust, but nothing that I would attribute directly to “dropping GC”. That seems like a bit of a simplification.

Is there documentation that Go developers ignored type research? Has the Go team stated that? Or that they never cared? I’ve seen Rob Pike talk about wanting to appeal to C and C++ programmers, but nothing about ignoring type research. I’d be interested in hearing about that being done and what they thought the benefits were.

                                                                                                                                It sounds like you are saying that the benefit is something familiar and approachable. Is that a benefit to the users of a language or to the language itself? Actually I guess that is more like, is the benefit that it made Go approachable and familiar to a broad swath of programmers and that allowed it to gain broad adoption?

                                                                                                                                If yes, is there anything other than anecdotes (which I would tend to believe) to support that assertion?

                                                                                                                                1. 9

                                                                                                                                  “That seems like a bit of a simplification.”

It was. The topic is enormously complex. It gets worse when you consider I barely knew C++ before I lost my memory. I did learn about memory pools and reference counting from game developers who used C++. I know it keeps getting updated in ways that improve its safety. The folks that understand C++ and Rust keep arguing about how safe C++ is, with hardly any argument over Rust, since its safety model is baked thoroughly into the language rather than being an option in a sea of options. You could say I’m talking about Rust’s ability to be as safe as a GC in most of an app’s code without runtime checks on memory accesses.

                                                                                                                                  “Is there documentation that Go developers ignored type research? Has the Go team stated that? Or that they never cared?”

Like with the Rich Hickey replies, this burden of proof is backwards, asking us to prove a negative. When assessing what people knew or did, we should assume nothing until we see evidence in their actions and/or informed opinions that they did these things; only then do we believe they did. I start by comparing what I’ve read of Go to Common LISP, MLs, Haskell, Ada/SPARK, Racket/Ometa/Rascal on the metaprogramming side, Rust, Julia, Nim, and so on. Go has almost nothing in it compared to these. It looks like a mix of C, Wirth’s stuff, CSP-like old stuff from the 1970s-1980s, and maybe some other things. Not much past the 1980s. I wasn’t the first to notice, either. The article gets the point across despite the problems its author apologized for.

Now, that’s the hypothesis from observation of Go’s features vs other languages. Let’s test it on intent first. What was the goal? Rob Pike tells us here, with Moray Taylor having a nicer interpretation. The quote:

                                                                                                                                  The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.

                                                                                                                                  It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical.

                                                                                                                                  So, they’re intentionally dumbing the language down as much as they can while making it practically useful. They’re doing this so smart people from many backgrounds can pick it up easily and go right to being productive for their new employer. It’s also gotta be C-like for the same reason.

                                                                                                                                  Now, let’s look at its prior inspirations. In the FAQ, they tell you the ancestors: “Go is mostly in the C family (basic syntax), with significant input from the Pascal/Modula/Oberon family (declarations, packages), plus some ideas from languages inspired by Tony Hoare’s CSP, such as Newsqueak and Limbo (concurrency).” They then make an unsubstantiated claim, in that section at least, that it’s a new language across the board to make programming better and more fun. In reality, it seems really close to a C-like version of the Oberon-2 experience one developer (can’t recall) wanted to recreate with concurrency and tooling for aiding large projects. I covered the concurrency angle in other comment. You don’t see a lot of advanced or far out stuff here: decades old tech that’s behind current capabilities. LISP’ers, metaprogrammers and REBOL’s might say behind old tech, too. ;)

                                                                                                                                  Now, let’s look at execution of these C, Wirth-like, and specific concurrency ideas into practice. I actually can’t find this part. I did stumble upon its in-depth history of design decisions. The thing I’m missing, if it was correct, is a reference to the claim that the three developers had to agree on each feature. If that’s true, it automatically would hold the language back from advanced stuff.

                                                                                                                                  In summary, we have a language designed by people who mostly didn’t use cutting-edge work in type systems, employed nothing of the sort, looked like languages from the 1970’s-1980’s, considered them ancestors, is admittedly dumbed-down as much as possible so anyone from any background can use it, and maybe involved consensus from people who didn’t use cutting-edge stuff (or even much cutting-edge at 90’s onward). They actually appear to be detractors to a lot of that stuff if we consider the languages they pushed as reflecting their views on what people should use. Meanwhile, the languages I mentioned above used stuff from 1990’s-2000’s giving them capabilities Go doesn’t have. I think the evidence weighs strongly in favor of that being because designers didn’t look at it, were opposed to it for technical and/or industrial reasons, couldn’t reach a consensus, or some combo.

That’s what I think of Go’s history for now. People more knowledgeable, feel free to throw out any resources I might be missing. It just looks to be a highly practical, learn/use-quickly, C/Oberon-like language made to improve onboarding and productivity of random developers coming into big companies like Google. Rob Pike even says that was the goal. Seems open and shut to me. I thank the developers of languages like Julia and Nim for believing we were smart enough to learn a more modern language, even if we have to subset them for inexperienced people.

                                                                                                                              2. 4

                                                                                                                                It’s easy for non-LtU programmers to pick up, which happens to be the vast majority.

                                                                                                                                1. 3

Sorry, that isn’t detailed. Is there evidence that it’s easy for these programmers to pick up? What does “easy to pick up” mean? To get something to compile? To create error-free programs? “Clearly benefited” is a really loaded term that can mean pretty much anything to anyone. I’m looking for what the stated benefits are for Go. Is the benefit to Go that it is “approachable” and “familiar”?

                                                                                                                                  There seems to be an idea in your statement, then, that using any sort of type theory research will inherently make something hard to pick up. I have a hard time accepting that. I would, without evidence, be willing to accept that many type system ideas (like a number of them in Pony) are hard to pick up, but the idea that you have to ignore type theory research to be easy to pick up is hard for me to accept.

                                                                                                                                  Couldn’t I create a language that ignores type system theory but uses an unfamiliar syntax, and end up with something that is not easy to pick up?

                                                                                                                                  1. 5

                                                                                                                                    I already gave you the quote from Pike saying it was specifically designed for this. As far as the how, I think one of its designers explains it well in those slides. The Guiding Principles section puts simplicity above everything else. Next, a slide says Pascal was a minimalist language designed for teaching non-programmers to code. Oberon was similarly simple. Oberon-2 added methods on records (think simpler OOP). The designer shows Oberon-2 and Go code, saying it’s C’s syntax with Oberon-2’s structure. I’ll add benefits like automatic memory management.

                                                                                                                                    Then, the design link said they chose CSP because (a) they understood it well enough to implement and (b) it was the easiest thing to implement throughout the language. Like Go itself, it was the simplest option rather than the best along many attributes. There were lots of people who picked up SCOOP (super-easy but with overhead), with probably even more picking up Rust’s method grounded in affine types. Pony is itself doing clever stuff using advances in language design. Go would ignore those since (a) the Go designers didn’t know them well from way back when and (b) it would’ve been more work than their intent/budget could take.

                                                                                                                                    They’re at least consistent about simplicity for easy implementation and learning. I’ll give them that.

                                                                                                                                2. 3

                                                                                                                                  It seems to me that Go was clearly designed to have a well-known, well-understood set of primitives, and that design angle translated into not incorporating anything fundamentally new or adventurous (unlike Pony and its impressive use of object capabilities). It already looked old at birth, but it feels impressively smooth, in the beginning at least.

                                                                                                                                  1. 3

                                                                                                                                    I find it hard to believe that CSP and Goroutines were “well-understood set of primitives”. Given the lack of usage of CSP as a mainstream concurrency mechanism, I think that saying that Go incorporates nothing fundamentally new or adventurous is selling it short.

                                                                                                                                    1. 5

                                                                                                                                      CSP is one of the oldest ways people modeled concurrency. I think it was built on Hoare’s monitor concept from years before, which Per Brinch Hansen turned into Concurrent Pascal; he built the Solo OS with a mix of it and regular Pascal. It was also typical in high-assurance to use something like Z or VDM for specifying the main system, with concurrency done in CSP and/or some temporal logic. Then, SPIN became the dominant way to analyze CSP-like stuff automatically, with a lot of industrial use for a formal method. Lots of other tools and formalisms existed, though, under the banner of process algebras.

                                                                                                                                      Outside of verification, the introductory text that taught me about high-performance, parallel computing mentioned CSP as one of basic models of parallel programming. I was experimenting with it in maybe 2000-2001 based on what those HPC/supercomputing texts taught me. It also tied into Agent-Oriented Programming I was looking into then given they were also concurrent, sequential processes distributed across machines and networks. A quick DuckDuckGo shows a summary article on Wikipedia mentions it, too.

                                                                                                                                      There were so many courses teaching it and folks using it that experts in language design and/or concurrency should’ve noticed it a long time ago and tried to improve on it for their languages. Many did, some doing better. Eiffel SCOOP, ML variants like Concurrent ML, Chapel, Clay with Wittie’s extensions, Rust, and Pony are examples. Then you have Go doing something CSP-like (circa 1970’s) in the 2000’s, still getting race conditions and stuff. What did they learn? (shrugs) I don’t know…

                                                                                                                                      1. 10

                                                                                                                                        Nick,

                                                                                                                                        I’m going to take the 3 different threads of conversation we have going and try to pull them all together in this one reply. I want to thank you for the time you put into each answer. So much of what appears on Reddit, HN, and elsewhere is throw away short things that often feel lazy or like communication wasn’t really the goal. For a long time, I have appreciated your contributions to lobste.rs because there is a thoughtfulness to them and an attempt to convey information and thinking that is often absent in this medium. Your replies earlier today are no exception.


                                                                                                                                        Language is funny.

                                                                                                                                        You have a very different interpretation of the words “well-understood primitives” than I do. Perhaps it has something to do with anchoring when I was writing my response. I would rephrase my statement this way (and I would still be imprecise):

                                                                                                                                        While CSP has been around for a long time, I don’t think that, prior to Go, it was a well-known or familiar concurrency model for most programmers. From that, I would say it isn’t “well-understood”. But I’m reading quite a bit into what “well-understood” means here, based on context. I’m taking it to mean “widely understood by a large body of programmers”.

                                                                                                                                        And Nick, I think your response actually makes me believe that more. The languages you mention aren’t ones that I would consider familiar or mainstream to most programmers.

                                                                                                                                        Language is fun like that. I could be anchoring myself again. I rarely ask questions on lobste.rs or comment. I decided to on this occasion because I was really curious about a number of things from an earlier statement:

                                                                                                                                        “Go has clearly benefited from “ignoring type theory research”.

                                                                                                                                        Some things that came to mind when I read that and I wondered “what does this mean?”

                                                                                                                                        “clearly benefited”

                                                                                                                                        Hmmm, what does benefit mean? Especially in reference to a language. My reading of benefit is that “doing X helped the language designers achieve one or more goals in a way that had acceptable tradeoffs”. However, it was far from clear to me, that is what people meant.

                                                                                                                                        “ignoring type theory research”

                                                                                                                                        ignoring is an interesting term. This could mean many things and I think it has profound implications for the statement. Does ignoring mean ignorance? Does it mean willfully not caring? Or does it mean considered but decided not to use?

                                                                                                                                        I’m familiar with some of the Rob Pike and Go early history comments that you referenced in the other threads. In particular related to the goal of Go being designed for:

                                                                                                                                        The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.

                                                                                                                                        It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical.

                                                                                                                                        I haven’t found anything, though, that shows there was a willful disregard of type theory. I wasn’t attempting to get you to prove a negative; I’m more just curious. Has the Go team ever said something that would fall under the heading of “type system theory, bah, we don’t need it”? Perhaps they have. And if they have, is there anything that shows a benefit from that?

                                                                                                                                        There’s so much that is loaded into those questions though. So, I’m going to make some statements that are possibly open to being misconstrued about what from your responses, I’m hearing.

                                                                                                                                        “Benefit” here means “helped make popular”, because Go, on its surface, presents a number of familiar concepts for the programmer to work with. There’s no individual primitive that feels novel or new to most programmers, except perhaps the concurrency model. However, on first approach, that concurrency model is fairly straightforward in what it asks the programmer to grasp. Given Go’s stated goals from the quote above, it allows the programmers to feel productive and “build good software”.

                                                                                                                                        Even as I’m writing that though, I start to take issue with a number of the assumptions that are built into the Pike quote. But that is fine. I think most of it comes down to for me what “good software” is and what “simple” is. And those are further loaded words that can radically change the meaning of a comment based on the reader.

                                                                                                                                        So let me try again:

                                                                                                                                        When people say “Go has clearly benefited from “ignoring type theory research” what they are saying is:

                                                                                                                                        Go’s level of popularity is based, in part, on it providing a set of ideas that should be mostly familiar to programmers who have some experience with the Algol family of languages such as C, C++, Python, Ruby, etc. We can further refine that to say that from the Algol family of languages we are really talking about ones that have type systems that make few if any guarantees (like C). Go put this familiarity as its primary goal, and because of that, it is popular.

                                                                                                                                        Would you say that is a reasonable summation?


                                                                                                                                        When I asked:

                                                                                                                                        “Is there documentation that Go developers ignored type research? Has the Go team stated that? Or that they never cared?”

                                                                                                                                        I wasn’t asking you to prove a negative. I was very curious if any such statements existed. I’ve never seen any. I’ve drawn a number of conclusions about Go based mostly on the Rob Pike quote you provided earlier. I was really looking for “has everyone else as well”, or do they know things that I don’t know?

                                                                                                                                        It sounds like we are both mostly operating on the same set of information. That’s fine. We can draw conclusions from that. But I feel at least good in now saying that both you and I are inferring things based on what appears to be a mostly shared set of knowledge here, and not that I am ignorant of statements made by Go team members.

                                                                                                                                        I wasn’t looking for proof. I was looking for information that might help clear up my ignorance in the area. Related to my ignorance.

                                                                                                                                        1. 2

                                                                                                                                          I appreciate that you saw I was trying to put effort into it being productive and civil. Those posts took a while. I appreciate your introspective and kind reply, too. Now, let’s see where we’re at with this.

                                                                                                                                          Yeah, it looks like we were using words with a different meaning. I was focused on “well-understood” by the PLT types that design languages and the folks studying parallelism. Rob Pike at the least should be in both categories, following that research. Most programmers don’t know about it. You’re right that Go could’ve been the first time it went mainstream.

                                                                                                                                          You also made a good point that it’s probably overstating it to say they never considered it. I have good evidence they avoided almost all of it. Other designers didn’t. Yet they may have considered it (how much we don’t know), assessed it against their objectives, and decided against all of it. The simplest approach would be to just ask them in a non-confrontational way. The other possibility is to look at each designer’s work to see if it showed any indication they were considering or using such techniques elsewhere. If those techniques were absent, saying they didn’t consider them in their next work would be reasonable. Another angle would be to look at whether, like C’s developers, they had a personal preference for simpler or barely-any typing, consistently avoiding developments in type systems. Since that’s lots of work, I’ll leave it at “Unknown” for now.

                                                                                                                                          Regarding its popularity, I’ll start by saying I agree its simple design reusing existing concepts was a huge element of that. It was Wirth’s philosophy to do the same thing for educating programmers. Go adapted that philosophy to the modern situation. Smart move. I think you shouldn’t underestimate the fact that Google backed it, though.

                                                                                                                                          There were a lot of interesting languages over the decades with all kinds of good tradeoffs. The ones with major, corporate backing and/or on top of advantageous foundations/ecosystems (eg hardware or OS’s) usually became big in a lasting way. That included COBOL on mainframes, C on cheap hardware spreading with UNIX, Java getting there almost entirely through marketing given its technical failures, .NET/C# forced by Microsoft on its huge ecosystem, Apple pushing Swift, and some smaller ones. Notice the language design is all across the board here in complexity, often more complex than existing languages. The ecosystem drivers, esp marketing or dominant companies, are the consistent thread driving at least these languages’ mass adoption.

                                                                                                                                          Now, mighty Google claims they’re backing a new language for their massive ecosystem. It’s also designed by celebrity researchers/programmers, including one many in the C community respect. It might also be a factor in whether developers get a six-digit job. These are two major pulls, plus a minor one, each of which in isolation can draw in developers. Two of them, especially employment, will automatically create a large number of users if they think Google is serious. Both also have ripple effects where other companies will copy what the big company is doing to not get left behind. That makes the pull larger.

                                                                                                                                          So, as I think of your question, I have that in the back of my mind. I mean, those effects pull so hard that Google’s language could be a total piece of garbage and still have 50,000-100,000 developers just going for a gold rush. The fact that they simplified the design to make it super-easy to learn and to maintain existing code just turbocharges that effect. Yes, I think the design and its designers could lead to a significant community without Google. I’m just leaning toward it being a major employer with celebrity designers and fanfare causing most of it.

                                                                                                                                          And then those other languages start getting uptake despite advanced features or learning troubles (esp. Rust). That shows the Go team could’ve done better on typing using such techniques if they wanted to and/or knew about those techniques. I said that’s unknown. Go might be the best they could do given their background, constraints, goals, or whatever. It’s good that at least four different groups made languages to push programming further into the 90’s and 2000’s instead of just the 70’s to early 80’s. There are at least three creating languages closer to C that are generating a lot of excitement. C++ is also getting updates making it more like Ada. Non-mainstream languages like Ada/SPARK and Pony are still getting uptake, even though smaller.

                                                                                                                                          If anything, the choice of systems-type languages is exploding right now, with something for everyone. The decisions of Go’s language authors aren’t even worth worrying about, since that time can be put into more appropriate tools. I’m still going to point out that Rob Pike quote to people to show they had very, very specific goals which made a language design that may or may not be ideal for a given task. It’s good for perspective. I don’t know the designers’ studies or their tradeoffs, and (given the alternatives) they barely matter past personal curiosity and PLT history. That also means I’ll remain too willfully ignorant about it to clear up anyone’s ignorance. At least till I see some submissions with them talking about it. :)

                                                                                                                                          1. 2

                                                                                                                                            Thanks for the time you put into this @nickpsecurity.

                                                                                                                                            1. 1

                                                                                                                                              Sure thing. I appreciate you patiently putting time into helping me be more accurate and fair describing Go designers’ work.

                                                                                                                                              1. 2

                                                                                                                                                And thank you, I have a different perspective on Go now than I did before. Or rather, I have a better understanding of other perspectives.

                                                                                                                              3. 6

                                                                                                                                I don’t see anything of substance in this comment other than “Haskell has a great type system”.

                                                                                                                                I just watched the talk. Rich took a lot of time to explain his thoughts carefully, and I’m convinced by many of his points. I’m not convinced by anything in this comment because there’s barely anything there. What are you referring to specifically?

                                                                                                                                edit: See my perspective here: https://lobste.rs/s/zdvg9y/maybe_not_rich_hickey#c_povjwe

                                                                                                                                1. 3

                                                                                                                                  That wasn’t my point at all. I agree with what Rich says about Maybes in this talk, but it’s obvious from his bad Haskell examples that he hasn’t spent enough time with the language to justify criticizing its type system so harshly.

                                                                                                                                  Also, what he said about representing the idea of a car with information that might or might not be there in different parts of a program might be correct for Haskell’s type system, but in languages with structural subtyping (like TypeScript) or row polymorphism (like Ur/Web) you can easily have a function that takes a car record which may be missing some fields, fills some of them in, and returns a record with a few more fields than the input, like Rich described at some point in the talk.

                                                                                                                                  I’m interested to see where he’s gonna end up with this, but I don’t think he’s doing himself any favors by ignoring existing research in the same fields he’s thinking about.

                                                                                                                                  1. 5

                                                                                                                                    But if you say that you need to go to TypeScript to express something, that doesn’t help me as a Haskell user. I don’t start writing a program in a language with one type system and then switch into a language with a different one.

                                                                                                                                    Anyway, my point is not to have a debate on types. My point is that I would rather read or watch an opinion backed up by real-world experience.

                                                                                                                                    I don’t like the phrase “ignoring existing research”. It sounds too much like “somebody told me this type system was good and I’m repeating it”. Just because someone published a paper on it, doesn’t mean it’s good. Plenty of researchers disagree on types, and admit that there are open problems.

                                                                                                                                    There was just one here the other day!

                                                                                                                                    https://lobste.rs/s/dldtqq/ast_typing_problem

                                                                                                                                    I’ve found that the applicability of types is quite domain-specific. Rich Hickey is very clear about what domains he’s talking about. If someone makes general statements about type systems without qualifying what they’re talking about, then I won’t take them very seriously.

                                                                                                                                2. 4

                                                                                                                                  I don’t have a good understanding of type systems. What is it that Rich misses about Haskell’s Maybe? Does changing the return type of a function from Maybe T to T not mean that you have to change code which uses the return value of that function?

                                                                                                                                  1. 23

                                                                                                                                    Does changing the return type of a function from Maybe T to T not mean that you have to change code which uses the return value of that function?

                                                                                                                                    It does in a way, but I think people sometimes over-estimate the amount of changes that are required. It depends on whether or not you really care about the returned value. Let’s look at a couple of examples:

                                                                                                                                    First, let’s look at an example. Let’s say that we had a function that was going to get the first element out of a list, so we start out with something like:

                                                                                                                                    getFirstElem :: [a] -> a
                                                                                                                                    getFirstElem = head
                                                                                                                                    

                                                                                                                                    Now, we’ll write a couple of functions that make use of this function. Afterwards, I’ll change my getFirstElem function to return a Maybe a so you can see when, why, and how these specific functions need to change.

                                                                                                                                    First, let’s imagine that I have some list of lists, and I’d like to just return a single list that has the first element; for example I might have something like ["foo","bar","baz"] and I want to get back "fbb". I can do this by calling map over my list of lists with my getFirstElem function:

                                                                                                                                    getFirsts :: [[a]] -> [a]
                                                                                                                                    getFirsts = map getFirstElem
                                                                                                                                    

                                                                                                                                    Next, say we wanted to get an idea of how many elements we were removing from our list of lists. For example, in our case of ["foo","bar","baz"] -> "fbb", we’re going from a total of 9 elements down to 3, so we’ve eliminated 6 elements. We can write a function to help us figure out how many elements we’ve dropped pretty easily by looking at the sum of the lengths of the lists in the input lists, and the overall length of the output list.

                                                                                                                                    countDropped :: [[a]] -> [b] -> Int
                                                                                                                                    countDropped a b =
                                                                                                                                      let a' = sum $ map length a
                                                                                                                                          b' = length b
                                                                                                                                      in a' - b'
                                                                                                                                    

                                                                                                                                    Finally, we probably want to print out our string, so we’ll use print:

                                                                                                                                    printFirsts =
                                                                                                                                      let l = ["foo","bar","baz"]
                                                                                                                                          r = getFirsts l
                                                                                                                                          d = countDropped l r
                                                                                                                                      in print l >> print r >> print d
                                                                                                                                    

                                                                                                                                    Later, if we decide that we want to change our program to look at ["foo","","bar","","baz"], we’ll see our program crashes! Oh no! The problem is that head doesn’t work with an empty list, so we’d better go and update it. We’ll have it return a Maybe a so that we can capture the case where we actually got an empty list.

                                                                                                                                    getFirstElem :: [a] -> Maybe a
                                                                                                                                    getFirstElem = listToMaybe
                                                                                                                                    

                                                                                                                                    Now we’ve changed our program so that the type system will explicitly tell us whether we tried to take the head of an empty list or not- and it won’t crash if we pass one in. So what refactoring do we have to do to our program?

                                                                                                                                    Let’s walk back through our functions one-by-one. Our getFirsts function had the type [[a]] -> [a] and we’ll need to change that to [[a]] -> [Maybe a] now. What about the code?

                                                                                                                                    If we look at the type of map we’ll see that it has the type: map :: (c -> d) -> [c] -> [d]. Both versions of getFirstElem, [a] -> a and [a] -> Maybe a, fit the shape of map’s first argument, c -> d (in both cases c ~ [a]; in the first case d ~ a, and in the second d ~ Maybe a). In short, we had to fix our type signature, but nothing in our code has to change at all.

                                                                                                                                    What about countDropped? Even though our types changed, we don’t have to change anything in countDropped at all! Why? Because countDropped is never looking at any values inside of the list- it only cares about the structure of the lists (in this case, how many elements they have).

                                                                                                                                    Finally, we’ll need to update printFirsts. The type signature here doesn’t need to change, but we might want to change the way that we’re printing out our values. Technically we can print a Maybe value, but we’d end up with something like: [Just 'f',Nothing,Just 'b',Nothing,Just 'b'], which isn’t particularly readable. Let’s update it to replace Nothing values with spaces:

                                                                                                                                    printFirsts :: IO ()
                                                                                                                                    printFirsts =
                                                                                                                                      let l = ["foo","","bar","","baz"]
                                                                                                                                          r = map (fromMaybe ' ') $ getFirsts l
                                                                                                                                          d = countDropped l r
                                                                                                                                      in print l >> print r >> print d
                                                                                                                                    

                                                                                                                                    In short, from this example, you can see that we can refactor our code to change the type, and in most cases the only code that needs to change is code that cares about the value that we’ve changed. In an untyped language you’d expect to still have to change the code that cares about the values you’re passing around, so the only additional change that we’ve had to make here was a very small update to the type signature (but not the implementation) of one function. In fact, if I’d let the type be inferred (or written a much more general function) I wouldn’t have had to even do that.

                                                                                                                                    There’s an impression that the types in Haskell require you to do a lot of extra work when refactoring, but in practice the changes you are making aren’t materially more or different than the ones you’d make in an untyped language- it’s just that the compiler will tell you about the changes you need to make, so you don’t need to find them through unit tests or program crashes.

                                                                                                                                    1. 3

                                                                                                                                      countDropped should be changed. To what will depend on your specification, but as a simple inspection, countDropped ["", "", "", ""] [Nothing, Nothing, Nothing, Nothing] will return -4, which isn’t likely to be what you want.
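
                                                                                                                                      For what it’s worth, here’s a minimal sketch of one alternative (purely hypothetical, and depending on the intended spec): have countDropped take the Maybe results before they’re flattened into characters, and count a drop for every element that didn’t survive:

                                                                                                                                      import Data.Maybe (isJust)

                                                                                                                                      -- total elements in the input lists, minus the ones that survived as Just
                                                                                                                                      countDropped :: [[a]] -> [Maybe a] -> Int
                                                                                                                                      countDropped a b =
                                                                                                                                        let total = sum (map length a)
                                                                                                                                            kept  = length (filter isJust b)
                                                                                                                                        in total - kept

                                                                                                                                      With that version, countDropped ["", "", "", ""] [Nothing, Nothing, Nothing, Nothing] is 0 rather than -4.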

                                                                                                                                      1. 2

                                                                                                                                        That’s correct in a manner of speaking, since we’re essentially computing the difference between the number of characters in all of the substrings and the length of the printed list. Since [""] = [[]] but is printed as " ", we print one extra character (the space) compared to the total length of the string, so a negative “dropped” value is sensible.

                                                                                                                                        Of course the entire thing was a completely contrived example I came up with while I was sitting at work trying to get through my morning coffee, and really only served to show “sometimes we don’t need to change the types at all”, so I’m not terribly worried about the semantics of the specification. You’re welcome to propose any other more sensible alternative you’d like.

                                                                                                                                        1. -3

                                                                                                                                          That’s correct in a manner of speaking, since …

                                                                                                                                          This is an impressive contortion, on par with corporate legalese, but your post-hoc justification is undermined by the fact that you didn’t know this was the behavior of your function until I pointed it out.

                                                                                                                                          Of course the entire thing was a completely contrived example …

                                                                                                                                          On this, we can agree. You created a function whose definition would still typecheck after the change, without addressing the changed behavior, nor refuting that in the general case, Maybe T is not a supertype of T.

                                                                                                                                          You’re welcome to propose any other more sensible alternative you’d like.

                                                                                                                                          Alternative to what, Maybe? The hour long talk linked here is pretty good. Nullable types are more advantageous, too, like C#’s int?. The point is that if you have a function and call it as f(0) when the function requires its first argument, but later, the requirement is “relaxed”, all the places where you wrote f(0) will still work and behave in exactly the same way.

                                                                                                                                          Getting back to the original question, which was (1) “what is it that Rich Hickey doesn’t understand about types?” and, (2) “does changing the return type from Maybe T to T cause calling code to break?”. The answer to (2) is yes. The answer to (1), given (2), is nothing.

                                                                                                                                          1. 9

                                                                                                                                            I was actually perfectly aware of the behavior, and I didn’t care because it was just a small toy example. I was just trying to show some examples of when and how you need to change code and/or type signatures, not write some amazing production quality code to drop some things from a list. No idea why you’re trying to be such an ass about it.

                                                                                                                                            1. 3

                                                                                                                                              She did not address question (1) at all. You are reading her response to question (2) as implying something about (1), which makes your response needlessly adversarial.

                                                                                                                                        2. 1

                                                                                                                                          This is a great example. To further reinforce your point, I feel like this is the one place Haskell really shows its strength: these refactors. It’s often a pain to figure out what the correct types should be for parts of your programs, but when you know this and make a change, the Haskell compiler becomes a real guiding light when working through a refactor.

                                                                                                                                        3. 10

                                                                                                                                          He explicitly makes the point that “strengthening a promise”, that is from “I might give you a T” to “I’ll definitely give you a T” shouldn’t necessarily be a breaking change, but is in the absence of union types.
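
                                                                                                                                          To make that concrete, here’s a small sketch of the breakage in Haskell terms (hypothetical names, not from the talk):

                                                                                                                                          -- Before: the producer only promises it *might* give you an Int.
                                                                                                                                          lookupPort :: String -> Maybe Int
                                                                                                                                          lookupPort "http" = Just 80
                                                                                                                                          lookupPort _      = Nothing

                                                                                                                                          -- So every caller has to unwrap the Maybe:
                                                                                                                                          describe :: String -> String
                                                                                                                                          describe name = case lookupPort name of
                                                                                                                                            Just p  -> name ++ ":" ++ show p
                                                                                                                                            Nothing -> name ++ ":unknown"

                                                                                                                                          -- After "strengthening the promise" to
                                                                                                                                          --   lookupPort :: String -> Int
                                                                                                                                          -- the case expression above no longer type checks (there is no
                                                                                                                                          -- Just/Nothing to match on), so every caller that unwrapped the
                                                                                                                                          -- Maybe has to change, even though the producer now promises more.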

                                                                                                                                          1. 2

                                                                                                                                            Half baked thought here that I’m just airing to ask for an opinion on:

                                                                                                                                            Say as an alternative, the producer produces Either (forall a. a) T instead of Maybe T, and the consumer consumes Either x T. Then the producer’s author changes it to make a stronger promise by changing it to produce Either Void T instead.

                                                                                                                                            I think this does what I would want? This change hasn’t broken the consumer because x would match either alternative. The producer has strengthened the promise it makes because now it promises not to produce a Left constructor.
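
                                                                                                                                            For what it’s worth, a minimal sketch of the Void half of that idea does seem to type check (hypothetical names, and using a concrete Either String Int as the “before” type rather than the impredicative Either (forall a. a) T):

                                                                                                                                            import Data.Void (Void)

                                                                                                                                            -- The consumer is polymorphic in the Left type, so it can only treat
                                                                                                                                            -- the Left case abstractly; it can never inspect x.
                                                                                                                                            consume :: Either x Int -> Int
                                                                                                                                            consume (Right n) = n
                                                                                                                                            consume (Left _)  = 0

                                                                                                                                            -- Original producer: might fail.
                                                                                                                                            produceV1 :: Either String Int
                                                                                                                                            produceV1 = Right 42

                                                                                                                                            -- Strengthened producer: Void has no values, so Left is impossible.
                                                                                                                                            produceV2 :: Either Void Int
                                                                                                                                            produceV2 = Right 42

                                                                                                                                            -- Both `consume produceV1` and `consume produceV2` compile, so the
                                                                                                                                            -- stronger promise didn't break this consumer.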

                                                                                                                                            1. 4

                                                                                                                                              When the problem is “I can’t change my mind after I had insufficient forethought”, requiring additional forethought is not a solution.

                                                                                                                                              1. 2

                                                                                                                                                So we’d need a way to automatically rewrite Maybe t to Either (forall a. a) t everywhere - after the fact. ;)

                                                                                                                                        4. 2

                                                                                                                                          Likewise, I wonder what he thinks about Rust’s type system to ensure temporal safety without a GC. Is safe, no-GC operation in general or for performance-critical modules desirable for Clojure practitioners? Would they like a compile to native option that integrates that safe, optimized code with the rest of their app? And if not affine types, what’s his solution that doesn’t involve runtime checks that degrade performance?

                                                                                                                                          1. 7

                                                                                                                                            I’d argue that GC is a perfectly fine solution in the vast majority of cases. The overhead from advanced GC systems like the one on the JVM is becoming incredibly small. So, the scenarios where you can’t afford GC are niche, in my opinion. If you are in such a situation, then types do seem like a reasonable way to approach the problem.

                                                                                                                                            1. 3

                                                                                                                                              I have worked professionally in Clojure but I have never had to make a performance critical application with it. The high performance code I have written has been in C and CUDA. I have been learning Rust in my spare time.

                                                                                                                                              I argue that Clojure and Rust both have thread-safe memory abstractions, but Clojure’s solution has more (theoretical) overhead. This is because while Rust uses ownership and affine types, Clojure uses immutable data structures.

                                                                                                                                              In particular, get/insert/remove for a Rust HashMap is O(1) amortized while Clojure’s corresponding hash-map’s complexity is O(log_32(n)) for those operations.

                                                                                                                                              I haven’t made careful benchmarks to see how this scaling difference plays out in the real world, however.

                                                                                                                                              1. 4

                                                                                                                                                Having used Clojure’s various “thread safe memory abstractions”, I would say that the overhead is actual, not theoretical.

                                                                                                                                          2. 2

                                                                                                                                            Disclaimer: I <3 types a lot, Purescript is lovely and whatnot

                                                                                                                                            I dunno, I kinda disagree about this. Even in the research languages, people are opting for nominal ADTs. Typescript is the exception, not the rule.

                                                                                                                                            His wants in this space almost require “everything is a dictionary/hashmap”, and I don’t think the research in type theory is tackling his complaints (the whole “place-oriented programming” stuff and positional argument difficulties ring extremely true). M…aybe row types, but row types are not easy to use compared to the transparent and simple Typescript model in my opinion.

                                                                                                                                            Row types help to solve issues generated in the ADT universe, but you still have the nominal typing problem, which is his other thing.

                                                                                                                                            His last keynote was very aggressive, and I think people wrote it off because it felt almost ignorant, but I think this keynote is extremely on point once he gets beyond the Maybe railing in the intro.

                                                                                                                                          1. 6

                                                                                                                                            I do not find the words “microphone”, “audio”, or “sound” anywhere in this pdf. Are you trying to suggest something?

                                                                                                                                            1. 3

                                                                                                                                              Maybe OP is… hearing things?

                                                                                                                                              1. 2

                                                                                                                                                Sorry, I forgot to link the audio part in the description. Somebody took the research and extended it to infer key timings from the keyboard sound.

                                                                                                                                                1. 2

                                                                                                                                                  On a page labelled Ferros, I found one called KeyTap plus a related work about SSH that was labeled as an academic paper. I think he meant to submit the first. @yogthos can use this link if that’s the case.

                                                                                                                                                1. 4

                                                                                                                                                  I’m mastodon.social/@yogthos

                                                                                                                                                  I mostly post about programming, tech, or humor

                                                                                                                                                  1. 7

                                                                                                                                                    This is precisely why I love the Lisp family of languages. Lisps provide a few general primitives that allow users to build their own abstractions as needed. Most languages end up baking a lot of assumptions into the core of the language, and many of those end up being invalidated as time goes on. Development practices change, the nature of the problems the language is applied to changes, and so on. Language designers end up having to extend the language to adapt it to new problems, and this ultimately makes the language complex and unwieldy.

                                                                                                                                                    1. 3

                                                                                                                                                      …build their own abstractions as needed.

                                                                                                                                                      Isn’t the author arguing for less of that? Consider:

                                                                                                                                                      On one hand, we have something that was built to accomplish a task… On the other hand, we have something that describes how things can relate to each other… It seems like the popular perception of programming languages falls more in the ‘vacuum cleaner’ camp: that a programming language is just something for translating a description of computation into something machine-readable… I think that this ‘features focused’ development style can cause people to ignore too much of the structure that the features contribute to (or even that the features are destroying structure).

                                                                                                                                                      In theory, Lisp primitives can be used to build a principled, coherent set of abstractions just as Haskell is a syntax sugar over a small set of features. But “abstractions as needed” sounds like ad-hoc solutions to specific problems which is characteristic of languages on the invented side of the spectrum.

                                                                                                                                                      1. 3

I think these are completely orthogonal concerns. Baking abstractions into the language doesn’t make them any more principled than making them in user space. The abstractions provided by the language generally have to have a much larger scope, and if you don’t get them right on the first try, you end up having to live with that.

                                                                                                                                                        1. 2

I guess I don’t understand why this article causes you to say “This is precisely why I love the Lisp family of languages”. The reasons you give for loving them seem, according to the article, like reasons you shouldn’t love them. But I also think Lisp is interesting for those reasons. Not disagreeing with your point, just trying to find the connection to the article.

                                                                                                                                                          1. 2

The article talks about the ‘features focused’ development style, where languages tend to accumulate a lot of special-case features for solving very specific problems. Lisps avoid this problem by providing a small set of general patterns that can be applied in a wide range of contexts, and by giving users the power to create their own abstractions. This approach allows useful patterns to be discovered through use and experimentation. Once such patterns are identified, they can be turned into general-purpose libraries. At the same time, users can still create features that make sense for their problems without polluting the core language.
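As a small sketch of the kind of user-space abstraction I mean (Clojure already ships when-not, so this unless macro is purely illustrative):

```clojure
;; A control-flow construct defined as ordinary library code, not a
;; compiler feature.
(defmacro unless
  "Evaluates body only when test is falsey."
  [test & body]
  `(if ~test nil (do ~@body)))

;; Reads like a built-in, but nothing in the language had to change:
(let [coll [1 2 3]]
  (unless (empty? coll)
    (println "processing" (count coll) "items")))
;; prints: processing 3 items
```

If a pattern like this proves broadly useful, it can be published as a library without touching the core language.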

                                                                                                                                                    1. -2

                                                                                                                                                      Clojure can basically perform as well as Java since JVM runs bytecode, and bytecode is bytecode whether compiled from Java or Clojure (roughly speaking).

That’s not even close to true and it’s embarrassing that he wrote that. Clojure, like most Lisps, has an (eval x) built-in that basically takes a cons structure and runs it directly. And, unlike JavaScript’s string eval, it’s suitable for use in production code, and it gets used in production code. Compiling code dynamically, while allowed, hits a slow path in HotSpot. (I went ahead and reposted someone else’s post on this topic.)
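To make that concrete, here is a minimal sketch of what eval does with a plain data structure (the name `form` is just for illustration):

```clojure
;; A form is just data: a list holding a symbol and some numbers.
(def form '(+ 1 2 3))

;; eval compiles and runs that data structure at runtime; every such
;; call goes through the compiler, which is the slow path in question.
(eval form)
;; => 6
```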

                                                                                                                                                      1. 5

You might be surprised at what’s possible with Clojure and JVM bytecode manipulation. Meanwhile, Clojure apps are typically AOT-compiled, so you’re not evaluating code at runtime in production; they compile to the same JVM bytecode as everything else. The article you linked primarily talks about startup time, which is a separate discussion and is being addressed via GraalVM.
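For a rough picture of what that looks like in practice (the project and namespace names below are made up), a Leiningen build configured for AOT compilation might be:

```clojure
;; project.clj -- :aot tells Leiningen to compile every namespace to
;; .class files at build time, so the packaged jar ships ordinary JVM
;; bytecode rather than source to be evaluated on startup.
(defproject example-app "0.1.0"
  :dependencies [[org.clojure/clojure "1.10.1"]]
  :main example-app.core
  :aot :all)

;; src/example_app/core.clj
(ns example-app.core
  (:gen-class))  ; emit a Java class with a static main method

(defn -main [& args]
  (println "running AOT-compiled bytecode"))
```

From there, something like lein uberjar followed by java -jar runs it like any other JVM program.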

                                                                                                                                                      1. 2

                                                                                                                                                        Questions 5 and 11 both seem pretty much in line with my experience with programming languages.

I only know Python, or for that matter only had any interest in learning it, because of work. Outside of that I probably still wouldn’t know it, at least not well enough to write anything significant in it. For better or worse, Python doesn’t really “do it” for me; I know it and write it purely to get paid. Don’t get me wrong, I genuinely enjoy my job and have no regrets about learning Python, but in a perfect world I wouldn’t have picked it.

These days, it really takes something special for a language to pique my interest enough to learn it purely for the pleasure of it. Case in point: I originally wrote balistica in Vala. Looking back on it, I think that was the wrong choice, and there are several other languages I would choose instead if I had to do it all over again. After going through that, I’m much stricter about which language I devote my free time to.

Long story short: I don’t know Haskell. Is there a convincing argument for devoting my free time to learning it over, say, Lisp, Scheme, or OCaml, the three languages currently at the top of my “want to learn” list?

                                                                                                                                                        1. 5

                                                                                                                                                          If you like stretching your brain to find new tools to solve problems, Haskell is good for that.

                                                                                                                                                          You’ll get some of those same tools from lisp / scheme / ocaml, but they allow mutation where Haskell does not.

Learn what you like; each new language makes it easier to learn more! Try Haskell sometime; there are lots of cool tools to discover.

                                                                                                                                                          1. 3

I’d recommend Haskell if you’d like to learn a functional language with an advanced type system, although OCaml is also interesting in that regard. For the Lisp family, I’d also recommend considering Clojure, as it’s one of the more practical Lisps and actually sees use in industry.

                                                                                                                                                            1. 3

                                                                                                                                                              Learning Lisp or Scheme will teach you different things than learning Haskell or an ML.

Lisp and Scheme (and Clojure, etc.) tend to have dynamic, fairly permissive type systems, very simple syntax, and (because of the latter) very robust macro support. For many people, writing a Lisp program means, essentially, writing non-working pseudocode, then writing the macros to make that pseudocode work.
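To make that workflow concrete, here is a rough sketch of my own (with-retries is a made-up name; the plumbing matters less than the shape of writing the call site first and the macro second):

```clojure
;; Step 1: the "pseudocode" you want to be able to write.
;; (with-retries 3
;;   (slurp "https://example.com"))

;; Step 2: the macro that turns that pseudocode into working code.
(defmacro with-retries
  "Runs body, retrying up to n times before rethrowing the last error."
  [n & body]
  `(loop [attempts# ~n]
     (let [result# (try
                     {:ok (do ~@body)}
                     (catch Exception e#
                       (if (pos? (dec attempts#))
                         {:retry e#}
                         (throw e#))))]
       (if (contains? result# :ok)
         (:ok result#)
         (recur (dec attempts#))))))
```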

Haskell, OCaml, and SML are all strongly and statically typed. Writing programs in these languages often means figuring out data structures first, then writing functions to manipulate them. Of these, Haskell is a pure language – you only write functions that return values, never functions that manipulate global state. Haskell is also a lazy language, which is something you can get away with not thinking about until it produces a result that makes you think.

Haskell has the virtue of a surprisingly rich library ecosystem, but the language itself can also be quite complex. SML, on the other hand, has very few libraries available, but the language is simple enough to learn comprehensively quite quickly. OCaml splits the difference between those two extremes.

                                                                                                                                                              … Which is all a roundabout way of saying the only convincing argument in any direction is going to be from you, about what you’re interested in learning next.