Threads for emallson

    1. 1

      I’m really curious how implementing a NoSQL database’s interface on top of a traditional RDBMS performs. Has anyone tried benchmarking FerretDB against MongoDB?

      1. 2

        I’ve heard of projects using JSON columns in Postgres extensively, using it effectively as a NoSQL database. Apparently it’s faster too, but I’d love hard numbers over hearsay I’ve heard.

        1. 3

          I am using Postgres JSONB extensively, in practically all of the tables I think (maybe a few that I am forgetting).

          The database and production usage are not significant enough in size for me to share meaningful benchmarks.

          Here are some rules I follow (maybe this can help others):

          In the system, a table usually contains several ‘classes’ of fields:

          • there are separate fields for things like: IDs, update_time, status, row-life-cycle

          • a JSONB field for access control (ac), storing things like the row’s owners [a list of user IDs], legal jurisdiction(s) if necessary, and a few other things. These generally help with zero-trust access control, by enabling our PEPs (Policy Enforcement Points) to do row-level filtering of data efficiently.

          • A row can belong to ‘operation data’ or to ‘model data’. A row in the operation data category usually contains separate JSONB fields for each of the ‘entities’ that make up the ‘business relation’ the row represents. Operation rows also always have fields to enable sharding, and a row-lifecycle indicator (a row can be active | archived | deleted). The row-lifecycle indicator lets us tell the indexes to ignore ‘archived’ and ‘to-be-deleted’ rows – so that they do not pollute our indices. Sharding and row-lifecycle indicators have to be their own fields, not in JSONB (see the SQL sketch after this list).

          • Early in the design I ran Postgres’s EXPLAIN plans to make sure I could see all possible ‘table scans’. If I saw a table scan I would decide whether to add a GIN [1] index on a specific nested JSONB field, cache data on the application side, or ‘duplicate/denormalize’ a JSONB field into its own column (I do not remember ever being forced to do that, though).

          • Database access is only through APIs, so generic ‘let me just fetch data any way I want’ queries are not allowed. But the query APIs are reasonably ‘composable’ (in some critical areas), so as the system grows, the APIs (and therefore database access) do not need to be ‘redesigned’ and ‘rechecked’ for performance that often.

          • No triggers are allowed in the system. Postgres’s full-text search features [2] applied to JSONB text fields are leveraged too. It works well.
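
          As a concrete illustration of the points above, here is a minimal sketch of such a hybrid ‘operation data’ table with a partial GIN index. The table and field names are invented for illustration, not our actual schema:

          -- Hypothetical hybrid table: typed columns for identity, sharding and
          -- row lifecycle, plus JSONB fields for access control and the business
          -- entities the row represents.
          CREATE TABLE orders (
              id            bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
              shard_key     int         NOT NULL,                   -- sharding: its own column, not JSONB
              row_lifecycle text        NOT NULL DEFAULT 'active',  -- 'active' | 'archived' | 'deleted'
              update_time   timestamptz NOT NULL DEFAULT now(),
              ac            jsonb       NOT NULL,                   -- owners, jurisdictions, ...
              customer      jsonb       NOT NULL,                   -- one JSONB field per entity
              product       jsonb       NOT NULL
          );

          -- Partial GIN index that only covers active rows, so archived and
          -- to-be-deleted rows do not pollute the index. It supports containment
          -- filters used by the PEP, e.g. WHERE ac @> '{"owners": [42]}'.
          CREATE INDEX orders_ac_active_idx
              ON orders USING gin (ac)
              WHERE row_lifecycle = 'active';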


          All in all, the JSONB feature and GIN indexes seem to be very mature by now, so I do not feel ‘unequipped’ compared to a document-oriented database.
          I think a hybrid approach (where one table uses both JSONB and typical fields) is liberating, and efficient.

          [1] https://pganalyze.com/blog/gin-index
          [2] https://www.postgresql.org/docs/12/functions-textsearch.html

          1. 1

            I don’t have hard numbers but we have one of these at work. And by “one of these” i mean “most data is thrown into a single jsonb column on a single table.” Would not recommend that structure under any circumstances, but Postgres’ JSONB columns have generally been surprisingly good.

            However, good indices become important much faster than with a traditional table, and you end up needing indices on derived fields (e.g. an index on cast(data->>'foo' as date) for date queries) much more frequently. Postgres has an index type that lets you quickly query for all rows that have a key present, so we end up (ab)using that a lot to filter result sets without needing a special index for every query.
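
            For illustration, the two kinds of indices look roughly like this (hypothetical table and key names; just a sketch, not our actual schema). One gotcha: the bare text-to-date cast is only STABLE in Postgres, so the expression usually has to be wrapped in a function declared IMMUTABLE before it can be indexed:

            -- Expression index on a derived date field.
            CREATE FUNCTION data_date(t text) RETURNS date
                LANGUAGE sql IMMUTABLE AS $$ SELECT t::date $$;

            CREATE INDEX events_created_idx ON events (data_date(data->>'created_at'));

            SELECT * FROM events
             WHERE data_date(data->>'created_at') >= date '2021-01-01';

            -- A single GIN index on the whole column supports the key-existence
            -- operator (?), so result sets can be pre-filtered by key presence
            -- without a dedicated index for every query.
            CREATE INDEX events_data_gin ON events USING gin (data);

            SELECT * FROM events WHERE data ? 'refund_id';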

        1. 23

          LtU does have some clever people. The majority of the predictions seem to be really good / close to the truth. The most amusing/true ones I found were:

          Debates that both sides will lose because the debate will be made irrelevant by a third option: Emacs vs Vim

          We got vscode. I know that’s just one specific case and they weren’t really made irrelevant, but vscode really did take the dev world by storm in comparison. (with people using vi or emacs keybindings inside it)

          I think that we know more or less how the hardware will evolve: several (4-16) heterogeneous(not all with the same performance, some with specialized function unit) CPUs (due to the power wall).

          I take it as 4-16 in total, not 4-16 types. Right on with the recent big-little chips.

          Things from academia that will (start to go in the direction of) mainstream by 2020: Functional reactive programming

          Almost all the new web frameworks and a lot of declarative desktop UI draw strongly from FRP.

          In 20 years the prevalent market will be the mobile platforms. The primary language used for development will be JavaScript or some variant, with some of the following characteristics:

          Pretty close if we consider that most websites target mobile now. And just perfect regarding “A misguided spin on lexical scope”.

          Therefore, programming will take on a much more “organic” feel: programming by example, programming with help of machine learning

          Only a year off for the copilot prediction.

          1. 11

            I don’t think any vi(m) or emacs user is likely to switch to vscode. Vscode is just the latest in the line of sublime/atom/etc.

            1. 2

              It’s much more than just the next sublime according to SO developer surveys. Vscode took some users from virtually every other option. Between 2016 - 2021: (multiple options allowed so there’s overlap)

              Vim fell from 26% to 24%

              Vscode rose from 7.2% to 71%

              Sublime fell from 31% to 20%

              1. 9

                Direct links to the data:

                Given that neovim went from not on the list at all to 5%, I don’t think this supports the idea that meaningful numbers of vim users are switching to vscode. Emacs is virtually unchanged, from 5.2% to 5.3%

              2. 1

                I did (for a while), using the VSCode neovim extension (which literally runs neovim as the backend for editor commands while still getting VSCode’s GUI + language extensions). Ended up switching back to emacs when I got a new job because CIDER is still the best Clojure tool imho.

              3. 6

                Debates that both sides will lose because the debate will be made irrelevant by a third option: Emacs vs Vim

                We got vscode. I know that’s just one specific case and they weren’t really made irrelevant, but vscode really did take the dev world by storm in comparison. (with people using vi or emacs keybindings inside it)

                As someone who doesn’t use any of these 3 editors, I don’t think this quite matches up to the prediction. VSCode is a good editor, and it surprised people and rapidly gained dominance for many, but it did not “render emacs vs vim debate irrelevant”. That fight is still raging, whether it’s for the editors themselves or keybindings inside other editors. If the prediction was something like “a new editor will gain dominance over emacs vs vim” maybe this would count but let’s be honest – most popular editors already did this. Neither Emacs nor Vim have dominance anywhere; the prediction here is merely about the Holy War between the editors. VSCode did not solve this; no editor ever will lol.

                1. 8

                  I’d argue that the distributions of emacs-as-vim and, to a lesser extent, vim-as-emacs (spacevim), as well as emulation in VS Code or JetBrains IDEs, are what made the debate w.r.t. UX irrelevant.

                  The “holy war” debate between vim and emacs was made irrelevant by a bigger holy war (one which both vim and emacs take the same side of), which is the fight for free software and “non-corporate” software. From that POV, it was VS Code that ended the debate.

                2. 2

                  I can’t imagine a GUI code editor ever making a difference in the Vim/Emacs world. Vim and Emacs are fundamentally different from something like VScode because they are inside a terminal. Some people either prefer to or have to use a terminal for a variety of reasons and I don’t think any level of technological advancement would change most of those reasons (probably ¯_(ツ)_/¯ ).

                  1. 3

                    While I almost exclusively use Emacs in a terminal it is, in fact, a first-class citizen of GUIs (albeit with rather weird habits).

                    1. 1

                      Yeah, one of the nice things about emacs vs vim is that there’s a GUI (so stuff supports it) and there’s only one GUI (so you don’t have to worry about feature differences).

                    2. 2

                      If you look at NeoVim, there are lots of attempts at creating a GUI for it, e.g.:

                      https://www.onivim.io/

                      This suggests that there are a lot of people who want to use NeoVim as a GUI tool, not just in a terminal. As far as I’m concerned, however, I value the terminal side of NeoVim, since I can run it on a Windows machine through SSH and start compilation without having to use RDP or Windows on my local machine.

                      1. 1

                        Definitely! People want to put Vim and Emacs (mostly Vim) inside any of their existing GUIs or create GUIs just for those tools.

                        But, as you said, the Terminal has special uses that aren’t replaceable and so GUIs like VScode will never directly compete with terminal editors.

                        1. 1

                          Onivim is actually based on vim, not neovim. Don’t remember why.

                          1. 1

                            Thanks, I didn’t realize that. From what I can find now, OniVim was based on NeoVim, but OniVim2 switched to Vim due to build issues with NeoVim in the author’s environment.

                            https://github.com/onivim/libvim#why-is-libvim-based-on-vim-and-not-neovim

                    1. 6

                      I ran a server with btrfs on it in grad school for several years. Big regret. I chose it because we wanted the transparent compression on a couple of data directories—we weren’t even using the software raid (the server had a hardware raid controller)—but it ended up requiring a lot of attention to keep the system up and running.

                      There are two main issues I ran into:

                      1. BTRFS metadata isn’t compacted/cleaned automatically. This meant that I’d periodically need to run whatever btrfs command did that in order to keep things under control. If most of your data sticks around a while, I don’t think this would be a problem, but we ran simulations that would generate large volumes of output that would be post-processed and then deleted, leaving us with a bunch of metadata for things that no longer exist.

                      2. BTRFS performs very badly if the disk ever actually gets full. Like, cleaning up the used disk space isn’t enough to restore system performance—it’s only step 1. On several occasions we had runaway simulations fill the entire raid and had to reboot the system after cleaning up in order to get performance back to acceptable levels, even for concurrently-running simulations that weren’t IO-bound. IO just took forever to complete until a reboot, despite plentiful free space.

                      I wanted to reformat the server to ext4 before I graduated, but then COVID happened and I couldn’t go back to campus to do that, so as far as I know it’s still running BTRFS.

                      EDIT: interesting that /u/Jamietanna actually experienced that IO wait problem too (https://www.jvt.me/posts/2018/12/22/leaving-btrfs/).

                      Finally, very recently I’ve been receiving a lot of IO wait, which has been bringing my pretty high end hardware to a halt. That’s the first other mention I’ve found of this problem.

                      1. 2

                        I also had big IO wait issues on a btrfs volume. Laptop became unusable.

                      1. 26

                        You’ll be pleased to hear this concept has a name already: literate programming.

                        1. 7

                          That’s just the author’s particular take on this. I’ve seen other takes that are quite different from plain old literate programming.

                          1. 3

                            Yeah Knuth-style tangle/weave definitely shouldn’t be allowed a monopoly on the very good idea of interleaving prose and code. https://github.com/arrdem/source/tree/trunk/projects/lilith/ was my LangJam submission that went in that direction, partly inspired by my previous use of https://github.com/gdeer81/marginalia and various work frustrations at being unable to do eval() in the ReStructuredText files that constitute the majority of my recent writing.

                          2. 2

                            Technically, I think, it’d be “documentation generation” because it doesn’t involve any tangle or weave steps.

                            because these tools do not implement the “web of abstract concepts” hiding behind the system of natural-language macros, or provide an ability to change the order of the source code from a machine-imposed sequence to one convenient to the human mind, they cannot properly be called literate programming tools in the sense intended by Knuth.

                            [my emphasis]

                            1. 3

                              My view of this is:

                              A lot of Literate Programming systems are based on the idea of automatically copy-pasting pieces of code around. I think this is a terrible idea for many of the same reasons why building a program entirely out of C macros is a terrible idea. However, it’s a necessary evil due to the limitations of C; everything has to be in a particular order imposed by the C compiler. If you want your prose to describe an implementation of a function first, and then describe the struct which that function uses later on, the only solution in C is to do macro-like copy/paste to make the generated C code contain the struct definition before the function even though it’s implemented after the function.

                              Many modern languages don’t have this limitation. Many languages let everything refer to everything else regardless of the order they appear in the file. Thanks to this, I think we can generally treat the function as the unit of code in literate programming systems, and we don’t need elaborate automatic copy/paste systems. As a bonus, all the interactions between code snippets follow simple, well-understood semantics, and you don’t have the common issues with invisible action at a distance you see in many literate programming systems.

                              That’s the basis for our submission at least, where we made a literate programming system (with LaTeX as the outer language) where all the interaction between different code snippets happens through function calls, not macro expansions.

                            2. 1

                              Literate programming was the inspiration for my team’s submission: https://github.com/mortie/lafun-language

                            1. 3

                              Imba’s groundbreaking memoized DOM is an order of magnitude faster than virtual DOM libraries

                              This is an odd statement when the landing page causes very noticeable frame rate drops on my phone (running the latest Firefox mobile).

                              Also funny that I apparently commented on the indentation-based syntax 5 years ago here on lobsters. These days I’m willing to be a little bit more charitable, but I still cannot imagine using indentation-based JSX.

                              1. 18

                                Does anyone else see this as a sign that the languages we use are not expressive enough? The fact that you need an AI to help automate boilerplate points to a failure in the adoption of powerful enough macro systems to eliminate the boilerplate.

                                1. 1

                                  Why should that system be based upon macros and not an AI?

                                  1. 13

                                    Because you want deterministic and predictable output. An AI is ever evolving and therefore might give different outputs for a given input over time. Also, I realise that this is becoming an increasingly unpopular opinion, but not sending everything you’re doing to a third party to snoop on you seems like a good idea to me.

                                    1. 3

                                      Because you want deterministic and predictable output. An AI is ever evolving and therefore might give different outputs for given input over time.

                                      Deep learning models don’t change their weights if you don’t purposefully update it. I can foresee an implementation where weights are kept static or updated on a given cadence. That said, I understand that for a language macro system that you would probably want something more explainable than a deep learning model.

                                      Also, I realise that this is becoming an increasingly unpopular opinion, but not sending everything you’re doing to a third party to snoop on you seems like a good idea to me.

                                      There is nothing unpopular about that opinion on this site and most tech sites on the internet. I’m pretty sure a full third of posts here are about third party surveillance.

                                      1. 2

                                        Deep learning models don’t change their weights if you don’t purposefully update it.

                                        If you’re sending data to their servers for copilot to process (my impression is that you are, but i’m not in the alpha and haven’t seen anything concrete on it), then you have no control over whether the weights change.

                                        1. 2

                                          Deep learning models don’t change their weights if you don’t purposefully update it.

                                          Given the high rate of commits on GitHub across all repos, it’s likely that they’ll be updating the model a lot (probably at least once a day). Otherwise, all that new code isn’t going to be taken into account by copilot and it’s effectively operating on an old snapshot of GitHub.

                                          There is nothing unpopular about that opinion on this site and most tech sites on the internet. I’m pretty sure a full third of posts here are about third party surveillance.

                                          As far as I can tell, the majority of people (even tech people) are still using software that snoops on them. Just look at the popularity of, for example, VSCode, Apple and Google products.

                                      2. 2

                                        I wouldn’t have an issue with using a perfect boilerplate-generating AI (well, beyond the lack of brevity); I was more commenting on the fact that this had to be developed at all and how it reflects on the state of coding.

                                        1. 1

                                          Indeed it’s certainly good food for thought.

                                        2. 1

                                          Because programmers are still going to have to program, but instead of being able to deterministically produce the results they want, they’ll have to do some fuzzy NLP incantation to get what they want.

                                        3. 1

                                          I don’t agree on the macro systems point, but otherwise I see it the same way. As a recent student of BQN, I don’t see any use for a tool like this in APL-like languages. What, and from what, would you generate, when every character carries significant meaning?

                                          1. 1

                                            I think it’s true. The whole point of programming is abstracting away as many details as you can, so that every word you write is meaningful. That would mean that it’s something that the compiler wouldn’t be able to guess on its own, without itself understanding the problem and domain you’re trying to solve.

                                            At the same time, I can’t deny that a large part of “programming” doesn’t work that way. Many frameworks require long repetitive boilerplate. Often types have to be specified again and again. Decorators are still considered a novel feature.

                                            It’s sad, but at least, I think it means good programmers will have job security for a long time.

                                            1. 1

                                              I firmly disagree. Programming, at least as evolved from computer science, is about describing what you want using primitive operations that the computer can execute. For as long as you’re writing from this direction, code-generating tools will be useful.

                                              On the other hand, programming as evolved from mathematics and programming language theory fits much closer to your definition, defining what you want to do without stating how it should be done. It is the job of the compiler to generate the boilerplate after all.

                                              1. 1

                                                We both agree that we should use the computer to generate code. But I want that generation to be automatic, and never involve me (unless I’m the toolmaker), rather than something that I have to do by hand.

                                                I don’t think of it as “writing math”. We are writing in a language in order to communicate. We do the same thing when we speak English to each other. The difference is that it’s a very different sort of language, and unfortunately it’s much more primitive, by the nature of the cognition of the listener. But if we can improve its cognition to understand a richer language, it will do software nothing but good.

                                          1. 4
                                            1. 6

                                              Well, they’re not entirely truthful either – Clojure for instance has solved this issue:

                                              (+ 1/10 2/10) ;; => 3/10
                                              (+ 0.1M 0.2M);; => 0.3M
                                              

                                              I get the point of the post, but it seems a tad awkward to point out the failings of languages that have solved this and don’t need a custom implementation of ratios…

                                              1. 10

                                                And Clojure solves it because it tries to follow in the tradition of older schemes/Lisps. I’ve ranted more than once to my colleagues that numbers in mainstream “app-level” (anything that’s not C/C++/Zig/Rust/etc) programming languages are utterly insane.

                                                <soap-box>

                                                Look, yeah- if you’re writing a system binary in C, or developing a physics simulation, or running some scientific number crunching- then you probably want to know how many bytes of memory your numbers will take up. And you should know if/when to use floats and the foot-guns they come with. (Even, then, though- why the HELL do most languages silently overflow on arithmetic instead of exploding?! I don’t want my simulation data to be silently corrupted.)

                                                But for just about everything else, the programmer just wants the numbers to do actual number things. I shouldn’t have to guess that the number of files in a directory will never go above some arbitrary number that happens to fit in 4 bytes. I shouldn’t have to remember that you can’t compare floats because I had the audacity to try to compute the average of something.

                                                We have this mantra for the last decade or so that “performance doesn’t matter”, “memory is cheap”, “storage is cheap”, “computers are fast”, etc, etc, yet our programming languages still ask us to commit to a number variable taking up an exact number of bytes? Meanwhile, it’s running a garbage collector thread, heap allocates everything, fragments memory, etc. Does anyone else think this is insane? You’re gonna heap allocate and pointer-chase all day, but you can’t grow my variable’s memory footprint when it gets too large for 2,4,8 bytes? You’re gonna lose precision for my third-grade arithmetic operations because you really need that extra performance? I don’t know about that…

                                                </soap-box>

                                                1. 3

                                                  Even, then, though- why the HELL do most languages silently overflow on arithmetic instead of exploding?! I don’t want my simulation data to be silently corrupted.

                                                  Oh man you just dredged up some bad memories. I was working on modifying another grad student’s C++ simulation code, and the performance we were seeing was shocking. Too good, way too good.

                                                  Turns out that they’d made some very specific assumptions that weren’t met by the changes I made so some numbers overflowed and triggered the stopping condition far too early.

                                                  1. 1

                                                    (Even, then, though- why the HELL do most languages silently overflow on arithmetic instead of exploding?! I don’t want my simulation data to be silently corrupted.)

                                                    In an alternate universe:

                                                    (Even, then, though- why the HELL do most languages insert all these bounds checks on arithmetic that slow everything down?! I know my simulation isn’t going to get anywhere near the limits of floating point.)

                                                    1. 1

                                                      Sure. But isn’t the obvious solution for this to be a compiler flag?

                                                      Less obvious is what the default should be, but I’d still advocate for safety as the default. Sure, you’re not likely to wrap around a Double or whatever, but I’m thinking more about integer-like types like Shorts (“Well, when I wrote it, there was no way to have that many widgets at a time!”).

                                                  2. 4

                                                    same thing with Ruby

                                                    $ irb
                                                    irb(main):001:0> 0.1 + 0.2
                                                    => 0.30000000000000004
                                                    irb(main):002:0> 0.1r + 0.2r
                                                    => (3/10)
                                                    

                                                    and i’m pretty sure that’s the case with Haskell too

                                                    it might be a fair criticism to question why the simplest and most obvious syntax (i.e., no suffix) doesn’t default to arbitrary-precision rationals, as is the case with integers in languages like Ruby, Haskell, etc.

                                                1. 18

                                                  A quick question: is Chrome better than the rest?

                                                  I use Firefox desktop and Duck / Safari mobile as primary browsers, and I’m completely satisfied by the experience. Am I missing out something here with Chrome?

                                                  Lots of articles about ditching Chrome / time to move to Firefox … but people seem to hesitate. That tells me something holds them to Chrome, and I can’t imagine what that thing is.

                                                  1. 10

                                                    A quick question: is Chrome better than the rest?

                                                    They don’t support vertical tabs at all.

                                                    Performance is about the same (slightly better but I’ve never noticed except maybe on Google properties).

                                                    Uses more RAM.

                                                    No, not better at all in my book.

                                                    1. 4

                                                      They don’t support vertical tabs at all.

                                                      I’m not sure what you mean by “they”. I’ve been using Tree Style Tab on Firefox for as long as I can remember.

                                                      1. 7

                                                        “They” is Chrome, not Firefox.

                                                        1. 1

                                                          … and it’s absolute garbage without hacking userChrome.css.

                                                      2. 6

                                                        Out of principle (re: reducing the monopoly), I am trying to switch to Firefox. (I’ve done so on one of my daily drivers, but not both.) To answer your question, though, there is at least one feature where Chrome is unequivocally better than Firefox: the UX for multiple profiles/personas.

                                                        In Chrome/ium, the entry point for profiles is a single icon/click in the main toolbar. Switching profiles is another single click. So, with two clicks, it will either create a new window “running as” that profile, or it will switch to an existing window of that profile. The window acts as a container, so any new tabs (and even “New Window”s) will be for that profile. Everything about the experience makes sense, and is just about the simplest, most straight forward UX design that one could conceive.

                                                        Contrast the above with Firefox’s profile UX. Profiles available in main toolbar? Nope. Open hamburger menu – profiles in there? Nope. How about in the Preferences UI? Nope. So where the heck is it? Well, there are CLI switches available (try not to laugh). -P/--profile to use a certain profile, or --ProfileManager to bring up a GUI widget on startup to pick a profile. Okay, so of course normal users will not use CLI switches. So what do they do? Well, there’s an about:profiles page available. So you have to type that (there’s autocomplete at least, to save you a few keystrokes), and then you click on a button on that page to launch a new Firefox window for that profile. And then there is no visual indication in Firefox as to which profile you’re currently using, unless you’ve themed each profile, etc. In Chrome, you assign avatars to profiles, and the current profile’s avatar is displayed in the main bar.

                                                        There’s this feature of Firefox called containers, or container tabs, or multi-account containers. Or something. They.. sort of do the job, in a clunky way. Tabs get a different underline colour based on the container, and different containers have different cookie sets, etc. However, this falls short in a couple ways. In Firefox, preference settings are shared among containers, whereas in Chrome, each profile has independent settings. Also, new tabs don’t (always?) take on the container of the previous/parent tab, so you have to manually set the container of a tab, sometimes.

                                                        Anyway, enough said. The UX for multiple personas is astronomically better in Chrome. It’s not even close.

                                                        1. 1

                                                          Late but:

                                                          Containers are what you’re looking for.

                                                          It is right in the address bar.

                                                          It can even automatically change for sites that obviously belong to one container.

                                                          1. 2

                                                            I tried containers in Firefox. They go maybe 70% of the way towards what I need. It’s a nice try, but not good enough [for me].

                                                            Anyway, nowadays, I use multiple local Linux users to sandbox things with Firefox. The main security concern there is sharing the X display.

                                                        2. 9

                                                          Inertia, ignorance and indolence come to mind.

                                                          1. 6

                                                            That’s the thing I keep coming back to when I see articles like this. It’s not like anything has changed.

                                                            If you’ve gone for like a decade using a browser created by a monopolistic advertising company and after all those years you never saw the problem with it, does anyone really think reading some Wired article is going to finally be the thing that makes you come to your senses?

                                                          2. 4

                                                            Performance can be an issue, as discussed previously.

                                                            Also I think a lot of front-end devs prefer the developer tools from what I’ve heard (although for me, the Firefox ones work fine and have for years, but I’m not a FE dev and I don’t know if there are any concrete benefits here or if it’s just a matter of preference).

                                                            1. 3

                                                              The Firefox dev tools are actually quite nice…except that they become an enormous memory hog and performance black hole if you dump a bunch of JSON into the log. I’ve had the devtools crash Firefox because of a day’s worth of redux debug logs. Never had that issue with Chrome.

                                                            2. 4

                                                              Chrome (and also safari) has a visibly lower latency in rendering the page. It doesn’t really matter at all if you think about it, but makes the feel of the browser quite different.

                                                              1. 4

                                                                Bugs. Bugs in firefox. Lots of them. Especially annoying while developing.

                                                                1. 2

                                                                  IME Chrome can sometimes perform faster than FF. I’ve really only noticed it when looking at sites with heavy CSS/JS-based animations. Also, the FF dev tools seem to get bogged down more often than Chrome’s. Also, for a while FF performance on macOS was much worse than Chrome’s (I forget the details but this was a known issue that may (?) be fixed by now).

                                                                  1. 3

                                                                    I wonder if, several years in the future, we’re going to see a big change there like with the arrival of (now) macOS in the early 2000s. What I’m hinting at is the fact that the problem here is not browsers, but JS-heavy (and whatnot) websites that browsers then try to accommodate, just like Windows went out of its way to keep backwards compatibility and hide application idiocies. Then came Apple with their “we don’t care about backwards compatibility, this is what you can use”.

                                                                    1. 1

                                                                      I’m skeptical. Long-term, I think browsers will take over native applications as the default app distribution + runtime environment, as browser vendors add more and more native/low-level APIs to the web platform. Then again, I’m not the first person to predict this so who knows.

                                                                      Maybe some day plain HTML/CSS will become a second-class citizen (or people will get used to using other programs to browse the ‘old web’).

                                                                      1. 3

                                                                        Or you’ll have to download a “web-browser” app in your web browser to view actual HTML content, which to be frank, is already happening, given the number of blogging sites that break completely if Javascript is disabled.

                                                                1. 1

                                                                  Unfortunately, there’s no way to influence [optimizations and compilation time] by turning them off or tuning somehow

                                                                  (declaim (optimize (speed 0)))?

                                                                  1. 2

                                                                    I do believe they’re referring to the ability to turn off specific optimizations. Obviously disabling optimizations will do that, but is not suitable for production.

                                                                  1. 15

                                                                    It looks like megacorps are starting to take Bitcoin seriously. What happened to corporate social responsibility? Oh that’s right. It only applies when it doesn’t affect the bottom line.

                                                                    In the immortal words of Pink Floyd, “ha ha, charade you are”.

                                                                    1. 4

                                                                      I would not assume they’re doing this to make money. In large organizations individual incentives are often quite divorced from making money for the organization. Instead, incentives might be “creating a splashy product will get me promoted” or “everyone is doing this, if it happens to turn out to be a big thing I’ll look stupid if I didn’t have a project in this area”.

                                                                      1. 10

                                                                        I’m frankly worried by an uptick in bitcoin adoption by well-known companies and “nerd-celebrities” over the last several months. Here is a selection of links.

                                                                        Last but not least, we have the height of hypocrisy: people can buy a Tesla with Bitcoin. (@skyfaller already called Tesla out upthread).

                                                                        Herd instinct appears to be taking its course.

                                                                        1. 2

                                                                          I’ve only read a small amount about Microsoft’s incentives here. But according to product lead Daniel Buchner (https://github.com/csuwildcat), Microsoft gave him this opportunity after years of toiling away on standards and working at Mozilla. So someone at Microsoft with some influence really pursued the talent and the money to put this together.

                                                                        2. 4

                                                                          social responsibility

                                                                          There is a social benefit to decentralized technology (of which blockchain is one implementation mechanism) as well, which is mainly to do with circumventing centralized censorship and thereby enabling various subcultures to co-exist on the internet (as it used to be before Big Tech began controlling narratives) without compromising on localized moderation[1] of them.

                                                                          [1] cf. ‘decentralized moderation’, eg: https://matrix.org/blog/2020/10/19/combating-abuse-in-matrix-without-backdoors

                                                                          1. 5

                                                                            If they wanted a decentralized system, they could have used one that wasn’t so egregiously wasteful, or invested in bringing more efficient options like proof-of-stake to fruition instead of latching onto bitcoin.

                                                                            1. 1

                                                                              Yeah, I’m not sure what’s going on here. From their 2020 docs:

                                                                              Currently, we’re developing support for the following ledgers: Bitcoin, Ethereum, via uPort, Sovrin

                                                                              Our intention is to be chain agnostic, enabling users to choose a DID variant that runs on their preferred ledger.

                                                                              I’ve attempted to play with the API here, but it seems like it has been deprecated. At some point they must have decided to go all in on Bitcoin. Maybe they’re also going to uphold the promise to develop on other ledgers next.

                                                                          2. 2

                                                                            Well sure - it’s right there in the articles of incorporation. For better or for worse, social responsibility isn’t part of the material of operating a business.

                                                                            It’s interesting that Microsoft sees a place to profit here.

                                                                            1. 1

                                                                              It’s never been a genuine thing, and can’t really be.

                                                                            1. 2

                                                                              Are you using vim, but in emacs? Like spawning a subshell in emacs to run vim? Could you elaborate a bit more on that, as you gloss over it in the first paragraph?

                                                                              1. 5

                                                                                Very likely that they’re using evil-mode, possibly via one of the several distributions (e.g. spacemacs, doom emacs) that integrates it deeply.

                                                                                1. 1

                                                                                  Yes, I’m using evil-mode with spacemacs on Linux. It ruins you.

                                                                                  1. 1

                                                                                    Out of curiosity, Why do you use vim via emacs and not “normal” vim?

                                                                                    1. 1

                                                                                      org-mode and emacsclient mostly.

                                                                                      1. 1

                                                                                        Besides org-mode, which cadey already mentioned, magit is also a killer app. Even when I am developing in CLion, I have an emacs session just for Magit.

                                                                                1. 2

                                                                                  This is one thing that annoys me from time to time in Rust. I believe that Rust libraries tend to be better about using Result instead of Option (or in addition to—I see plenty of Result<Option<T>, E>) for errors (partly because ? doesn’t work on Option, at least on stable rust), but it is immensely annoying to try to suss out why library code is giving None for an error.

                                                                                  1. 2

                                                                                    I started using Colemak before actually starting to use Vim, and when I switched to (neo)vim and started learning that, I rebound the keys in an…interesting way: nest. On QWERTY, that’d be jkdf—so it is still on the home row but split across both hands.

                                                                                    st are up/down, and are easily usable left-handed to browse. When I used vimium, I liked that because it meant I could left-hand scroll while still using the mouse with my right hand. Now it’s just habit.

                                                                                    ne are right/left (in that order—i.e. they’re inverted: the leftward key moves right). I don’t know why I inverted them. Maybe because M-n for “next window” in xmonad was an easy mnemonic. Maybe just because it felt right at the time. I don’t really use these keys when editing text, but my xmonad keybindings are the same for 2d window navigation and they do get used there.

                                                                                    1. 8

                                                                                      I think the author of this post is correct in surmising that the proliferation of feature-rich, graphical editors such as Visual Studio Code, Atom, and Sublime Text have a direct correlation to the downturn of Emacs usage in recent years. This might seem a little simplistic, but I think the primary reason for most people not even considering Emacs as their editor comes from the fact that the aforementioned programs are customizable using a language that they are already familiar with, either JS or Python. Choosing between the top two interpreted languages for your editor’s scripting system is going to attract more people than choosing a dialect of Lisp. The fact that Emacs Lisp is one of the most widely-used Lisp dialects tells you something about how popular Lisp is for normal day-to-day programming. It’s not something that most are familiar with, so the learning curve to configuring Emacs is high. Meanwhile, VS Code and Atom let you configure the program with JSON and JavaScript, which is something I believe most developers in the world are familiar with at least on a surface level. If you can get the same features from an editor that is written in a familiar language, why would you choose an editor that requires you to learn something entirely different?

                                                                                      I use Emacs, but only for Org-Mode, and I can tell you with experience that editing the configs takes a bit of getting used to. I mostly use Vim and haven’t really compared it to Emacs here because I don’t feel like the two are easily comparable. Although Vim’s esoteric “VimL” scripting language suffers from the same problems as Emacs, the fact that it can be started up and used with relatively minimal configuration means that a lot of users won’t ever have to write a line of VimL in their lives.

                                                                                      1. 14

                                                                                        I might be mistaken, but I don’t think that most users of “feature-rich, graphical editors” customize their editor using “JS or Python”, or at least not in the same way as one would customize Emacs. Emacs is changed by being programmed; your init.el or .emacs is an elisp program that initializes the system (setting the customize-system aside). From what I’ve seen of Atom, VS Code and the like, you have JSON and perhaps a prettier interface. An Emacs user should be encouraged to write their own commands, that’s why the *scratch* buffer is created. It might just be the audience, but I don’t hear of VS Code users writing their own javascript commands to program their environment.

                                                                                        It’s unusual from outside, I guess. And it’s a confusion that’s reflected in the choice of words. People say “Emacs has a lot of plugins”, as that’s what they are used to from other editors. Eclipse, Atom, etc. offer an interface to extend the “core”. The difference is reflected in the sharp divide between users and (plugin) developers. Compare that to Emacs, where you “customize” by extending the environment. For that reason the difference between “users” and “developers” is more of a gradient, or that’s at least how I see it. And ultimately, Lisp plays a big part in this.

                                                                                        It was through Emacs that I learned to value Free Software, not as in “someone can inspect the code” or “developers can fork it”, but as in “I can control my user environment”, even with its warts. Maybe it’s not too popular, or maybe there are just more easy alternatives nowadays, but I know that I won’t compromise on this. That’s also probably why we’re dying :/

                                                                                        1. 13

                                                                                          Good defaults help. People like to tweak, but they don’t want to tweak to even get started. There’s also how daunting it can appear. I know with Vim I can get started on any system, and my preferred set of tweaks is less than five lines of simple config statements (Well, Vim is terse and baroque, but it’s basically just setting variables, not anything fancy.). Emacs, there’s a lot to deal with, and a lot has to be done by basically monkey-patching - not very friendly to start with when all you want is, say, “keep dired from opening multiple buffers”.

                                                                                          Also, elisp isn’t even a very good Lisp, so even the people who’d be more in-tune with it could be turned off.

                                                                                          1. 3

                                                                                            Also, elisp isn’t even a very good Lisp, so even the people who’d be more in-tune with it could be turned off.

                                                                                            I agree on the defaults (not that I find vanilla Emacs unusable, either), but I don’t really agree with this. It seems to be a common meme that Elisp is a “bad lisp”, which I guess is not wrong when compared to some Scheme and CL implementations (insofar as one understands “bad” as “not as good as”). But it’s still a very enjoyable language, and perhaps it’s just me, but I have a lot more fun working with Elisp than with Python, Haskell or whatever. For all its deficiencies it has the strong point of being extremely well integrated into Emacs – because the entire thing is built on top of it.

                                                                                            1. 1

                                                                                              I also have a lot more fun working with Elisp than most other languages, but I think in a lot of regards it really does fail. Startup being significantly slower than I feel that it could or should be is my personal biggest gripe. These days, people like to talk about Lisp as a functional language, and I know that rms doesn’t subscribe to that but the fact that by default I’m blocked from writing recursive functions is quite frustrating.

                                                                                          2. 3

                                                                                            It’s true, emacs offers a lot more power, but it requires a time investment in order to really make use of it. Compare that with an editor or IDE where you can get a comfortable environment with just a few clicks. Judging by the popularity of macOS vs Linux for desktop/workstation use, I would imagine the same can be said for editors. Most people want something that “just works” because they’re busy with other problems during the course of their day. These same people probably aren’t all that interested in learning the Emacs philosophy and getting to work within a Lisp Machine, but there are definitely a good amount of people who are. I don’t think Emacs is going anywhere, but it’s certainly not the best choice for most people anymore.

                                                                                            1. 8

                                                                                              Most people want something that “just works” because they’re busy with other problems during the course of their day.

                                                                                              This has been my experience. I learned to use Vim when I was in school and had lots of free time to goof around with stuff. I could just as easily have ended up using Emacs, I chose Vim more or less at random.

                                                                                              But these days I don’t even use Vim for programming (I still use Vimwiki for notes) because I simply don’t have time to mess around with my editor or remember what keyboard shortcuts the Python plugin uses versus the Rust plugin, or whatever. I use JetBrains IDEs with the Vim key bindings plugin, and that’s pretty much all the customization I do. Plus JB syncs my plugins and settings across different IDEs and even different machines, with no effort on my part.

                                                                                              So, in some sense, I “sold out” and I certainly sacrificed some freedom. But it was a calculated and conscious trade-off because I have work to do and (very) finite time in which to do it.

                                                                                              1. 7

                                                                                                I can’t find it now, but someone notes something along those lines in the thread, saying that Emacs doesn’t offer “instant gratifications”, but requires effort to get into. And at some point it’s just a philosophical discussion on what is better. I, who has invested the time and effort, certainly think it is worth it, and believe that it’s the case for many others too.

                                                                                                1. 3

                                                                                                  IDEs are actually quite complicated and come with their own sets of quirks that people have to learn. I was very comfortable with VS Code because I’ve been using various Microsoft IDE’s through the years, and the UI concepts have been quite consistent among them. But a new user still needs to internalize the project view, the editing view, the properties view, and the runtime view, just as I as a new user of Emacs had to internalize its mechanisms almost 30 years ago.

                                                                                                  It’s “easier” now because of the proliferation of guides and tutorials, and also because GUI interfaces are probably inherently more explorable than console ones. That said, don’t underestimate the power of M-x apropos when trying to find some functionality in Emacs…

                                                                                                2. 3

                                                                                                  Yeah, I use plugins in every editor, text or GUI. I’ve never written a plugin in my life, nor will I. I’m trying to achieve a goal, not yak-shave a plugin along the way.

                                                                                                  1. 3

                                                                                                    I’m trying to achieve a goal, not yak-shave a plugin along the way.

                                                                                                    That’s my point. Emacs offers the possibility that extending the environment isn’t a detour but a method to achieve your goals.

                                                                                                    1. 5

                                                                                                      Writing a new major mode (or, hell, even a new minor mode) is absolutely a detour. I used emacs for the better part of a decade and did each several times.

                                                                                                      I eventually got tired of it, and just went to what had the better syntax support for my primary language (rust) at the time (vim). I already used evil so the switch was easy enough.

                                                                                                      I use VSCode with the neovim backend these days because the language server support is better (mostly: viewing docstrings from RLS is nicer than from a panel in neovim), and getting set up for a new language is easier than vim/emacs.

                                                                                                      1. 1

                                                                                                        It’s not too surprising to me that the line for what counts as a detour gets drawn somewhere between automating a task by writing a command and starting an entirely new project. But even then, I don’t think it’s that clear-cut. One might start by writing a few commands and then bundle them together into a minor mode; that’s little more than creating a keymap and writing a bare-minimum define-minor-mode.

                                                                                                        In general, it’s just like any automation, imo. It can help you in the long term, but it can get out of hand.

                                                                                                  2. 2

                                                                                                    Although I tend to use Vim, I actually have configured Atom with custom JS and CSS when I’ve used it (it’s not just JSON; you can easily write your own JS that runs in the same process space as the rest of the editor, similar to Elisp and Emacs). I don’t think the divide is as sharp as you might think; I think that Emacs users are more likely to want to configure their editors heavily than Atom or VSCode users (because, after all, Elisp configuration is really the main draw of Emacs — without Elisp, Emacs would just be an arcane, needlessly difficult to use text editor); since Atom and VSCode are also just plain old easy-to-use text editors out of the box, with easy built-in package management, many Atom/VSCode users don’t find the need to write much code, especially at first.

                                                                                                    It’s quite easy to extend Atom and VSCode with JS/CSS, really. That was one of the selling points of Atom when it first launched: a modern hackable text editor. VSCode is similar, but appears to have become more popular by being more performant.

                                                                                                  3. 7

                                                                                                    but I think the primary reason for most people not even considering Emacs as their editor comes from the fact that the aforementioned programs are customizable using a language that they are already familiar with, either JS or Python

                                                                                                    I disagree.

                                                                                                      I think most people care that there is a healthy extension ecosystem that just works and is easy to tap into; they basically never want to have to create a plugin themselves. To achieve that, you need to attract the people who do create plugins, which is where your point comes in.

                                                                                                    As a thought experiment, if I’m a developer who’s been using VS Code or some such for the longest time, where it’s trivial to add support for new languages through an almost one-click extension system, what’s the push that has me looking for new editors and new pastures?

                                                                                                    I can see a few angles myself - emacs or vim are definitely snappier, for instance.

                                                                                                    EDIT: I just spotted Hashicorp are officially contributing to the Terraform VS Code extension. At this point I wonder if VS Code’s extension library essentially has critical mass.

                                                                                                    1. 3

                                                                                                      Right: VS Code and Sublime Text aren’t nearly as featureful as Emacs, and they change UIs without warning, making muscle memory a liability instead of an asset. They win on marketing and visual flash for their popularity, which Emacs currently doesn’t have, but Emacs is what you make of it, and rewards experience.

                                                                                                    1. 2

                                                                                                      I started using Debian Stable for my desktop after I unexpectedly had Arch fail to boot to X* (again) right as I was struggling to hit a major paper deadline.

                                                                                                      Previously, I’d switched from Ubuntu to Arch because it let me keep up-to-date packages without the headache of Ubuntu’s dist-upgrade (and incredibly premature use of things like pulseaudio and Unity). It worked 99% of the time, but that 1% nearly fucked me over in a big way.

                                                                                                      I’ve been running Debian Stable for two years now and have yet to ever have it fail to boot to X. During paper deadlines, this is wonderful because if I happen to need to update a library in order to make someone’s code compile, I can just do it and be confident that it won’t cost hours of time getting my system to boot up again.

                                                                                                      (* When Arch broke, it was because I had to update a library (libigraph if memory serves), which in turn necessitated updating libc, which cascaded into updates everywhere and then lo-and-behold the system couldn’t fully boot until I tracked down a change in how systemd user units worked post-update.)

                                                                                                      1. 2

                                                                                                        We do have graph editors, they’re just all proprietary and/or use some arcane format that isn’t nearly as straightforward as text encoding.

                                                                                                        1. 2

                                                                                                          Yup, I faced this problem 6 months ago.

                                                                                                          Even though they are far from being perfect, there are some FOSS graph editors.

                                                                                                              Gephi (https://github.com/gephi/gephi) is really nice. The problem I have with it is its stability. Other than that, being able to edit and analyse gigantic graphs from a single tool is great.

                                                                                                              I agree with you about the format problem. Even though GraphML is quite widely supported, getting interoperability with that format is quite hard, mostly because of poor implementations. For instance, pygraphML (https://github.com/hadim/pygraphml), the de facto standard GraphML library for Python, and Gephi’s GraphML importer are not compatible (a problem with node labels, if I remember correctly).

                                                                                                          1. 3

                                                                                                            Depending on what area you’re in, GraphML may not even be feasible. I work with a good deal of social network data, and encoding any remotely large dataset in GraphML would be insane both in terms of parsing time and disk usage.

                                                                                                            A large/medium (depending on who you ask) dataset I use for testing is nearly 50GB in ye olde edge list format, which takes about 20 minutes to read and parse. GraphML would take even longer. I use a binary format which reduces it to about 12GB and which takes 15-30 seconds to read depending on disk speed.

                                                                                                            This is fundamentally why there are so many different formats for graph/tree data: such a general structure sees usage in a huge variety of fields, and therefore there are a huge variety of requirements for what it can represent, how efficiently it needs to do so, etc. No one format can possibly meet all of these requirements.
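
                                                                                                                To make the binary-format point concrete, here is a rough sketch in Rust; the record layout (pairs of little-endian u32 node ids, 8 bytes per edge) and all names are my assumptions, not the format actually used above:

                                                                                                                  use std::fs::File;
                                                                                                                  use std::io::{BufReader, Read};

                                                                                                                  // Read a hypothetical packed edge list: fixed-width records, no text parsing.
                                                                                                                  fn read_edges(path: &str) -> std::io::Result<Vec<(u32, u32)>> {
                                                                                                                      let mut reader = BufReader::new(File::open(path)?);
                                                                                                                      let mut src = [0u8; 4];
                                                                                                                      let mut dst = [0u8; 4];
                                                                                                                      let mut edges = Vec::new();
                                                                                                                      // Failing to read `src` is treated as end of file.
                                                                                                                      while reader.read_exact(&mut src).is_ok() {
                                                                                                                          reader.read_exact(&mut dst)?;
                                                                                                                          edges.push((u32::from_le_bytes(src), u32::from_le_bytes(dst)));
                                                                                                                      }
                                                                                                                      Ok(edges)
                                                                                                                  }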

                                                                                                          2. 1

                                                                                                            Which programs do you have in mind?

                                                                                                          1. 47

                                                                                                              It’s a new language, you have to go slow. I don’t get why people think they should automatically be productive in a new language. Yes, it sucks to be slapped upside the head by the ownership model. But if you want the safety advantage (which, presumably, you do, given that you’re using Rust), that means a compiler that you have to please, and learn to think like, in order to get to executable code. If you don’t, use C.

                                                                                                            I guess I find it immensely confusing that people clamor for tools to save them from themselves and then summarily reject them when they aren’t lenient enough. It’s clear that, collectively, we believe less in good practices than good tools. After all, what is React but a way to enable lots of [junior] coders to work on parts of a page without stepping on each other?

                                                                                                            Edit: misremembered quote, removed that, thanks angersock

                                                                                                            1. 12

                                                                                                              I don’t get why people think they should automatically be productive in a new language.

                                                                                                              I agree. I am by no means a Rust expert, but I have done some small projects in Rust. The ownership problem in the last example seems trivial to solve. Either you do an early return if you have cached the entry, something like:

                                                                                                              if let Some(cached) = self.cache.get(host) {
                                                                                                                  return Ok(cached);
                                                                                                              }
                                                                                                              
                                                                                                              // Now the immutable borrow is gone.
                                                                                                              

                                                                                                              Or you use the entry function of HashMap. If you find that the entry is occupied, you use get on OccupiedEntry. Otherwise, you have your mutable handle via VacantEntry and you can use it to insert the results into the cache.
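
                                                                                                                    A minimal sketch of that entry-based version, assuming a cache of owned strings keyed by hostname; lookup is a placeholder for whatever expensive work fills the cache, and or_insert_with is the shorthand for the Occupied/Vacant match described above:

                                                                                                                      use std::collections::HashMap;

                                                                                                                      fn get_or_resolve<'a>(cache: &'a mut HashMap<String, String>, host: &str) -> &'a str {
                                                                                                                          // Return the cached value if present, otherwise compute and insert it.
                                                                                                                          cache.entry(host.to_string()).or_insert_with(|| lookup(host))
                                                                                                                      }

                                                                                                                      // Placeholder for the real lookup (an assumption, not from the original post).
                                                                                                                      fn lookup(host: &str) -> String {
                                                                                                                          format!("resolved: {}", host)
                                                                                                                      }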

                                                                                                                    I understand the author’s frustration. I have also been through the two-week ‘how do I please the borrow checker’ hell. But once you understand the rules and the usual ways to address ownership issues, it’s relatively smooth sailing. The guarantees that Rust’s ownership model provides (like no simultaneous mutable/immutable borrows) allow you to do really cool things, like the peek_mut method of BinaryHeap:

                                                                                                              https://doc.rust-lang.org/std/collections/struct.BinaryHeap.html#method.peek_mut

                                                                                                                    Basically it restores the heap property through the ‘Drop’ (destructor) trait. This is safe because it’s a mutable borrow, which blocks immutable borrows, and therefore blocks inconsistent views of the data in the window where the top of the heap has been changed but the sift-down operation has not yet been applied. The other day, I used the same trick in some code where I have a sparse vector data structure: a wrapper around BTreeMap that automatically removes vector components when their values are set to 0.
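
                                                                                                                    A small sketch of peek_mut in that spirit, assuming a plain BinaryHeap<i64>; dropping the PeekMut guard at the end of the if let is what triggers the sift-down:

                                                                                                                      use std::collections::BinaryHeap;

                                                                                                                      // Mutate the current maximum in place; the heap property may be broken
                                                                                                                      // inside the block, but Drop on the PeekMut guard restores it.
                                                                                                                      fn decay_top(heap: &mut BinaryHeap<i64>) {
                                                                                                                          if let Some(mut top) = heap.peek_mut() {
                                                                                                                              *top -= 10;
                                                                                                                          } // guard dropped here; sift-down runs if needed
                                                                                                                      }

                                                                                                                      fn main() {
                                                                                                                          let mut heap = BinaryHeap::from(vec![3, 15, 7]);
                                                                                                                          decay_top(&mut heap); // 15 becomes 5
                                                                                                                          assert_eq!(heap.peek(), Some(&7)); // 7 is now the maximum
                                                                                                                      }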

                                                                                                              1. 11

                                                                                                                      I get your point, but you are blatantly misquoting the author here. They spent multiple evenings learning Rust for their actual toy project, not an hour. The hour was spent getting 30 lines of code for a different toy example into something that looked like reasonable Rust, which didn’t then compile.

                                                                                                                Please don’t quote people out of context, especially when it is so easy to catch.

                                                                                                                1. 7

                                                                                                                  You are right, I double-checked the context and fixed my mistake. Point still stands, however.

                                                                                                                2. 4

                                                                                                                        I’m thinking about Haskell and wondering why this is, because the difference is pretty stark. I didn’t expect to ever actually learn Haskell, so when it finally seemed to be happening after months of messing with it, I was pretty overjoyed. Nobody had told me it would be easy; in fact, I had assumed it would be really difficult and I might not be able to get there. I haven’t been paying close enough attention: are they marketing Rust as something you can learn pretty easily? That’s not the sense one gets from reading blogs.

                                                                                                                  A lot of people move between Java and JavaScript or C#, and they’re just not prepared for something totally different. This guy is pretty competent though, I don’t think that’s what’s happening here.

                                                                                                                  Maybe systems people are just really impatient for progress?

                                                                                                                  Ultimately, all this negative press will work to its benefit. Programmers like to be elitist about knowing hard things (like Haskell, in years past) so if Rust develops a reputation of impenetrability, that just means in two or three years there will be a lot of young Rust programmers.

                                                                                                                  1. 23

                                                                                                                    Ultimately, all this negative press will work to its benefit.

                                                                                                                    Unfortunately, this was exactly the sort of negative press (right down to an ESR hatchet job quoted endlessly) that killed Ada in industry.

                                                                                                                    For whatever reason “I spent a whole hour writing code that in the end didn’t compile” is considered a damning indictment of a language, whereas “I spent a whole hour writing code that in the end compiled with many subtle correctness issues unaddressed that later bit me in the ass” is considered “wow so productive!”.

                                                                                                                    Our industry is still very immature.

                                                                                                                    1. 15

                                                                                                                      whereas “I spent a whole hour writing code that in the end compiled with many subtle correctness issues unaddressed that later bit me in the ass” is considered “wow so productive!”.

                                                                                                                      Incidentally, the C++ code that the OP wrote actually contains undefined behavior for violating the strict aliasing rule. (There’s a cast and dereference from char* to trie_header_info*, where trie_header_info has a stricter alignment than char.)

                                                                                                                      Actually, the Rust code from the OP (linked in comments) contains the same error, but it is annotated with unsafe. :-)
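
                                                                                                                            For illustration, a rough Rust sketch of the pattern being described; the struct name and fields are made up here (the OP’s code isn’t shown), and read_unaligned is one way to at least avoid the alignment hazard:

                                                                                                                              // Hypothetical header layout, not the OP's actual struct.
                                                                                                                              #[repr(C)]
                                                                                                                              #[derive(Clone, Copy)]
                                                                                                                              struct TrieHeaderInfo {
                                                                                                                                  node_count: u32,
                                                                                                                                  root_offset: u32,
                                                                                                                              }

                                                                                                                              fn read_header(buf: &[u8]) -> TrieHeaderInfo {
                                                                                                                                  assert!(buf.len() >= std::mem::size_of::<TrieHeaderInfo>());
                                                                                                                                  // A bare `&*(buf.as_ptr() as *const TrieHeaderInfo)` would assume the
                                                                                                                                  // buffer is suitably aligned; read_unaligned copies the bytes out instead.
                                                                                                                                  unsafe { std::ptr::read_unaligned(buf.as_ptr() as *const TrieHeaderInfo) }
                                                                                                                              }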

                                                                                                                      1. 2

                                                                                                                              ESR definitely ran into the issue described by @brinker above, but apart from that I thought he was reasonably fair about what issues he had. It did not read as particularly spiteful to me; he just found Go more productive and Rust frustrating and immature. That makes sense: Rust is younger and less mature, and Go prioritizes productivity above many other things.

                                                                                                                              I’m not sure what “killed” Ada in industry, but lots of good languages failed during that era for reasons that had nothing to do with technical superiority. Personally I find Ada kind of dull to read, and the situation with GNAT has always confused me. I thought Eiffel looked more like “Ada done right” when I was in college; now I think Ada’s stronger emphasis on types probably lent itself to reliability more directly than Eiffel’s design-by-contract stuff. But this is way out on the fringe of anything I really know about; I would greatly enjoy hearing more about Ada now.

                                                                                                                        1. 17

                                                                                                                          ESR’s post generated substantial frustration in the Rust community because a number of the factual claims or statements he makes about Rust are wrong. Members of the Rust community offered corrections for these claims, but ESR reiterated them without edit in his follow up post. Reasonable criticism based on preference or disagreement with design decisions is one thing, claiming untrue things about a language as part of an explanation of why not to use it is another thing entirely.

                                                                                                                          1. 4

                                                                                                                            Well, if it makes you feel better, the big thing I got from it was “NTPsec relies on a certain system call that isn’t a core part of Rust” which isn’t going to weigh on my mind particularly hard. I agree with you that criticism should be factual.

                                                                                                                          2. 14

                                                                                                                            ESR definitely ran into the issue described by @brinker above, but apart from that I thought he was reasonably fair about what issues he had.

                                                                                                                            That wasn’t really my impression: he was factually incorrect about a number of issues in both his initial rant and his follow up (string manipulation in particular), and he mostly seemed to be criticizing Rust for not being what he wanted rather than on its own terms (ok, Rust doesn’t have a first-class syntax for select/kqueue/epoll/IO completion ports – but that’s obviously by design, because Rust is intended to be a runtimeless C replacement, not something with a heavy-weight runtime that papers over the large semantic differences between those calls. If you went into Rust just wanting Go, then just use Go).

                                                                                                                            I’m not sure what “killed” Ada in industry but lots of good languages failed during that era for reasons that had nothing to do with technical superiority.

                                                                                                                                If I had a nickel for every time someone quoted ESR’s “hacker dictionary” entry on Ada being lol so huge and designed by committee, I’d have…well, a couple of dollars, anyways. You’ll still hear them today, and Ada is still on the small side of languages these days, and still isn’t designed by committee, while a lot of popular languages are.

                                                                                                                            It had some attention on it in the late ‘90s, but every time a discussion got going around it, you’d hear the same things: people parroting ESR with no direct experience of their own, people (generally students, which is understandable, but also working professionals who should have known better) complaining that the compiler rejected their code (and that’s obviously bad because a permissive compiler is more important than catching bugs), etc.

                                                                                                                            Just a general negative tone that kept people from trying it out, which kept candidates for jobs at a minimum, which kept employers from ever giving it serious thought.

                                                                                                                            1. 5

                                                                                                                                  I encountered Ada in the early 90s (in college). At that time, Ada was considered a large, bondage-and-discipline language where one fought the compiler all the way (hmm … much like Rust today). C had just been standardized and C++ was still years away from its first real standard, so Ada felt large and cumbersome. I liked the idea of Ada at the time, just not the implementation (I found it a bit verbose for my liking). ESR was writing about Ada around this time, and he was parroting the zeitgeist.

                                                                                                                              Compared to C++ today? It’s lightweight.

                                                                                                                              1. 4

                                                                                                                                Yeah, it’s funny the way initial perceptions sink something long after they cease to be true.

                                                                                                                                    I was having these conversations in 1998-1999, after the Ada ’95 and C++ ’98 standardizations, which made it really easy to point and say “no, actually, Ada’s a lot smaller than C++ at the moment.” But it didn’t really matter: an off-the-cuff riff on Ada as it had stood in maybe the late ’80s was the dominant impression of the language, and no amount of facts, or of pointing out that paying a bit of cost up front to satisfy the compiler is easily better than paying 10x that cost chasing bugs and CVEs, was capable of changing that.

                                                                                                                                This is more or less what I’m concerned is the unclimbable hill Rust now faces.

                                                                                                                                1. 7

                                                                                                                                  I have one Ada anecdote. When I was an undergrad, one of the CS contests in my state had a rule that you had to use C, but you could use Ada instead if you had written a compiler for it and you used that compiler. Apparently students from the military institute would occasionally show up to the contest with their own Ada compiler, and they were allowed to use it.

                                                                                                                        2. 20

                                                                                                                          In my experience, the problem is thus:

                                                                                                                          C and C++ programmers, hearing “systems language,” expect Rust to be a lot easier for them than it is. Yes, Rust is a systems language that offers the same degree of performance that they do, but it is in many respects wildly different. This difference between expectation and reality leads to a lot of confusion and frustration.

                                                                                                                          On the other hand, people coming from languages like Python and Ruby expect Rust to be hard, and often find that it is easier than they anticipated. Not easy, but easier.

                                                                                                                          1. 7

                                                                                                                              I might amend that to hearing “systems language that will make you unbelievably productive”. The common thread in many Rust complaints I’ve seen is that “after considerable investment studying the borrow checker” is relegated to a tiny footnote.

                                                                                                                            There’s quite a gap between “correct” and “provably correct” code. It’s easy to write the former, but convincing the compiler of the latter is difficult. It doesn’t really feel like progress.

                                                                                                                            1. 10

                                                                                                                              To your first point, I do think that Rust could be more up front about the complexity. For example, I think that there could be more done to encourage Rust programmers to read The Rust Programming Language book before attempting a new project. There are a number of common stumbling blocks addressed in the book, and things like changes to Rust’s website could encourage more people to read it.

                                                                                                                              On the second point, I disagree. While there are C and C++ programmers who can write safe code without the ceremony of Rust to back them up, I don’t think this is true for most programmers. That is, while the gap between “correct” and “provably correct” may be large, there is also a large gap between “looks correct but isn’t” and “actually correct” code, and Rust helps the layperson write “actually correct” code with confidence.

                                                                                                                              1. 2

                                                                                                                                This book could also be a source of the mismatch between users that didn’t have much trouble getting over the borrow checker (e.g. me) and those that did.

                                                                                                                                When I was learning rust, I had multiple tabs open with the book all the time and it helped tremendously. However, I’m also familiar with a lot of FP languages (not necessarily fluent; e.g. Haskell, Scala, Clojure, OCaml) so the type system wasn’t an additional point of confusion like it may be for those who’ve spent the majority of their time in C/C++/Java.

                                                                                                                              2. 2

                                                                                                                                By “correct” and “provably correct”, you mean “provably correct by hand” and “mechanically provably correct”, respectively, right? I have no idea how you could write code you know is correct without at least an informal proof sketch in your head that it’s indeed correct. (Or are you saying it’s easy to write correct programs by accident? I’m pretty sure that’s not true.) An informal proof might not satisfy a mechanical proof checker, but it’s a proof nevertheless.

                                                                                                                            2. 3

                                                                                                                              Maybe systems people are just really impatient for progress?

                                                                                                                              That statement sounds so weird after a lifetime of C/C++ being virtually the only game in town. I know about Ada and the like, but for non-specialized systems programming, yeesh.

                                                                                                                              1. 4

                                                                                                                                I mean, impatient to make progress on their problem, not impatient for new languages and paradigms.

                                                                                                                                1. 1

                                                                                                                                  Ah, that makes more sense.

                                                                                                                                  I dunno, webbers I run into seem to have stronger time preference than systems people.

                                                                                                                                  Systems is harder initially I think.

                                                                                                                            3. 3

                                                                                                                              The Hacker News comments made a really good point: while on the whole Rust may not be more complex than, say, C++ or Haskell, it makes you pay most of that complexity up front as the price of admission, whereas in most languages you can start off by using a small subset of the features relatively easily and have working code while you ramp yourself up to the full language.

                                                                                                                              1. 2

                                                                                                                                It’s a new language, you have to go slow.

                                                                                                                                So true. Going from Java -> Ruby took me months to get something near idiomatic code. Ruby -> Golang was much faster (about a month) but still took time. Like you, I’ve realized it takes weeks or months for these ideas to percolate through your brain and become second nature.

                                                                                                                              1. 14

                                                                                                                                As a serious vim user: This is genuinely cool! It might be too late for my fingers to ever abandon Vim, but I applaud any effort to make modal editing more learnable and ubiquitous. The object->verb ordering is probably the single biggest contribution of the modern OOP language world and it just makes sense for text editing commands too.

                                                                                                                                Some constructive criticism: My visceral reaction to seeing Clippy is so bad that it almost makes me not want to read anything else on your pages or watch your videos. You may consider avoiding Clippy and his negative associations.

                                                                                                                                1. 2

                                                                                                                                  The new grammar may bring some benefit, but text objects are IMO not a real problem. Let’s see if I get around to testing this, but I’m wary of it, because Vim takes a long time to master, so this probably will, too.

                                                                                                                                  My gut tells me it’s trading something off for something else and the individual user’s mileage will certainly vary.

                                                                                                                                  1. 2

                                                                                                                                    Clippy’s not that bad, but it would be nice if you could turn him off.

                                                                                                                                    The code would suggest there’s a cat option if it bothers you that much:

                                                                                                                                    https://github.com/mawww/kakoune/blob/5ff8245cc84a15d6a48bd2e19e4c70d4f8ae3f77/src/ncurses_ui.cc#L31

                                                                                                                                    1. 6

                                                                                                                                      It is actually possible to turn it off entirely. I’ve been messing with kakoune and this is the first line in my kakrc:

                                                                                                                                      set global ui_options ncurses_assistant=none
                                                                                                                                      
                                                                                                                                    2. 2

                                                                                                                                      They’re just following in the footsteps of Clippy for nvi.