Threads for emallson

  1. 13

    I feel like this is something nushell is also trying to solve. I’ve not daily-driven nushell, only experimented with it, and I’ve not touched powershell in some years, so I can’t give a definitive answer on it. Both feel like they offer better UX and repeatability from a programmatic standpoint.

    ls | where type == "dir" | table
    
    1. 12

      ironically, i think that nushell on *nix systems is a harder sell than powershell on windows because of compat, despite shells being a much larger part of the way people typically work on *nix systems.

      i tried using nushell as my daily driver, and pretty frequently ran into scripts that assumed $SHELL was going to be at least vaguely bash-compatible. this has been true on most *nix boxes for the past 20+ years and has led to things quietly working that really shouldn’t have been written.

      OTOH cmd.exe is a much smaller part of the windows ecosystem and (at least, it seems to me) there is much less reliance on the “ambient default shell behaves like X”, so switching to the new thing mostly just requires learning.

      (i ultimately dropped nushell for other reasons, but this was also over 30 releases ago so it’s changed a bit since then)

      1. 16

        You’ve brought up several valid points, but I want to split this up a bit.

        The reason using something other than bash (or something extremely close, like zsh) is painful is due to the number of tools that want to source variables into the environment. This is actually a completely tractable problem, and my default *nix shell (fish) has good tools for working with it. It’s certainly higher friction, but it’s not a big deal; there are tools that will run a bash script and then dump the environment out in fish syntax at the end so you can source it, and that works fine 95% of the time. The remaining 5% of the time almost always has a small fish script to handle the specific use-case. (E.g., for ssh-agent, there’s https://github.com/danhper/fish-ssh-agent. Hasn’t been updated since 2020, but that’s because it works and is stable; there’s nothing else I could imagine really doing here.) And you could always set whatever shell for interactive-only if you really want it (so that e.g. bash would remain the default $SHELL).
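        That env-hoisting trick can be sketched in a few lines of plain shell (the script and variable names below are made up; tools like bass automate the final translation into fish syntax):

        ```shell
        # Sketch of the hoisting trick: run the bash-only script in a bash child,
        # then dump the resulting environment; each line of that dump can then be
        # rewritten as a fish `set -gx` statement. Names here are hypothetical.
        cat > /tmp/setup-env.sh <<'EOF'
        export DEMO_TOOL_HOME=/opt/demo
        EOF

        bash -c '. /tmp/setup-env.sh && env' | grep '^DEMO_TOOL_HOME='   # prints: DEMO_TOOL_HOME=/opt/demo
        ```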

        PowerShell on Windows actually has to do this too, for what it’s worth. For example, the way you use the Windows SDK is to source a batch file called vcvarsall.bat (or one of several very similar variants). If you’re in PowerShell, you have to do the same hoisting trick I outlined above, but again, there are known, good ways to do this in PowerShell, to the point that it’s effectively a non-problem. And PowerShell, like fish, can do this trick on *nix, too.

        Where I see Nushell fall down at the moment is in three places. First, it’s just really damn slow. For example, sometimes, when I’m updating packages, I see something fly by in brew/scoop/zypper/whatever that I don’t recognize. 'foo','bar','baz','quux'|%{scoop home $_} runs all but instantly in PowerShell. [foo bar baz] | each { scoop home $it } in Nushell can only iterate at about one item per second. Second, Nushell has no job control, so if I want to start Firefox from the command line, I have to open a new tab/tmux window/what-have-you so I don’t lock my window. And third, it’s still churning enough that my scripts regularly break. And there are dozens of things like this.

        I really want to like Nushell, and I’m keeping a really close eye on it, but, at the moment, running PowerShell as my daily shell on *nix is entirely doable (even if I don’t normally do it). Nushell…not so much.

        1. 7

          You’re absolutely right. It’s a VERY hard sell.

          There are all the software problems, some of which you’ve detailed, and then there’s IMO the even bigger problem - the human problem :)

          UNIX users don’t just use, love, and build with the “everything is a stream of bytes” philosophy; it almost becomes baked into their DNA.

          Have you ever tried to have a discussion about something like object pipelines, or, worse yet, something like what AREXX or Apple’s Open Scripting Architecture used to offer, with a hardcore UNIX denizen?

          99 times out of 100 it REALLY doesn’t go well. There’s no malice involved, but the person on the other end can’t seem to conceptualize the idea that there are other modes with which applications, operating systems and desktops can interact.

          As someone whose imagination was kindled very early on with this stuff, I’ve attempted this conversation more times than I care to count and have pretty much given up unless I know that the potential conversation partner has at least had some exposure to other ways of thinking about this.

          I’d say it’s kind of sad, but I suspect it’s just the nature of the human condition.

          1. 5

            I believe that, in the end, it all boils down to the fact that plain text streams are human-readable and universal. You can opt in to interpreting them as some other kind of data structure using a specialized tool for that particular format, but you can’t really do it the other way around unless the transmission integrity is perfect and all formats are perfectly backward and forward compatible.

          2. 2

            I would argue that scripts that don’t include a shebang at the top of them are more wrong than the shell that doesn’t really know any better what to do with them.

            I don’t want to pollute this thread with my love for nushell, but I have high expectations for it, and I’ve previously maintained thousands-of-lines scripts that passed ShellCheck and were properly string-safe, something I think many people just avoid thinking about. (For example: how do you build up an array of args to pass to a command, and then properly quote them, without hitting the case where an empty array produces an empty string that inevitably provokes an error in whatever command you’re calling? In nushell this doesn’t matter; you just write let opts = ["foo", "bar xyz"]; echo $opts and the right thing happens.)
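            For comparison, the empty-array pitfall being described looks like this in plain bash (a minimal sketch, not tied to any particular command):

            ```shell
            # Minimal sketch of the empty-array pitfall.
            args=()                      # options to forward, possibly none
            # args+=( --color=auto )     # real scripts append these conditionally

            count_args() { echo "$#"; }

            count_args "${args[@]}"      # empty array expands to zero words: prints 0
            count_args "${args[*]}"      # collapses to one empty-string word: prints 1
            ```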

            I’ll just leave a small example, so as to not go overboard here. I ought to compile my thoughts more fully, with more examples. But even something like this: https://github.com/colemickens/nixcfg/blob/02c00ef7a3e1e5dd83f84e6d4698cba905894fc7/.github/clean-actions.nu would not exactly be super fun to implement in bash.

            1. 2

              I would argue that scripts that don’t include a shebang at the top of them are more wrong than the shell that doesn’t really know any better what to do with them.

              Oh, the scripts absolutely are the problem. Unfortunately, that doesn’t mean they don’t exist. Just another annoying papercut when trying out something new.

              1. 2

                My experience with FreeBSD defaulting to csh is that scripts aren’t really the problem. Sure, some of them hard-code bash in the wrong location, but most of them are easy to fix. The real problem is one-liners. A lot of things have ‘just run this command in your shell’ and all of those assume at least a POSIX shell and a lot assume a bash-compatible shell. FreeBSD’s /bin/sh has grown a few bash features in recent years to help with this.
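                A concrete example of the one-liner problem (the one-liner itself is made up): process substitution is a bashism that a strict POSIX /bin/sh rejects outright.

                ```shell
                # A "just paste this into your shell" one-liner that silently assumes bash:
                bash -c 'diff <(echo a) <(echo a) && echo same'   # prints: same

                # Under a strict POSIX sh (FreeBSD /bin/sh, dash), `<(...)` is a syntax
                # error, so the same copy-pasted line simply breaks.
                ```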

                1. 1

                  FreeBSD’s /bin/sh has grown a few bash features in recent years to help with this.

                  Oh that’s interesting, didn’t know that but makes sense. I’ve noticed this with busybox ash too – it’s been growing bash features, and now has more than dash, which is a sibling in terms of code lineage.

                  A related issue is that C’s system() is defined to run /bin/sh. There is no shebang.

                  If /bin/sh happens to be /bin/bash, then people will start using bash features unconsciously …

                  Also, system() from C “leaks” into PHP, Python, and pretty much every other language. So now you have more bash leakage …
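                  The leakage is easy to see from any language that wraps system(3); a small sketch using Python’s os.system (one such wrapper):

                  ```shell
                  # C's system("cmd") is specified to run /bin/sh -c 'cmd' -- no shebang is
                  # consulted. Runtimes that wrap system(3) inherit this, so bashisms in the
                  # command string only work if /bin/sh happens to be bash.
                  python3 -c 'import os; os.system("echo via-bin-sh")'   # prints: via-bin-sh
                  ```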

                2. 1

                  example, how do you build up an array of args to pass to a command, and then properly string quote them, without hitting the case where you have an empty array

                  FWIW my blog post is linked in hwayne’s post, and tells you how to do that!!! I didn’t know this before starting to write a bash-compatible shell :)

                  Thirteen Incorrect Ways and Two Awkward Ways to Use Arrays

                  You don’t need any quoting, you just have to use an array.

                  a=( 'with spaces'   'funny $ chars\' )
                  ls -- "${a[@]}"   # every char is preserved;  empty array respected
                  

                  The -- protects against filenames that look like flags.

                  As mentioned at the end of the post, in YSH/Oil it’s just

                  ls -- @a
                  

                  Much easier to remember :)

                  This has worked for years, but we’re still translating it to C++ and making it fast.

                  1. 1

                    Ah, yes, you certainly know what I (erroneously) was referencing (for others since I mis-explained it: https://stackoverflow.com/questions/31489519/omit-passing-an-empty-quoted-argument). Indeed most times starting with an empty array and building up is both reasonable and more appropriate for what’s being composed/expressed.

              2. 1

                I’m very tempted. I tried it out way back when it first came out, checked back in on it recently, and it’s amazing how far the Nushell project has come.

              1. 12

                The HTMX part and discussion of the Islands architecture is funny, because it reminds me of how Facebook originally positioned React.

                People who use React will know that React expects you to tell it which element to mount your root component to; these days, that means mounting the whole application, as most React projects are SPAs. But React was developed at Facebook specifically for this Islands-type approach; you didn’t have a whole application built from React. Instead, you would have a largely static page, often with specific elements that already had static/light JS implementations.

                You could build specific interactive components in React, mount them to static elements of the DOM you wanted to progressively enhance, and the rest of the page was still static. You can probably figure out why this was desirable; in 2013, the desktop Facebook experience was still largely driven by a server-side rendered PHP application; this progressive enhancement model allowed them to enhance the existing application gradually (or strangle it, if you prefer) without doing a stop-the-world rewrite.

                1. 4

                  At $work we’re actually doing exactly that with react. It’s not a common way to use react anymore, but it does still work quite well!

                  1. 2

                    awesome ya that totally makes sense.

                    One thing I don’t dig into much in the article is how these big tools like React are the product of larger projects and feel sort of like they make more sense in bigger scale applications. I’m excited to see people explore islands in more detail and I’m hopeful that they’ll fit into a nice middle ground between hardcore SPA and CRUD site with JS sprinkles.

                    1. 1

                      I’m really curious how implementing a NoSQL database’s interface on top of a traditional RDBMS performs. Has anyone tried benchmarking FerretDB against MongoDB?

                      1. 2

                        I’ve heard of projects using JSON columns in Postgres extensively, effectively using it as a NoSQL database. Apparently it’s faster too, but I’d love hard numbers over the hearsay I’ve heard.

                        1. 3

                          I am using Postgres JSONB extensively, in practically all of the tables, I think (there may be a few that I am forgetting).

                          The database and production usage are not significant enough in size to share benchmarks.

                          Here are some rules I follow (maybe this can help others):

                          In the system, a table usually contains several ‘classes’ of fields:

                          • There are separate fields for things like IDs, update_time, status, and row life cycle.

                          • A JSONB field for access control, storing things like the row’s owners (a list of user IDs), legal jurisdiction(s) if necessary, and a few other things. These generally help with zero-trust access control by enabling our PEPs (policy enforcement points) to do row-level filtering of data efficiently.

                          • A row can belong to ‘operation’ data or to ‘model’ data. A row in the operation-data category usually contains a separate JSONB field for each of the ‘entities’ that make up the ‘business relation’ the row represents. Operation rows also always have fields to enable sharding, plus a row-lifecycle indicator (a row can be active, archived, or to-be-deleted). The row-lifecycle field allows us to tell the indexes to ignore ‘archived’ and ‘to-be-deleted’ rows, so that they do not pollute our indices. The sharding and row-lifecycle fields have to be their own fields, not in JSONB.

                          • Early in the design I ran Postgres’s explain plans to make sure I could see all possible ‘table scans’. If I saw a table scan, I would decide whether to add a GIN [1] index to a specific nested JSONB field, cache data on the application side, or ‘duplicate/denormalize’ a JSONB field into its own column (I do not remember ever being forced to do that, though).

                          • Database access is only through APIs, so generic ‘let me just fetch data any way I want’ queries are not allowed. But the query APIs are reasonably ‘composable’ (in some critical areas), so as the system grows, the APIs (and therefore database access) do not need to be ‘redesigned’ and ‘rechecked’ for performance that often.

                          • No triggers are allowed in the system. Postgres’s full-text search features [2] applied to JSONB text fields are leveraged too. It works well.
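                          The row-lifecycle trick above (telling the indexes to ignore archived rows) maps naturally onto a Postgres partial index; a hedged sketch with made-up table and column names:

                          ```sql
                          -- Hypothetical names; sketch of keeping dead rows out of an index.
                          CREATE TABLE operation (
                              id        bigserial PRIMARY KEY,
                              shard_key integer NOT NULL,
                              lifecycle text    NOT NULL DEFAULT 'active',  -- active | archived | to-delete
                              entity    jsonb   NOT NULL
                          );

                          -- GIN index over the JSONB payload, restricted to live rows only, so
                          -- archived and to-be-deleted rows never pollute the index.
                          CREATE INDEX operation_entity_live_gin
                              ON operation USING gin (entity jsonb_path_ops)
                              WHERE lifecycle = 'active';
                          ```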


                          All in all, the JSONB feature and GIN indexes seem by now to be very, very mature, so I do not feel ‘unequipped’ compared to a document-oriented database.
                          I think a hybrid approach (where one table uses both JSONB and typical fields) is liberating and efficient.

                          [1] https://pganalyze.com/blog/gin-index [2] https://www.postgresql.org/docs/12/functions-textsearch.html

                          1. 1

                            I don’t have hard numbers, but we have one of these at work. And by “one of these” I mean “most data is thrown into a single jsonb column on a single table.” I would not recommend that structure under any circumstances, but Postgres’ JSONB columns have generally been surprisingly good.

                            However, good indices become important much faster than with a traditional table, and you end up needing indices on derived fields (e.g. an index on cast(data->>'foo' as date) for date queries) much more frequently. Postgres has an index type that lets you quickly query for all rows that have a key present, so we end up (ab)using that a lot to filter result sets without needing a special index for every query.
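                            The “key present” index mentioned above is presumably a GIN index over the whole jsonb column, which supports the ? existence operator; a hedged sketch (table and column names hypothetical):

                            ```sql
                            -- A GIN index on the whole jsonb column supports the key-existence
                            -- operator `?`, so "which rows have key 'foo' at all" can be answered
                            -- from the index without a per-query expression index.
                            CREATE INDEX events_data_gin ON events USING gin (data);

                            SELECT count(*) FROM events WHERE data ? 'foo';
                            ```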

                        1. 23

                          LtU does have some clever people. The majority of the predictions seem to be really good, or close to the truth. The most amusing/true ones I found were:

                          Debates that both sides will lose because the debate will be made irrelevant by a third option: Emacs vs Vim

                          We got vscode. I know that’s just one specific case and they weren’t really made irrelevant, but vscode really did take the dev world by storm in comparison. (with people using vi or emacs keybindings inside it)

                          I think that we know more or less how the hardware will evolve: several (4-16) heterogeneous(not all with the same performance, some with specialized function unit) CPUs (due to the power wall).

                          I take it as 4-16 in total, not 4-16 types. Right on with the recent big-little chips.

                          Things from academia that will (start to go in the direction of) mainstream by 2020: Functional reactive programming

                          Almost all the new web frameworks and a lot of declarative desktop frameworks draw strongly from FRP.

                          In 20 years the prevalent market will be the mobile platforms. The primary language used for development will be JavaScript or some variant, with some of the following characteristics:

                          Pretty close if we consider that most websites target mobile now. And just perfect regarding “A misguided spin on lexical scope”.

                          Therefore, programming will take on a much more “organic” feel: programming by example, programming with help of machine learning

                          Only a year off for the Copilot prediction.

                          1. 11

                            I don’t think any vi(m) or emacs user is likely to switch to vscode. Vscode is just the latest in the line of sublime/atom/etc.

                            1. 2

                              It’s much more than just the next sublime, according to SO developer surveys. Vscode took some users from virtually every other option. Between 2016 and 2021 (multiple options allowed, so there’s overlap):

                              • Vim fell from 26% to 24%

                              • Vscode rose from 7.2% to 71%

                              • Sublime fell from 31% to 20%

                              1. 9

                                Direct links to the data:

                                Given that neovim went from not being on the list at all to 5%, I don’t think this supports the idea that meaningful numbers of vim users are switching to vscode. Emacs is virtually unchanged, from 5.2% to 5.3%.

                              2. 1

                                I did (for a while), using the VSCode neovim extension (which literally runs neovim as the backend for editor commands while still getting VSCode’s GUI + language extensions). I ended up switching back to emacs when I got a new job, because CIDER is still the best Clojure tool imho.

                              3. 6

                                Debates that both sides will lose because the debate will be made irrelevant by a third option: Emacs vs Vim

                                We got vscode. I know that’s just one specific case and they weren’t really made irrelevant, but vscode really did take the dev world by storm in comparison. (with people using vi or emacs keybindings inside it)

                                As someone who doesn’t use any of these 3 editors, I don’t think this quite matches up to the prediction. VSCode is a good editor, and it surprised people and rapidly gained dominance for many, but it did not “render the emacs vs vim debate irrelevant”. That fight is still raging, whether it’s over the editors themselves or over keybindings inside other editors. If the prediction was something like “a new editor will gain dominance over emacs and vim”, maybe this would count, but let’s be honest: most popular editors already did this. Neither Emacs nor Vim has dominance anywhere; the prediction here is merely about the holy war between the editors. VSCode did not solve this; no editor ever will lol.

                                1. 8

                                  I’d argue that the distributions of emacs-as-vim (and to a lesser extent vim-as-emacs, e.g. spacevim), as well as emulation in VS code or jetbrains IDEs, are what made the debate w.r.t. UX irrelevant.

                                  The “holy war” debate between vim and emacs was made irrelevant by a bigger holy war (one in which both vim and emacs take the same side): the fight for free software and “non-corporate” software. From that pov it was VS code that ended the debate.

                                2. 2

                                  I can’t imagine a GUI code editor ever making a difference in the Vim/Emacs world. Vim and Emacs are fundamentally different from something like VScode because they live inside a terminal. Some people either prefer to or have to use a terminal for a variety of reasons, and I don’t think any level of technological advancement would change most of those reasons (probably ¯\_(ツ)_/¯).

                                  1. 3

                                    While I almost exclusively use Emacs in a terminal it is, in fact, a first-class citizen of GUIs (albeit with rather weird habits).

                                    1. 1

                                      Yeah, one of the nice things about emacs vs vim is that there’s a GUI (so stuff supports it) and there’s only one GUI (so you don’t have to worry about feature differences).

                                    2. 2

                                      If you look at NeoVim, there are lots of attempts of creating a GUI for it, e.g.:

                                      https://www.onivim.io/

                                      This suggests that there are a lot of people who want to use NeoVim as a GUI tool, not just in a terminal. As far as I’m concerned, however, I value the terminal side of NeoVim, since I can run it on a Windows machine through SSH and start compilation without having to use RDP or Windows on my local machine.

                                      1. 1

                                        Definitely! People want to put Vim and Emacs (mostly Vim) inside any of their existing GUIs or create GUIs just for those tools.

                                        But, as you said, the Terminal has special uses that aren’t replaceable and so GUIs like VScode will never directly compete with terminal editors.

                                        1. 1

                                          Onivim is actually based on vim, not neovim. Don’t remember why.

                                          1. 1

                                            Thanks, I didn’t realize that. From what I’m finding now, OniVim was based on NeoVim, but OniVim2 switched to Vim due to build issues with NeoVim in the author’s environment.

                                            https://github.com/onivim/libvim#why-is-libvim-based-on-vim-and-not-neovim

                                    1. 6

                                      I ran a server with btrfs on it in grad school for several years. Big regret. I chose it because we wanted the transparent compression on a couple of data directories—we weren’t even using the software raid (the server had a hardware raid controller)—but it ended up requiring a lot of attention to keep the system up and running.

                                      There are two main issues I ran into:

                                      1. BTRFS metadata isn’t compacted/cleaned automatically. This meant that I’d periodically need to run whatever btrfs command did that in order to keep things under control. If most of your data sticks around a while, I don’t think this would be a problem, but we ran simulations that would generate large volumes of output that would be post-processed and then deleted, leaving us with a bunch of metadata for things that no longer exist.

                                      2. BTRFS performs very badly if the disk ever actually gets full. Like, cleaning up the used disk space isn’t enough to restore system performance; it’s only step 1. On several occasions we had runaway simulations fill the entire raid and had to reboot the system after cleaning up in order to get performance back to acceptable levels, even for concurrently running simulations that weren’t IO-bound. IO just took forever to complete until a reboot, despite plentiful free space.

                                      I wanted to reformat the server to ext4 before I graduated, but then COVID happened and I couldn’t go back onto campus to do that, so as far as I know it’s still running BTRFS.

                                      EDIT: interesting that /u/Jamietanna actually experienced that IO wait problem too (https://www.jvt.me/posts/2018/12/22/leaving-btrfs/).

                                      Finally, very recently I’ve been receiving a lot of IO wait, which has been bringing my pretty high end hardware to a halt. That’s the first other mention I’ve found of this problem.

                                      1. 2

                                        I also had big IO wait issues on a btrfs volume. Laptop became unusable.

                                      1. 26

                                        You’ll be pleased to hear this concept has a name already: literate programming.

                                        1. 7

                                          That’s just the author’s particular take on this. I’ve seen other takes that are quite different from plain old literate programming.

                                          1. 3

                                            Yeah Knuth-style tangle/weave definitely shouldn’t be allowed a monopoly on the very good idea of interleaving prose and code. https://github.com/arrdem/source/tree/trunk/projects/lilith/ was my LangJam submission that went in that direction, partly inspired by my previous use of https://github.com/gdeer81/marginalia and various work frustrations at being unable to do eval() in the ReStructuredText files that constitute the majority of my recent writing.

                                          2. 2

                                            Technically, I think, it’d be “documentation generation” because it doesn’t involve any tangle or weave steps.

                                            because these tools do not implement the “web of abstract concepts” hiding behind the system of natural-language macros, or provide an ability to change the order of the source code from a machine-imposed sequence to one convenient to the human mind, they cannot properly be called literate programming tools in the sense intended by Knuth.

                                            [my emphasis]

                                            1. 3

                                              My view of this is:

                                              A lot of Literate Programming systems are based on the idea of automatically copy-pasting pieces of code around. I think this is a terrible idea for many of the same reasons why building a program entirely out of C macros is a terrible idea. However, it’s a necessary evil due to the limitations of C; everything has to be in a particular order imposed by the C compiler. If you want your prose to describe an implementation of a function first, and then describe the struct which that function uses later on, the only solution in C is to do macro-like copy/paste to make the generated C code contain the struct definition before the function even though it’s implemented after the function.

                                              Many modern languages don’t have this limitation; they let everything refer to everything else regardless of the order in which things appear in the file. Thanks to this, I think we can generally treat the function as the unit of code in literate programming systems, and we don’t need elaborate automatic copy/paste systems. As a bonus, all the interactions between code snippets follow simple, well-understood semantics, and you don’t have the common issues with invisible action at a distance you see in many literate programming systems.

                                              That’s the basis for our submission at least, where we made a literate programming system (with LaTeX as the outer language) where all the interaction between different code snippets happens through function calls, not macro expansions.

                                            2. 1

                                              Literate programming was the inspiration for my team’s submission: https://github.com/mortie/lafun-language

                                            1. 3

                                              Imba’s groundbreaking memoized DOM is an order of magnitude faster than virtual DOM libraries

                                              This is an odd statement when the landing page causes very noticeable frame rate drops on my phone (running the latest Firefox mobile).

                                              Also funny that I apparently commented on the indentation-based syntax 5 years ago here on lobsters. These days I’m willing to be a little bit more charitable, but I still cannot imagine using indentation-based JSX.

                                              1. 18

                                                Does anyone else see this as a sign that the languages we use are not expressive enough? The fact that you need an AI to help automate boilerplate points to a failure in the adoption of powerful enough macro systems to eliminate the boilerplate.

                                                1. 1

                                                  Why should that system be based upon macros and not an AI?

                                                  1. 13

                                                    Because you want deterministic and predictable output. An AI is ever evolving and therefore might give different outputs for given input over time. Also, I realise that this is becoming an increasingly unpopular opinion, but not sending everything you’re doing to a third party to snoop on you seems like a good idea to me.

                                                    1. 3

                                                      Because you want deterministic and predictable output. An AI is ever evolving and therefore might give different outputs for given input over time.

                                                      Deep learning models don’t change their weights if you don’t purposefully update it. I can foresee an implementation where weights are kept static or updated on a given cadence. That said, I understand that for a language macro system that you would probably want something more explainable than a deep learning model.

                                                      Also, I realise that this is becoming an increasingly unpopular opinion, but not sending everything you’re doing to a third party to snoop on you seems like a good idea to me.

                                                      There is nothing unpopular about that opinion on this site and most tech sites on the internet. I’m pretty sure a full third of posts here are about third party surveillance.

                                                      1. 2

                                                        Deep learning models don’t change their weights if you don’t purposefully update it.

                                                        If you’re sending data to their servers for copilot to process (my impression is that you are, but i’m not in the alpha and haven’t seen anything concrete on it), then you have no control over whether the weights change.

                                                        1. 2

                                                          Deep learning models don’t change their weights if you don’t purposefully update it.

                                                          Given the high rate of commits on GitHub across all repos, it’s likely that they’ll be updating the model a lot (probably at least once a day). Otherwise, all that new code isn’t going to be taken into account by copilot and it’s effectively operating on an old snapshot of GitHub.

                                                          There is nothing unpopular about that opinion on this site and most tech sites on the internet. I’m pretty sure a full third of posts here are about third party surveillance.

                                                          As far as I can tell, the majority of people (even tech people) are still using software that snoops on them. Just look at the popularity of, for example, VSCode, Apple and Google products.

                                                      2. 2

I wouldn’t have an issue with using a perfect boilerplate-generating AI (well, beyond the lack of brevity); I was more commenting on the fact that this had to be developed at all and how it reflects on the state of coding.

                                                        1. 1

                                                          Indeed it’s certainly good food for thought.

                                                        2. 1

Because programmers are still going to have to program, but instead of being able to deterministically produce the results they want, they’ll have to do some fuzzy NLP incantation to get what they want.

                                                        3. 1

I don’t agree on the macro systems point, but otherwise I see it the same way. As a recent student of BQN, I don’t see any use for a tool like this in APL-like languages. What, and from what, would you generate, when every character carries significant meaning?

                                                          1. 1

                                                            I think it’s true. The whole point of programming is abstracting away as many details as you can, so that every word you write is meaningful. That would mean that it’s something that the compiler wouldn’t be able to guess on its own, without itself understanding the problem and domain you’re trying to solve.

                                                            At the same time, I can’t deny that a large part of “programming” doesn’t work that way. Many frameworks require long repetitive boilerplate. Often types have to be specified again and again. Decorators are still considered a novel feature.

                                                            It’s sad, but at least, I think it means good programmers will have job security for a long time.

                                                            1. 1

I firmly disagree. Programming, at least as evolved from computer science, is about describing what you want using primitive operations that the computer can execute. For as long as you’re writing from this direction, code-generating tools will be useful.

                                                              On the other hand, programming as evolved from mathematics and programming language theory fits much closer to your definition, defining what you want to do without stating how it should be done. It is the job of the compiler to generate the boilerplate after all.

                                                              1. 1

                                                                We both agree that we should use the computer to generate code. But I want that generation to be automatic, and never involve me (unless I’m the toolmaker), rather than something that I have to do by hand.

                                                                I don’t think of it as “writing math”. We are writing in a language in order to communicate. We do the same thing when we speak English to each other. The difference is that it’s a very different sort of language, and unfortunately it’s much more primitive, by the nature of the cognition of the listener. But if we can improve its cognition to understand a richer language, it will do software nothing but good.

                                                          1. 4
                                                            1. 6

                                                              Well, they’re not entirely truthful either – Clojure for instance has solved this issue:

                                                              (+ 1/10 2/10) ;; => 3/10
(+ 0.1M 0.2M) ;; => 0.3M
                                                              

I get the point of the post, but it seems a tad awkward to point out the failings of languages that have solved this and don’t need a custom implementation of ratios…

                                                              1. 10

                                                                And Clojure solves it because it tries to follow in the tradition of older schemes/Lisps. I’ve ranted more than once to my colleagues that numbers in mainstream “app-level” (anything that’s not C/C++/Zig/Rust/etc) programming languages are utterly insane.

                                                                <soap-box>

                                                                Look, yeah- if you’re writing a system binary in C, or developing a physics simulation, or running some scientific number crunching- then you probably want to know how many bytes of memory your numbers will take up. And you should know if/when to use floats and the foot-guns they come with. (Even, then, though- why the HELL do most languages silently overflow on arithmetic instead of exploding?! I don’t want my simulation data to be silently corrupted.)

                                                                But for just about everything else, the programmer just wants the numbers to do actual number things. I shouldn’t have to guess that the number of files in a directory will never go above some arbitrary number that happens to fit in 4 bytes. I shouldn’t have to remember that you can’t compare floats because I had the audacity to try to compute the average of something.

                                                                We have this mantra for the last decade or so that “performance doesn’t matter”, “memory is cheap”, “storage is cheap”, “computers are fast”, etc, etc, yet our programming languages still ask us to commit to a number variable taking up an exact number of bytes? Meanwhile, it’s running a garbage collector thread, heap allocates everything, fragments memory, etc. Does anyone else think this is insane? You’re gonna heap allocate and pointer-chase all day, but you can’t grow my variable’s memory footprint when it gets too large for 2,4,8 bytes? You’re gonna lose precision for my third-grade arithmetic operations because you really need that extra performance? I don’t know about that…

                                                                </soap-box>
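For what it’s worth, Rust (one of the systems languages carved out above) treats this as an explicit trade-off rather than a silent default: debug builds panic on integer overflow, release builds wrap unless you enable overflow checks, and the standard library exposes `checked_*`/`wrapping_*` methods so the choice is visible at each call site. A minimal sketch:

```rust
fn main() {
    let a: i32 = i32::MAX;

    // Explicit wrapping arithmetic: well-defined two's-complement wraparound.
    assert_eq!(a.wrapping_add(1), i32::MIN);

    // Checked arithmetic surfaces overflow as a value (None)
    // instead of silently corrupting data.
    assert_eq!(a.checked_add(1), None);
    assert_eq!(2_i32.checked_add(3), Some(5));
}
```

It doesn’t give you arbitrary-precision numbers by default, but at least the overflow behavior is opt-in rather than invisible.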

                                                                1. 3

                                                                  Even, then, though- why the HELL do most languages silently overflow on arithmetic instead of exploding?! I don’t want my simulation data to be silently corrupted.

                                                                  Oh man you just dredged up some bad memories. I was working on modifying another grad student’s C++ simulation code, and the performance we were seeing was shocking. Too good, way too good.

                                                                  Turns out that they’d made some very specific assumptions that weren’t met by the changes I made so some numbers overflowed and triggered the stopping condition far too early.

                                                                  1. 1

                                                                    (Even, then, though- why the HELL do most languages silently overflow on arithmetic instead of exploding?! I don’t want my simulation data to be silently corrupted.)

                                                                    In an alternate universe:

                                                                    (Even, then, though- why the HELL do most languages insert all these bounds checks on arithmetic that slow everything down?! I know my simulation isn’t going to get anywhere near the limits of floating point.)

                                                                    1. 1

                                                                      Sure. But isn’t the obvious solution for this to be a compiler flag?

                                                                      Less obvious is what the default should be, but I’d still advocate for safety as the default. Sure, you’re not likely to wrap around a Double or whatever, but I’m thinking more about integer-like types like Shorts (“Well, when I wrote it, there was no way to have that many widgets at a time!”).

                                                                  2. 4

                                                                    same thing with Ruby

                                                                    $ irb
                                                                    irb(main):001:0> 0.1 + 0.2
                                                                    => 0.30000000000000004
                                                                    irb(main):002:0> 0.1r + 0.2r
                                                                    => (3/10)
                                                                    

                                                                    and i’m pretty sure that’s the case with Haskell too

                                                                    it might be a fair criticism to question why the simplest and most obvious syntax (i.e., no suffix) doesn’t default to arbitrary-precision rationals, as is the case with integers in languages like Ruby, Haskell, etc.

                                                                1. 18

                                                                  A quick question: is Chrome better than the rest?

                                                                  I use Firefox desktop and Duck / Safari mobile as primary browsers, and I’m completely satisfied by the experience. Am I missing out something here with Chrome?

Lots of articles about ditching Chrome / time to move to Firefox … but people seem to hesitate. That tells me something holds them to Chrome, and I can’t imagine what that thing is.

                                                                  1. 10

                                                                    A quick question: is Chrome better than the rest?

                                                                    They don’t support vertical tabs at all.

                                                                    Performance is about the same (slightly better but I’ve never noticed except maybe on Google properties).

                                                                    Uses more RAM.

                                                                    No, not better at all in my book.

                                                                    1. 4

                                                                      They don’t support vertical tabs at all.

                                                                      I’m not sure what you mean by “they”. I’ve been using Tree Style Tab on Firefox for as long as I can remember.

                                                                      1. 7

                                                                        “They” is Chrome, not Firefox.

                                                                        1. 1

                                                                          … and it’s absolute garbage without hacking userChrome.css.

                                                                      2. 6

                                                                        Out of principle (re: reducing the monopoly), I am trying to switch to Firefox. (I’ve done so on one of my daily drivers, but not both.) To answer your question, though, there is at least one feature where Chrome is unequivocally better than Firefox: the UX for multiple profiles/personas.

                                                                        In Chrome/ium, the entry point for profiles is a single icon/click in the main toolbar. Switching profiles is another single click. So, with two clicks, it will either create a new window “running as” that profile, or it will switch to an existing window of that profile. The window acts as a container, so any new tabs (and even “New Window”s) will be for that profile. Everything about the experience makes sense, and is just about the simplest, most straight forward UX design that one could conceive.

Contrast the above with Firefox’s profile UX. Profiles available in main toolbar? Nope. Open hamburger menu – profiles in there? Nope. How about in the Preferences UI? Nope. So where the heck is it? Well, there are CLI switches available (try not to laugh). -P/--profile to use a certain profile, or --ProfileManager to bring up a GUI widget on startup to pick a profile. Okay, so of course normal users will not use CLI switches. So what do they do? Well, there’s an about:profiles page available. So you have to type that (there’s autocomplete at least, to save you a few keystrokes), and then you click a button on that page to launch a new Firefox instance for that profile. And then there is no visual indication in Firefox as to which profile you’re currently using, unless you’ve themed each profile, etc. In Chrome, you assign avatars to profiles, and the current profile’s avatar is displayed in the main bar.

                                                                        There’s this feature of Firefox called containers, or container tabs, or multi-account containers. Or something. They.. sort of do the job, in a clunky way. Tabs get a different underline colour based on the container, and different containers have different cookie sets, etc. However, this falls short in a couple ways. In Firefox, preference settings are shared among containers, whereas in Chrome, each profile has independent settings. Also, new tabs don’t (always?) take on the container of the previous/parent tab, so you have to manually set the container of a tab, sometimes.

                                                                        Anyway, enough said. The UX for multiple personas is astronomically better in Chrome. It’s not even close.

                                                                        1. 1

                                                                          Late but:

                                                                          Containers is what you look for.

                                                                          It is right in the address bar.

                                                                          It can even automatically change for sites that obviously belong to one container.

                                                                          1. 2

                                                                            I tried containers in Firefox. They go maybe 70% of the way towards what I need. It’s a nice try, but not good enough [for me].

Anyway, nowadays, I use multiple local Linux users to sandbox things with Firefox. The main security concern there is sharing the X display.

                                                                        2. 9

                                                                          Inertia, ignorance and indolence come to mind.

                                                                          1. 6

                                                                            That’s the thing I keep coming back to when I see articles like this. It’s not like anything has changed.

                                                                            If you’ve gone for like a decade using a browser created by a monopolistic advertising company and after all those years you never saw the problem with it, does anyone really think reading some Wired article is going to finally be the thing that makes you come to your senses?

                                                                          2. 4

                                                                            Performance can be an issue, as discussed previously.

                                                                            Also I think a lot of front-end devs prefer the developer tools from what I’ve heard (although for me, the Firefox ones work fine and have for years, but I’m not a FE dev and I don’t know if there are any concrete benefits here or if it’s just a matter of preference).

                                                                            1. 3

                                                                              The Firefox dev tools are actually quite nice…except that they become an enormous memory hog and performance black hole if you dump a bunch of JSON into the log. I’ve had the devtools crash Firefox because of a day’s worth of redux debug logs. Never had that issue with chrome

                                                                            2. 4

                                                                              Chrome (and also safari) has a visibly lower latency in rendering the page. It doesn’t really matter at all if you think about it, but makes the feel of the browser quite different.

                                                                              1. 4

                                                                                Bugs. Bugs in firefox. Lots of them. Especially annoying while developing.

                                                                                1. 2

                                                                                  IME Chrome can sometimes perform faster than FF. I’ve really only noticed it when looking at sites with heavy CSS/JS-based animations. also the FF dev tools seem to get bogged down more often than Chrome’s. also, for a while FF performance on Mac OS was much worse than Chrome’s (I forget the details but this was a known issue that may (?) be fixed by now).

                                                                                  1. 3

I wonder if several years in the future, we’re going to see a big change there like with the arrival of (now) macOS in the early 2000s. What I’m hinting at is the fact that the problem here is not browsers, but js/whatnot-heavy websites that browsers then try to accommodate, just like Windows was going out of its way to keep backwards compatibility and hide application idiocies. Then came Apple with their “we don’t care about backwards compatibility, this is what you can use”.

                                                                                    1. 1

                                                                                      I’m skeptical. Long-term, I think browsers will take over native applications as the default app distribution + runtime environment, as browser vendors add more and more native/low-level APIs to the web platform. Then again, I’m not the first person to predict this so who knows.

                                                                                      Maybe some day plain HTML/CSS will become a second-class citizen (or people will get used to using other programs to browse the ‘old web’).

                                                                                      1. 3

                                                                                        Or you’ll have to download a “web-browser” app in your web browser to view actual HTML content, which to be frank, is already happening, given the number of blogging sites that break completely if Javascript is disabled.

                                                                                1. 1

                                                                                  Unfortunately, there’s no way to influence [optimizations and compilation time] by turning them off or tuning somehow

                                                                                  (declaim (optimize (speed 0)))?

                                                                                  1. 2

                                                                                    I do believe they’re referring to the ability to turn off specific optimizations. Obviously disabling optimizations will do that, but is not suitable for production.

                                                                                  1. 15

                                                                                    It looks like megacorps are starting to take Bitcoin seriously. What happened to corporate social responsibility? Oh that’s right. It only applies when it doesn’t affect the bottom line.

                                                                                    In the immortal words of Pink Floyd, “ha ha, charade you are”.

                                                                                    1. 4

                                                                                      I would not assume they’re doing this to make money. In large organizations individual incentives are often quite divorced from making money for the organization. Instead, incentives might be “creating a splashy product will get me promoted” or “everyone is doing this, if it happens to turn out to be a big thing I’ll look stupid if I didn’t have a project in this area”.

                                                                                      1. 10

                                                                                        I’m frankly worried by an uptick in bitcoin adoption by well-known companies and “nerd-celebrities” over the last several months. Here is a selection of links.

                                                                                        Last but not least, we have the height of hypocrisy: people can buy a Tesla with Bitcoin. (@skyfaller already called Tesla out upthread).

                                                                                        Herd instinct appears to be taking its course.

                                                                                        1. 2

                                                                                          I’ve only read a small amount about Microsoft’s incentives here. But according to product lead Daniel Buchner (https://github.com/csuwildcat), Microsoft gave him this opportunity after years of toiling away on standards and working at Mozilla. So someone at Microsoft with some influence really pursued the talent and the money to put this together.

                                                                                        2. 4

                                                                                          social responsibility

                                                                                          There is a social benefit to decentralized technology (of which blockchain is one implementation mechanism) as well, which is mainly to do with circumventing centralized censorship and thereby enabling various subcultures to co-exist on the internet (as it used to be before Big Tech began controlling narratives) without compromising on localized moderation[1] of them.

                                                                                          [1] cf. ‘decentralized moderation’, eg: https://matrix.org/blog/2020/10/19/combating-abuse-in-matrix-without-backdoors

                                                                                          1. 5

                                                                                            If they wanted a decentralized system, they could have used one that wasn’t so egregiously wasteful, or invested in bringing more efficient options like proof-of-stake to fruition instead of latching onto bitcoin.

                                                                                            1. 1

                                                                                              Yeah, I’m not sure what’s going on here. From their 2020 docs:

                                                                                              Currently, we’re developing support for the following ledgers: Bitcoin, Ethereum, via uPort, Sovrin

                                                                                              Our intention is to be chain agnostic, enabling users to choose a DID variant that runs on their preferred ledger.

I’ve attempted to play with the API here, but it seems like it has been deprecated. At some point they must have decided to go all in on Bitcoin. Maybe they’ll next uphold the promise to develop on other ledgers.

                                                                                          2. 2

                                                                                            Well sure - it’s right there in the articles of incorporation. For better or for worse, social responsibility isn’t part of the material of operating a business.

                                                                                            It’s interesting that Microsoft sees a place to profit here.

                                                                                            1. 1

                                                                                              It’s never been a genuine thing, and can’t really be.

                                                                                            1. 2

                                                                                              Are you using vim, but in emacs? Like spawning a subshell in emacs to run vim? Could you elaborate a bit more on that, as you glance over it in the first paragraph?

                                                                                              1. 5

                                                                                                Very likely that they’re using evil-mode, possibly via one of the several distributions (e.g. spacemacs, doom emacs) that integrates it deeply.

                                                                                                1. 1

                                                                                                  Yes, I’m using evil-mode with spacemacs on Linux. It ruins you.

                                                                                                  1. 1

                                                                                                    Out of curiosity, Why do you use vim via emacs and not “normal” vim?

                                                                                                    1. 1

                                                                                                      org-mode and emacsclient mostly.

                                                                                                      1. 1

                                                                                                        Besides org-mode, which cadey already mentioned, magit is also a killer app. Even when I am developing in CLion, I have an emacs session just for Magit.

                                                                                                1. 2

This is one thing that annoys me from time to time in Rust. I believe that Rust libraries tend to be better about using Result instead of Option (or in addition to—I see plenty of Result<Option<T>, E>) for errors (partly because ? doesn’t convert an Option into a Result for you, at least not without an explicit ok_or), but it is immensely annoying to try to suss out why library code is giving None for an error.
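To make the Result<Option<T>, E> pattern concrete, here’s a small illustrative sketch (the `find_user` function and its error values are hypothetical, not from any particular library): it keeps “absent” and “failed” as distinct outcomes instead of collapsing both into None.

```rust
// Hypothetical lookup: "not found" (Ok(None)) and an actual failure (Err)
// remain distinguishable to the caller.
fn find_user(id: u32) -> Result<Option<String>, String> {
    if id == 0 {
        return Err("invalid id".to_string()); // a real error
    }
    if id == 42 {
        return Ok(Some("alice".to_string())); // found
    }
    Ok(None) // absent, but not an error
}

fn main() {
    assert_eq!(find_user(42), Ok(Some("alice".to_string())));
    assert_eq!(find_user(7), Ok(None)); // missing, yet still a success case
    assert!(find_user(0).is_err()); // genuine failure
}
```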

                                                                                                  1. 2

                                                                                                    I started using Colemak before actually starting to use Vim, and when I switched to (neo)vim and started learning that, I rebound the keys in an…interesting way: nest. On QWERTY, that’d be jkdf—so it is still on the home row but split across both hands.

                                                                                                    st are up/down, and are easily usable left-handed to browse. When I used vimium, I liked that because it meant I could left-hand scroll while still using the mouse with my right hand. Now it’s just habit.

                                                                                                    ne are right/left (in that order—i.e. they’re inverted: the leftward key moves right). I don’t know why I inverted them. Maybe because M-n for “next window” in xmonad was an easy mnemonic. Maybe just because it felt right at the time. I don’t really use these keys when editing text, but my xmonad keybindings are the same for 2d window navigation and they do get used there.

                                                                                                    1. 8

                                                                                                      I think the author of this post is correct in surmising that the proliferation of feature-rich, graphical editors such as Visual Studio Code, Atom, and Sublime Text have a direct correlation to the downturn of Emacs usage in recent years. This might seem a little simplistic, but I think the primary reason for most people not even considering Emacs as their editor comes from the fact that the aforementioned programs are customizable using a language that they are already familiar with, either JS or Python. Choosing between the top two interpreted languages for your editor’s scripting system is going to attract more people than choosing a dialect of Lisp. The fact that Emacs Lisp is one of the most widely-used Lisp dialects tells you something about how popular Lisp is for normal day-to-day programming. It’s not something that most are familiar with, so the learning curve to configuring Emacs is high. Meanwhile, VS Code and Atom let you configure the program with JSON and JavaScript, which is something I believe most developers in the world are familiar with at least on a surface level. If you can get the same features from an editor that is written in a familiar language, why would you choose an editor that requires you to learn something entirely different?

                                                                                                      I use Emacs, but only for Org-Mode, and I can tell you from experience that editing the configs takes a bit of getting used to. I mostly use Vim and haven’t really compared it to Emacs here because I don’t feel the two are easily comparable. Although Vim’s esoteric “VimL” scripting language suffers from the same problems as Emacs Lisp, the fact that Vim can be started up and used with relatively minimal configuration means that a lot of users won’t ever have to write a line of VimL in their lives.

                                                                                                      1. 14

                                                                                                        I might be mistaken, but I don’t think that most users of “feature-rich, graphical editors” customize their editor using “JS or Python”, or at least not in the same way one would customize Emacs. Emacs is changed by being programmed: your init.el or .emacs is an elisp program that initializes the system (setting the customize system aside). From what I’ve seen of Atom, VS Code and the like, you get JSON and perhaps a prettier interface. An Emacs user is encouraged to write their own commands; that’s why the *scratch* buffer is created. It might just be the audience, but I don’t hear of VS Code users writing their own JavaScript commands to program their environment.

                                                                                                        It looks unusual from the outside, I guess. And it’s a confusion that’s reflected in the choice of words. People say “Emacs has a lot of plugins”, because that’s what they are used to from other editors. Eclipse, Atom, etc. offer an interface to extend the “core”. The difference is reflected in the sharp divide between users and (plugin) developers. Compare that to Emacs, where you “customize” by extending the environment. For that reason the difference between “users” and “developers” is more of a gradient, or at least that’s how I see it. And ultimately, Lisp plays a big part in this.

                                                                                                        It was through Emacs that I learned to value Free Software, not as in “someone can inspect the code” or “developers can fork it”, but as in “I can control my user environment”, even with its warts. Maybe it’s not too popular, or maybe there are just easier alternatives nowadays, but I know that I won’t compromise on this. That’s also probably why we’re dying :/

                                                                                                        1. 13

                                                                                                          Good defaults help. People like to tweak, but they don’t want to tweak just to get started. There’s also how daunting it can appear. I know that with Vim I can get started on any system, and my preferred set of tweaks is less than five lines of simple config statements (well, Vim is terse and baroque, but it’s basically just setting variables, nothing fancy). With Emacs, there’s a lot to deal with, and a lot has to be done by basically monkey-patching, which is not very friendly to start with when all you want is, say, to keep dired from opening multiple buffers.

                                                                                                          Also, elisp isn’t even a very good Lisp, so even the people who’d be more in tune with it could be turned off.
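                                                                                                          (For what it’s worth, that particular dired complaint now has a one-line fix, assuming Emacs 28 or newer, where this variable exists:

                                                                                                          ```elisp
                                                                                                          ;; Reuse the current dired buffer instead of opening a new one
                                                                                                          ;; for every directory visited (Emacs 28+).
                                                                                                          (setq dired-kill-when-opening-new-dired-buffer t)
                                                                                                          ```

                                                                                                          On older versions the usual workaround was rebinding RET to dired-find-alternate-file.)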

                                                                                                          1. 3

                                                                                                            Also, elisp isn’t even a very good Lisp, so even the people who’d be more in tune with it could be turned off.

                                                                                                            I agree on the defaults (not that I find vanilla Emacs unusable, either), but I don’t really agree with this. It seems to be a common meme that Elisp is a “bad lisp”, which I guess is not wrong when compared to some Scheme and CL implementations (insofar as one understands “bad” as “not as good as”). But it’s still a very enjoyable language, and perhaps it’s just me, but I have a lot more fun working with Elisp than with Python, Haskell or whatever. For all its deficiencies it has the strong point of being extremely well integrated into Emacs – because the entire thing is built on top of it.

                                                                                                            1. 1

                                                                                                              I also have a lot more fun working with Elisp than with most other languages, but I think in a lot of regards it really does fail. Startup being significantly slower than I feel it could or should be is my personal biggest gripe. These days, people like to talk about Lisp as a functional language, and I know that rms doesn’t subscribe to that, but the fact that by default I’m effectively blocked from writing recursive functions (no tail-call elimination, plus a fairly low evaluator nesting limit) is quite frustrating.
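                                                                                                              A minimal illustration of the limit being complained about here; the variable `max-lisp-eval-depth` governs it, and its default value varies between Emacs versions:

                                                                                                              ```elisp
                                                                                                              ;; Each recursive call consumes evaluator nesting depth; since
                                                                                                              ;; Elisp does no tail-call elimination, even this tail-recursive
                                                                                                              ;; function is bounded by the limit.
                                                                                                              (defun count-down (n)
                                                                                                                (if (zerop n) 'done (count-down (1- n))))

                                                                                                              ;; With a small enough limit this signals
                                                                                                              ;; "Lisp nesting exceeds ‘max-lisp-eval-depth’":
                                                                                                              (let ((max-lisp-eval-depth 100))
                                                                                                                (ignore-errors (count-down 1000)))
                                                                                                              ```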

                                                                                                          2. 3

                                                                                                              It’s true, Emacs offers a lot more power, but it requires a time investment to really make use of it. Compare that with an editor or IDE where you can get a comfortable environment with just a few clicks. Judging by the popularity of macOS vs Linux for desktop/workstation use, I would imagine the same can be said for editors. Most people want something that “just works” because they’re busy with other problems during the course of their day. These same people probably aren’t all that interested in learning the Emacs philosophy and getting to work within a Lisp Machine, but there are definitely a good number of people who are. I don’t think Emacs is going anywhere, but it’s certainly not the best choice for most people anymore.

                                                                                                            1. 8

                                                                                                              Most people want something that “just works” because they’re busy with other problems during the course of their day.

                                                                                                              This has been my experience. I learned to use Vim when I was in school and had lots of free time to goof around with stuff. I could just as easily have ended up using Emacs, I chose Vim more or less at random.

                                                                                                              But these days I don’t even use Vim for programming (I still use Vimwiki for notes) because I simply don’t have time to mess around with my editor or remember what keyboard shortcuts the Python plugin uses versus the Rust plugin, or whatever. I use JetBrains IDEs with the Vim key bindings plugin, and that’s pretty much all the customization I do. Plus JB syncs my plugins and settings across different IDEs and even different machines, with no effort on my part.

                                                                                                              So, in some sense, I “sold out” and I certainly sacrificed some freedom. But it was a calculated and conscious trade-off because I have work to do and (very) finite time in which to do it.

                                                                                                              1. 7

                                                                                                                I can’t find it now, but someone in the thread notes something along those lines, saying that Emacs doesn’t offer “instant gratification” but requires effort to get into. And at some point it’s just a philosophical discussion about what is better. Having invested the time and effort, I certainly think it is worth it, and believe that’s the case for many others too.

                                                                                                                1. 3

                                                                                                                  IDEs are actually quite complicated and come with their own sets of quirks that people have to learn. I was very comfortable with VS Code because I’ve been using various Microsoft IDE’s through the years, and the UI concepts have been quite consistent among them. But a new user still needs to internalize the project view, the editing view, the properties view, and the runtime view, just as I as a new user of Emacs had to internalize its mechanisms almost 30 years ago.

                                                                                                                  It’s “easier” now because of the proliferation of guides and tutorials, and also because GUI interfaces are probably inherently more explorable than console ones. That said, don’t underestimate the power of M-x apropos when trying to find some functionality in Emacs…

                                                                                                                2. 3

                                                                                                                  Yeah, I use plugins in every editor, text or GUI. I’ve never written a plugin in my life, nor will I. I’m trying to achieve a goal, not yak-shave a plugin along the way.

                                                                                                                  1. 3

                                                                                                                    I’m trying to achieve a goal, not yak-shave a plugin along the way.

                                                                                                                    That’s my point. Emacs offers the possibility that extending the environment isn’t a detour but a method to achieve your goals.

                                                                                                                    1. 5

                                                                                                                      Writing a new major mode (or, hell, even a new minor mode) is absolutely a detour. I used emacs for the better part of a decade and did each several times.

                                                                                                                      I eventually got tired of it, and just went to what had the better syntax support for my primary language (rust) at the time (vim). I already used evil so the switch was easy enough.

                                                                                                                      I use VSCode with the neovim backend these days because the language server support is better (mostly: viewing docstrings from RLS is nicer than from a panel in neovim), and getting set up for a new language is easier than vim/emacs.

                                                                                                                      1. 1

                                                                                                                        It’s not too surprising to me that somewhere between automating a task by writing a command and starting an entire new project, a line can be drawn where it becomes a detour. But even then, I don’t think it’s that clear-cut. One might start by writing a few commands, and then bundle them together in a minor mode. That’s little more than creating a keymap and writing a bare-minimal define-minor-mode.

                                                                                                                        In general, it’s just like any automation, imo. It can help you in the long term, but it can get out of hand.
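                                                                                                                        To make “little more than a map and a define-minor-mode” concrete, here’s a sketch; the command and mode names (my-insert-date, my-notes-mode) are invented for illustration:

                                                                                                                        ```elisp
                                                                                                                        ;; A command you might have written to automate a task.
                                                                                                                        (defun my-insert-date ()
                                                                                                                          "Insert today's date at point."
                                                                                                                          (interactive)
                                                                                                                          (insert (format-time-string "%Y-%m-%d")))

                                                                                                                        ;; Bundling it into a minor mode: a keymap plus define-minor-mode.
                                                                                                                        (defvar my-notes-mode-map
                                                                                                                          (let ((map (make-sparse-keymap)))
                                                                                                                            (define-key map (kbd "C-c d") #'my-insert-date)
                                                                                                                            map)
                                                                                                                          "Keymap for `my-notes-mode'.")

                                                                                                                        (define-minor-mode my-notes-mode
                                                                                                                          "A tiny minor mode bundling personal note-taking commands."
                                                                                                                          :lighter " Notes"
                                                                                                                          :keymap my-notes-mode-map)
                                                                                                                        ```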

                                                                                                                  2. 2

                                                                                                                    Although I tend to use Vim, I actually have configured Atom with custom JS and CSS when I’ve used it (it’s not just JSON; you can easily write your own JS that runs in the same process space as the rest of the editor, similar to Elisp and Emacs). I don’t think the divide is as sharp as you might think; I think that Emacs users are more likely to want to configure their editors heavily than Atom or VSCode users (because, after all, Elisp configuration is really the main draw of Emacs — without Elisp, Emacs would just be an arcane, needlessly difficult to use text editor); since Atom and VSCode are also just plain old easy-to-use text editors out of the box, with easy built-in package management, many Atom/VSCode users don’t find the need to write much code, especially at first.

                                                                                                                    It’s quite easy to extend Atom and VSCode with JS/CSS, really. That was one of the selling points of Atom when it first launched: a modern hackable text editor. VSCode is similar, but appears to have become more popular by being more performant.

                                                                                                                  3. 7

                                                                                                                    but I think the primary reason for most people not even considering Emacs as their editor comes from the fact that the aforementioned programs are customizable using a language that they are already familiar with, either JS or Python

                                                                                                                    I disagree.

                                                                                                                    I think most people care that there’s a healthy extension ecosystem that just works and is easy to tap into - they basically never want to have to create a plugin themselves. To achieve that, you need to attract people to create plugins, which is where your point comes in.

                                                                                                                    As a thought experiment, if I’m a developer who’s been using VS Code or some such for the longest time, where it’s trivial to add support for new languages through an almost one-click extension system, what’s the push that has me looking for new editors and new pastures?

                                                                                                                    I can see a few angles myself - emacs or vim are definitely snappier, for instance.

                                                                                                                    EDIT: I just spotted Hashicorp are officially contributing to the Terraform VS Code extension. At this point I wonder if VS Code’s extension library essentially has critical mass.

                                                                                                                    1. 3

                                                                                                                      Right: VS Code and Sublime Text aren’t nearly as featureful as Emacs, and they change UIs without warning, making muscle memory a liability instead of an asset. Their popularity comes from marketing and visual flash, which Emacs currently doesn’t have, but Emacs is what you make of it, and rewards experience.

                                                                                                                    1. 2

                                                                                                                      I started using Debian Stable for my desktop after I unexpectedly had Arch fail to boot to X* (again) right as I was struggling to hit a major paper deadline.

                                                                                                                      Previously, I’d switched from Ubuntu to Arch because it let me keep up-to-date packages without the headache of Ubuntu’s dist-upgrade (and incredibly premature use of things like pulseaudio and Unity). It worked 99% of the time, but that 1% nearly fucked me over in a big way.

                                                                                                                      I’ve been running Debian Stable for two years now and have yet to ever have it fail to boot to X. During paper deadlines, this is wonderful because if I happen to need to update a library in order to make someone’s code compile, I can just do it and be confident that it won’t cost hours of time getting my system to boot up again.

                                                                                                                      (* When Arch broke, it was because I had to update a library (libigraph if memory serves), which in turn necessitated updating libc, which cascaded into updates everywhere and then lo-and-behold the system couldn’t fully boot until I tracked down a change in how systemd user units worked post-update.)

                                                                                                                      1. 2

                                                                                                                        We do have graph editors, they’re just all proprietary and/or use some arcane format that isn’t nearly as straightforward as text encoding.

                                                                                                                        1. 2

                                                                                                                          Yup, I faced this problem 6 months ago.

                                                                                                                          Even though they are far from being perfect, there are some FOSS graph editors.

                                                                                                                          Gephi: https://github.com/gephi/gephi is really nice. The problem I have with it is its stability. Other than that, you can edit and analyse gigantic graphs from a single tool, which is really nice.

                                                                                                                          I agree with you about the format problem. Even though GraphML is quite widely supported, getting interoperability with that format is quite hard, mostly because of poor implementations. For instance, pygraphml (https://github.com/hadim/pygraphml), which is the de facto standard GraphML library for Python, and Gephi’s GraphML importer are not compatible (a problem with node labels, if I remember correctly).

                                                                                                                          1. 3

                                                                                                                            Depending on what area you’re in, GraphML may not even be feasible. I work with a good deal of social network data, and encoding any remotely large dataset in GraphML would be insane both in terms of parsing time and disk usage.

                                                                                                                            A large/medium (depending on who you ask) dataset I use for testing is nearly 50GB in ye olde edge list format, which takes about 20 minutes to read and parse. GraphML would take even longer. I use a binary format which reduces it to about 12GB and which takes 15-30 seconds to read depending on disk speed.
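                                                                                                                            The commenter’s exact format isn’t specified, but the idea is easy to sketch; a minimal version in Python, assuming node IDs fit in unsigned 32-bit ints (8 bytes per edge, no text parsing on read):

                                                                                                                            ```python
                                                                                                                            import struct

                                                                                                                            def write_edges_binary(path, edges):
                                                                                                                                """Pack each (u, v) edge as two little-endian unsigned 32-bit ints."""
                                                                                                                                with open(path, "wb") as f:
                                                                                                                                    for u, v in edges:
                                                                                                                                        f.write(struct.pack("<II", u, v))

                                                                                                                            def read_edges_binary(path):
                                                                                                                                """Read fixed-size 8-byte records back; no tokenizing or int() calls."""
                                                                                                                                edges = []
                                                                                                                                with open(path, "rb") as f:
                                                                                                                                    while (rec := f.read(8)):
                                                                                                                                        edges.append(struct.unpack("<II", rec))
                                                                                                                                return edges
                                                                                                                            ```

                                                                                                                            Compared with a text edge list (“1234 5678\n” is ten-plus bytes and needs parsing per edge), fixed-size records can be read in large chunks or mmapped, which is where that kind of load-time difference comes from.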

                                                                                                                            This is fundamentally why there are so many different formats for graph/tree data: such a general structure sees usage in a huge variety of fields, and therefore there are a huge variety of requirements for what it can represent, how efficiently it needs to do so, etc. No one format can possibly meet all of these requirements.

                                                                                                                          2. 1

                                                                                                                            Which programs do you have in mind?