[vim’s] development seemed slow and sometimes rejected clear improvements for no obvious reason. I wasn’t the only person who felt like this: a motivated group of developers eventually forked Vim to create Neovim.
Apparently I cannot say this enough to matter, but this myth needs to die. This is not why Neovim was created. Thiago wanted to rewrite the core of Vim around a different message-loop paradigm. He did not try to get his work merged into Vim; it would have been extremely difficult to do so in a backward-compatible way. He understood that and decided to fork so he could pursue those ends on a shorter timeline than a major Vim release schedule. The idea that Vim was resistant to contributions came from other, less genuine actors.
I really worry about increasing dependence on LLMs becoming a local minimum that leaves so much on the table. Typing is slow, and waiting for responses from LLMs is slow. I get using them for “refactor this code base to do X, Y, and Z…”, and if they work, great. But for small snippets of code, it’s sad.
This is even worse in the data science space. Most operations that you want to do on an SQL table or pandas DataFrame follow very specific patterns. It’s just awkward to remember those 20 operations, and none of the tools in those spaces focus on the UI aspects.
Why will 100 people today look at a column of mostly numbers with a couple of obviously errant strings and have to remember the commands to fix that? Why don’t we have tools that look at the column and suggest one or more transformations based on its general shape? Why don’t our tools let us toggle between those different transformations?
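To make that concrete, here is a minimal sketch of the kind of transformation such a tool might offer, assuming pandas; the column and its values are invented for illustration.

```python
import pandas as pd

# A column that is mostly numeric with a couple of errant strings.
# (Hypothetical data, invented for illustration.)
col = pd.Series(["12.5", "3.1", "n/a", "7.0", "oops", "42"])

# One transformation a tool could suggest: coerce to numeric, turning
# anything unparseable into NaN so it can be inspected later.
numeric = pd.to_numeric(col, errors="coerce")

# Another: keep only the rows that parsed cleanly.
clean = numeric.dropna()

print(numeric)
print(clean)
```

A UI could show both results side by side and let you toggle between “coerce to NaN” and “drop the bad rows” before committing to either.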
I wonder about this backwards in time, too. Back in the day, they wrote Unix using a computer with less capability than my headphones, by writing stuff out on paper and then retyping it onto punch cards. Did we lose focus when we went to video terminals because it was suddenly “so easy” to just try stuff out and run it instead of really thinking it through beforehand?
I’ve been thinking about this for over ten years and I still have no answer. On the one hand, we have capabilities we didn’t have before, like emulating the CPU in an assembler to run unit tests, and on the other, yes, it is way easier to just try stuff out instead of thinking about it beforehand.
I am not sure emulation is a good example :-)
Microsoft BASIC was developed in the 1970s on a PDP-10 using an 8080 emulator.
The ARM was designed in the 1980s using an emulator written in BBC BASIC running on a BBC Micro.
No, I meant running an emulated CPU in the cross-assembler for said CPU to run tests.
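For what it’s worth, a minimal sketch of that idea, assuming a made-up three-instruction CPU; the opcodes, the emulator, and the test are all invented for illustration, not any particular historical toolchain.

```python
# Toy emulator for a made-up 3-instruction CPU, driven by a unit test.
# Opcodes, registers, and the program are all invented for illustration.

LOAD, ADD, HALT = 0x01, 0x02, 0xFF

def run(program):
    """Execute a tiny hand-assembled program and return the accumulator."""
    acc, pc = 0, 0
    while True:
        op = program[pc]
        if op == LOAD:        # LOAD imm -> acc = imm
            acc = program[pc + 1]
            pc += 2
        elif op == ADD:       # ADD imm -> acc += imm
            acc += program[pc + 1]
            pc += 2
        elif op == HALT:
            return acc
        else:
            raise ValueError(f"unknown opcode {op:#x} at {pc}")

def test_add():
    # The "unit test" for code targeting the emulated CPU: assemble a
    # program by hand and check the machine state it leaves behind.
    assert run([LOAD, 40, ADD, 2, HALT]) == 42

if __name__ == "__main__":
    test_add()
    print("ok")
```

The point is only that the test exercises the hand-assembled program against the emulated machine state rather than against real hardware.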
Why on earth should fallible, expensive, high-calorie-per-flop meat do slowly and poorly what the cheap fast metal can do for us?
Because the metal does its things quite a few times, and if the high-calorie-per-flop meat thinks a little bit, the metal doesn’t have to do as much. Then the metal goes much faster and it wastes less time of all the other high-calorie meat that has to consume what the metal does.
You have the world’s most powerful neural network (for the next couple of years) on your shoulders. It makes sense to train the neural network instead of using a fixed algorithm that doesn’t benefit from training.