In other words, vi -> vim -> neovim would be a reasonable learning path, but beginners don’t do that and the NeoVim team actively recruits people to their cause without any consideration for the importance of a progressive learning approach.
Wait, wha? I can't use a tool because I didn't learn its ancestors? I didn't learn vim from vi, just like I am sure people forgot to take the journey of ed -> ex -> vi -> vim. Are we expected to learn Emacs from TECO Emacs -> Gosling -> GNU -> XEmacs -> GNU?
NeoVim looks in $XDG_CONFIG_HOME for its configuration files which means that it follows the ~/.config/… location convention that is now the Linux standard. I love this! I love their concern for this standard. Unfortunately, after more than two decades, no one cares because you are already maintaining your Vim configuration in a dotfiles repo and providing symbolic links.
I errr version ~/.config too so ¯\_(ツ)_/¯. Also, is it that terrible to have this?

```
$ cat ~/.config/nvim/init.vim
" I am lazy and have lots of ~/.vimrc stuff, pretend for my old self
set runtimepath^=~/.vim runtimepath+=~/.vim/after
let &packpath = &runtimepath
source ~/.vimrc
```
The second thing listed on NeoVim’s comparison page is the 42 different defaults from Vim. These are completely and totally irrelevant because anyone using Vim should always disable all the defaults and begin with a clean slate in their vimrc file.
You mean the defaults that all the distributions and every vimrc file in existence set? I guess I must force new users to learn why backspace is a bit weird and how that still relates to ex?
… the biggest being full shell integration for extensibility, not supporting Lua and NodeJS plugins. NeoVim has made itself into a serious joke among those who know and use Vi/m as has been done for decades for all the right reasons. … json_decode are just silly when commands like jq exist. They even renamed viminfo to shada for nothing but vain not-invented-here reasons. And Lua and Python support? Pffff. Please. You can be glad you learned to use Vi/m correctly and without a bunch of unnecessary bloat that would directly affect your performance on every other system with Vi while diminishing your ability to actually use your most powerful tool, the shell in which Vi/m is running.
Huh? json_decode is in vim too?
The if_perl has been dropped. Nothing screams “we are all morons” more than dropping Perl support from something that has had it for 2 decades just because you buy into the trendy Perl-hate.
I guess Python or Lua is bloat, but Perl is not? In some ways we should pour one out for Perl, but it is on the wane. Do you care that mzscheme is also gone now?
NeoVim removed several core tools used regularly by Vim users for seriously important use cases:
Maybe I am missing something?
ex - binary not installed (vim does)

nvim -e? I bet a symlink would work too (you know how vim does this, right?)

:ex - not accessible from vim command line

You got me, but I guess Q is a bad key?

view - cannot run vi in read-only mode

nvim -R?
… etc etc
Again, incredibly inexperienced decisions from people who never actually learned to use Vim for anything significant in the first place. The fact that they removed :shell completely confirms they don’t value shell integration which is the basis of all of Vim’s magical power. The fact that they removed vimdiff shows none of them have ever worked on any cybersecurity project of any significance.
Ok, so :shell is gone (I had to look it up; I didn't miss it since tmux and/or CTRL-Z and/or :! cover it), but vimdiff is still there. I would be surprised if I suddenly could not do a git merge.
I kinda gave up at this point; it feels like “old man yells at cloud”.
I like the idea of broot, but it glitched out between the UI and rm, and I wound up blowing away a lot of files by accident.
I never got any such report. Would it be possible to have more details (in a GH issue or in the chat if needed)?
Chat probably works; it's hard to reproduce (and I don't want to :))
I was using the previous sort-by-size magics (which seems to now be --sort-by-size), finding dirs and then asking broot to delete. That runs rm under the hood, which at the time freaked broot out (it tried to refresh and put me in the top level); because of lag I would end up hitting the key combination again and then … toasting the parent (very large dir).
I might give it another spin and see if it lags out on me again
Hmm, is this wise? The urban legend has always been that ASan has big security holes.
The major specific vulnerability described here seems to be specific to suid binaries, which Firefox is not.
Levine’s classic. Probably the first online book I have referenced in a “publication”; my high-school (A-Levels) project was an x86 disassembler. Actually, it was a database course management project, but I changed it a month before graduation, and my teacher refused to grade it; so she let me do disassembly with the tacit understanding that I wasn’t gonna get any help from her, and I was at the mercy of the outside graders. (I also changed the implementation language from Pascal to C and x86 assembly)
It was my first real program. And I bled. The x86 binary format is not for the faint of heart, at least not for a non-programming teenager.
And I didn’t do well ;-)
Trivia time: John Levine is (was?) the moderator of comp.compilers in the 90s, which I read religiously. He would edit posts with his own addenda (“[I think the Dragon book has this algorithm. -John]”), etc. I didn’t realize it was an edit, so I took to asking questions on usenet and other online fora, but if I ever had a doubt about my question, I would add a “-John” addendum at the end with my own alternate theories. To this day, my teenage alias is archived with that embarrassing signature in perpetuity.
Another trivia: one night I refused to go hang out with my teenage friends because I wanted to read up on SML/NJ and work through some of Norman Ramsey’s “Hacker Challenges”. Ramsey was then at Harvard, IIRC, and had a page of challenges for “elite” “hackers”. Me being a “blackhat” then, I totally misunderstood the label; I spent close to a month studying SML/NJ, because one of Ramsey’s challenges was an optimizing linker for SML/NJ. I pored over Levine’s book and all sorts of publications trying to live up to this challenge. I thought writing a linker for SML/NJ would label me “elite” and give admission into exclusive IRC channels for top criminals ;-)
That month was the last time I had casual friends for the next decade. Every one of those boys moved on, and I never noticed us growing apart. The next time I looked up from this “research”, it was 2 years later and I was by now a Unix programmer (up, or down, from a Win 9x script-kiddie). I missed prom, homecoming, graduation, New Year’s Eves … the entire millennium, and I didn’t even care. I had better things to do.
I found a new, different kind of pride. I was no longer another immigrant Somali kid “hustling” in America; I now had role models. I was better than bad-ass, I was curious. And I am grateful to this day that I did!
Hey, that’s me! Thank you for sharing :)
Np, I like the approach and kinda want it to work in Firefox; maybe I will fiddle with a pull request at some point.
It more-or-less works on Firefox right now. It seems to crash occasionally, but I’ve been able to use it. The only thing is I haven’t looked into packaging it.
Hey, could you describe the format of the annotations? People have tried to do stuff like this before (in a more centralized way) and the hard — I would almost say “insoluble” — problem is how you refer to the text being annotated, when the page it’s on can change arbitrarily.
(IIRC, much of the complexity of Ted Nelson’s (in)famous Xanadu had to do with this. In Xanadu, links could point to any range within a document.)
This kind of reminds me of Chrome’s fragment anchors - a worse-is-better version of Xanadu’s.
Right now, the annotations contain a pair of (path, offset) tuples. The first (path, offset) indicates where an annotation starts, and the second one where it ends. The path is for selecting a DOM node, and the offset is a textual offset within that node. I’m aware that this is not how it’s always done in other systems. For instance, I’ve seen a pattern-matching approach (“text that starts with A, and ends with B”); it seems like this would malfunction on pages where similar text is often repeated. However, that approach would prevent structural changes (wrapping the content in a div, say) from mangling the annotation.
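A hedged sketch of how such a (path, offset) pair might be resolved. The exact path encoding isn't specified above, so this assumes the path is a list of child indices walked from the root element; the names and the toy document are illustrative, not Matrix Highlight's actual code:

```python
# Illustrative sketch: resolve a (path, offset) pair against a DOM-like tree.
# Assumption: `path` is a list of child indices from the root, and `offset`
# is a character offset into the selected node's text.
import xml.etree.ElementTree as ET

def resolve(root, path, offset):
    """Walk child indices from `root`, then slice the node's text at `offset`."""
    node = root
    for i in path:
        node = list(node)[i]          # descend to the i-th child element
    return node, (node.text or "")[offset:]

doc = ET.fromstring("<body><p>intro</p><p>annotate this text</p></body>")
node, tail = resolve(doc, [1], 9)     # second <p>, character offset 9
print(node.tag, tail)                 # -> p this text
```

Recording an annotation would be the same walk in reverse: from the selected node up to the root, collecting each node's index among its siblings.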
You’re completely right that pages changing presents a serious problem. There was a writeup by hypothes.is (I’m struggling to find it now) about the sort of annotation model they use, which involves several different schemas (e.g., a combination of both formats I’ve described above). I am hoping to apply a similar strategy to Matrix Highlight. However, it’s currently in its early stages, so I’ve had to worry about other parts of the software.
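For illustration, the quote-style fallback selector could look roughly like this: anchor on the quoted text itself, and use surrounding context to disambiguate repeats. `anchor_by_quote` and the prefix strategy are my own hypothetical names, not hypothes.is's or Matrix Highlight's actual API:

```python
# Illustrative sketch of a quote-based selector with context disambiguation.
def anchor_by_quote(page_text, quote, prefix=""):
    """Return (start, end) of `quote`, preferring a match preceded by `prefix`."""
    start = page_text.find(quote)
    while start != -1:
        if page_text[:start].endswith(prefix):
            return (start, start + len(quote))
        start = page_text.find(quote, start + 1)
    return None                       # quote no longer present on the page

page = "red fish blue fish"
# "fish" appears twice; the prefix selects the second occurrence.
print(anchor_by_quote(page, "fish", prefix="blue "))  # -> (14, 18)
```

This survives structural changes (the text stream is unchanged if the quote is merely re-wrapped in a div) but fails if the quoted text itself is edited, which is why combining it with the (path, offset) scheme is attractive.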
Ultimately, no annotation format is resilient to a page being completely replaced or fundamentally altered. Another hope of mine is to integrate with a web archiving technology (e.g. WARC) and store an archive of the page being annotated at the time it was accessed. This way, even if the original changes, the old, annotated version will remain intact.