I’ve done something similar to the repetition aspect of devil in vim with vim-movefast.
Of course a lot of the things that devil does already only require single keys in vim due to its modal nature, but there are still some things that require either a chord (<C-u>) or multiple keys (gt) which are repeatable and can be reduced to single keys with vim-movefast. I've come to rely heavily on single-key scrolling with <Space>jjjk rather than <C-d><C-d><C-d><C-u>.
So, this is funny, but I’ve found myself using both the Godot and Tic-80 built in text editors for a lot of things, and only missing Vim keybinds at the extremes.
I realise that it’s not related to tic-80 but you reminded me…
I played a bit of TIS-100 on Steam recently, where you program a series of extremely limited pseudo-assembly processors. I found the built-in editor so frustratingly not-vim that I played the game by finding the save files on my file system and editing them directly in vim, then reloading the game in Steam to run them.
I work in a context that only exists because copyright laws protect creators and enable them to create on a professional and not hobby basis. So I’m in no way a believer that copyright is inherently evil.
But archive.org is my only way of accessing the vast amount of literature (fiction and non-fiction!) published between the end of the public domain (1920s) and, say, about 2010. It is really disingenuous for publishers to claim that they are losing money from these works being available, because they are not publishing them themselves.
I would like to share the books I read and loved as a child with my children. Often archive.org is the only way to do that.
I would have a lot more sympathy for the publishers if they provided a blessed legal version of archive.org with their entire back catalog (unedited!). I would in all seriousness pay for access to that.
I used to work professionally as a musician. In theory, copyright law protected my livelihood; in practice, copyright was how various thugs would shake down music venues for royalties, and often I would be told by venue owners/managers that we needed preapproval on every song we played.
Copyright is inherently unfair; some artists will get lucky and their particular expression of a popular idea will be legally protected, while most artists will be ineligible for the same protection.
The law in this case only holds up if the person who owns the copyright has enough money and time to pursue enforcing it.
When we talk about laws and rules, I think that point gets glossed over. Lessig (1999) identifies four elements that regulate behavior (online): law, norms, markets, and code/architecture:
Code/architecture – the physical or technical constraints on activities (e.g. locks on doors or firewalls on the Internet)
Market – economic forces
Law – explicit mandates that can be enforced by the government
Norms – social conventions that one often feels compelled to follow
In the case of copyright I feel like the market piece dominates the rest as the major players have so much money invested in the system.
I would love to see an update to copyright law that requires publication (ideally with no DRM that in any way impedes fair use) to protect works. It annoys me immensely that companies can simultaneously refuse to sell something to potential customers at any price and claim lost revenue when those same people acquire copies elsewhere.
It would be difficult to implement well. A publisher could easily withdraw something from publication long enough for copyright to lapse and then reintroduce it without having to pay the author, but I can imagine that, with sufficient safeguards, a rule that copyright lapsed three years after last in-copyright publication would be implementable.
The original Statute of Anne had a term of 14 years. I proposed life of author + 15 years a few months back and was blasted by a copyright abolitionist for my pains 😉
Another, related issue is that once copyright is equated with property, it can be sold and signed away to entities with much wider powers than an author or a composer, and the work is no longer a cultural artifact but an investment item.
I don’t want to abolish copyright, but I wouldn’t even go as long as life+. Maybe the 28 years the US used for a long time. It seems to me that long copyrights lose their original purpose and just become a way to seek rents.
Just had an interesting idea for a novel. In a world where copyright is limited only by the lifetime of the author, building up a valuable enough body of work might put quite the price on one’s own head.
For books, at least, the vast majority of the revenue is made in the first 7 years (and more than half of that in the first two) typically. This changes a bit with things like TV or film adaptations, which take a long time to organise. I could imagine copyright permitting redistribution of an original work after 10 years and permitting derived works without a license after a bit longer.
I think one of the big problems with copyright at the moment is that it isn’t tailored for domains. Software, for example, is completely obsolete when it comes out of copyright (the first ever copyrighted program is still under copyright). I doubt anyone benefits from, for example, Windows NT 4 or MacOS 9 still being under copyright, but the fact that they are makes (legally) emulating environments to run old software difficult.
That makes sense. To retain a trademark you have to use it and take efforts to defend it. It makes sense to me to apply the same “use it or lose it” principle to other intellectual property.
I think the major problem would be encoding that into a law that's enforceable and not easily gamed. We don't want publishers doing 10-book print runs every 5 years and throwing them in the landfill to retain their rights.
Interestingly enough, one of the reasons George Lucas got the toy rights back for Star Wars was that the license originally sold off said the toy publishers had to make toys every year, and they neglected to do that. (paraphrasing)
You could have full rights revert to the author on the expiry of those three years; the author would then have three years of their own to fulfill the publication requirement. And maybe a provision for an author to give notice that they're going to publish before the publisher's three years are up.
I would like to share the books I read and loved as a child with my children. Often archive.org is the only way to do that.
Just to add to that, re-issuing books is not always the straightforward affair it’s sometimes made to be.
One of my favourite things in this world, which I've always carried with me when I moved, is an obscure children's book I've had since I was a child. It has large, page-sized illustrations scattered throughout it, like most children's books have. Thing is, it was published at a time when things were… expensive where I'm from. So the illustrations aren't coloured; they're sort of in the style of 1940s Walt Disney newspaper comics. That was obviously not a problem for me, or most other children my age: we had crayons, so we all coloured our illustrations.
I can’t quite describe how popular this was. 30+ years later we still ask “what colour did you make the balloon?” and we all remember what colour we made it. The book was a hit among children our age, it was fun and easy to follow along. And, while not a colouring book, and very much a “serious” book otherwise – it’s literally a novel! – the fact that we could “contribute” to it to some degree was a huge part of the experience. For many people my age, this book was the gateway into reading, the first “long” book they’d ever finished, the first book they’d read cover to cover more than once and so on.
Anyway, this book was republished a few years ago, which I learned from a friend of mine who'd bought it for her child, along with a pack of crayons. However, in keeping with the times, the publisher decided to reissue the book with full-colour illustrations, which she only realized when they unwrapped the damn thing and her child flipped through it and asked what he was supposed to do with the crayons now. Presumably someone in marketing thought the children might get bored or whatever. The book is still fun but… it's not quite the same book, you know?
Thing is, an uncoloured copy of it is not easy to get these days. One pops up in used bookstores every once in a while, but precisely because it was so popular, and especially after it was reissued in a slightly different format, it’s crazy expensive.
@nickspoon's question sent me down the Internet rabbit hole last evening. I had no idea it had ever been published in English; I could've sworn it was mostly an Eastern European thing!
Given that's the case, and based on how many people upvoted both your and nickspoon's questions, I think it's worth sharing – the English translation was published as “The Adventures of Dunno and His Friends”. archive.org actually carries a 1980 English edition that's very similar to mine, which is a local edition – it has a handful of full-colour illustrations but most of them are black and white. Some modern editions coloured some, or all, of those black and white illustrations, too.
I think it was published pretty much all over Eastern Europe; I know people from Albania, Poland, and the Czech Republic who’ve read it. I’m not sure if it was equally popular all over Europe and across all generations. It was hugely popular in my generation. Basically everyone I know in my age group (35-40) has read it.
Two huge disclaimers: 1) this is a book from a very different era, and from a very foreign place. Some of it may have aged poorly, especially if read by someone who doesn’t know its original cultural context – not to the point where it’s offensive, I think, but nonetheless, 2) please don’t take this as my endorsing anything about it other than that it was fun for young kids to read back then.
I work in a context that only exists because copyright laws protect creators and enable them to create on a professional and not hobby basis.
While I don't disagree with that statement, I don't think the inverse is true: that having no copyright would prevent a professional basis. Another facet to this is that copyright can certainly also prevent creators from doing the same, in many different ways.
I also think large parts of culture that creators rely on only even exist because copyright for some reason (time, waiving, etc.) didn’t exist or was broken.
Social networks succeed by membership and use, and they catch on by being discussed. How the heck do you even pronounce this word? How does a listener know how to spell that sound, in order to find it and join? These are aspects of a product, especially of a social product, that ought to be foolproof.
“tumbler” is an existing English word, and the “blr” part of Tumblr can’t really be pronounced in any other way in English. “Tumblra” doesn’t work, for example.
Nostr isn't close to any existing word, and could as easily be pronounced “noster”, “nostra”, “nose-ter”, “no-sitter”, etc.
Tumble is a pretty normal English word, to this non-native speaker.
Nostr makes me think of Latin (Nostradamus, Nostrum), and how the (probable?) English pronunciation would be different than many other parts of the world.
Very thorough article! I’d be interested in an updated version.
3 years isn't that long, but in that time LSP has “happened”, so I would rather use an LSP server for actual code editing so I get a consistent experience with editing other languages. I should be able to edit lisp using the same LSP client (vim-lsp, ALE, neovim native LSP, etc.) and therefore the same mappings I use in e.g. python and typescript, to get completions, go to definition, find references, semantic highlighting (although this is presumably much simpler and easier to do with regex in lisp than in lots of other languages).
The really interesting stuff (to me) in the article is about the REPL integrations and debugging, so having that decoupled from the editing support would be very interesting.
Got to give Lenovo some credit for how they complied, though. They could have made it so you would have to go through a complicated process of adding the keys you need to boot anything other than Windows. But they put a simple toggle switch to enable the most common case (you are booting a Linux distribution that uses the Microsoft 3rd party shim for secure boot).
I mean, for all we know, Lenovo might be getting a good deal out of this as well – sure, there may be no money to be taken straight from customers for it, but that’s not the only source of money nor the only metric for a good deal. Just because it was a contractual obligation (if it was) doesn’t mean it was a bad deal or that Lenovo were forced to make it.
Not sayin’ you’re not right to suspect Microsoft being nasty here, just that I’ve found it best to distribute suspicion evenly (and in very generous amounts :-D) among corporate actors.
Well, I will say that assuming this was actually a contractual obligation, the culpability is on Microsoft, and Lenovo did the best they could. I’m not letting Microsoft off the hook at all here.
Because some tech people still work at M$ for some reason. It's mind-boggling how legions of people who dedicate their lives to understanding technology end up creating walled gardens.
Ok. One shouldn't be surprised by people being willing to do questionable things for money. But this is literally building something that they are probably going to be working around one or two employers later.
Or is it different in the US? Is it like really simple to just call one of your friends at M$ and get them cooperating, unlike from EU?
The overwhelming majority of developers in the world will not be affected by this. It's not like a .NET developer is going to go out of their way to replace the OS on their work laptop.
I have done exactly that and it’s great for me, and actually great for my company, to have at least one person who knows their way around a linux machine (my system dependencies better match cloud dependencies and build chains, if nothing else). And dotnet works really well natively on linux now.
I would hope that developers as a group are curious enough that this is pretty common. Realistically I’m sure that the “majority” will only ever use Windows but I like to think and hope it’s not the “overwhelming majority”.
It’s not like M$ is somehow worse than all the other big tech companies. I’m sure one could quibble over details, but at a high-level, I’d argue they are all basically the same in terms of walled-garden love.
Sure, but M$ walling off the general purpose computer segment would affect me way more than Google and Apple claiming the phone segment. I make a living off Linux on my (and some of my clients') productive general purpose computers, after all.
And I am surprised that other devs don't mind. I thought the idea was that we are the ones who could always walk away and disrupt as much as we'd like.
If this continues, we won't be able to install Linux and Kodi on a NUC to provide 4k video for the occasional movie screening in our favorite coffee shop. It would have to be Windows. Meaning, among other things, no SSH remote administration, license fees, reboots in the middle of the movie, and so on.
We’d have to buy either underpowered RPis or some unlocked industrial devices at much higher price point.
Or my aunt's laptop. She can't afford a new one, so she has an old Lenovo with Linux, making the device useful for a couple more years. Do we just throw these machines away once the Windows support ends? Or do we run insecure and/or slow devices?
I kinda like how things are now. The freedom of compatible hardware and software without artificial barriers is just more productive.
For the record, I agree with the “unkind” assessment of my comment, but I was definitely not trolling.
I truly believe that people who help build these walled gardens should think about the harm they are doing instead of just throwing their arms up in the air with “it's just a job”.
IMO this requirement is entirely reasonable. I think it’s more important for a company to provide strong defaults for the majority of its customers than to make things easy for competitors, and that shouldn’t change just because a company is the dominant player. Notice I said defaults, not forced restrictions. Booting an alternative OS is an esoteric task that only a tiny percentage of PC customers will ever do, and I think it’s fine to require such customers to tweak a security setting in their firmware, particularly since they don’t have to disable Secure Boot altogether. Defaults can never be perfect for everyone, but I think that having the strongest security by default for the majority of users is the right call here.
I doubt that it would be legal in Norway: lock-in mechanisms (innelåsende mekanismer), including missing compatibility, are considered unfair by Forbrukertilsynet.
If you don’t know Forbrukertilsynet, they were the first to forbid DRM on music, for similar reasons.
Regarding this article, I don't think that this is a real lock-in mechanism. It is a simple toggle to enable the shim, and also another toggle that can disable secure boot altogether. I don't think it is a lock-in to require a few keypresses to change some settings when the default is arguably more secure.
Yes, video shouldn’t be any different. Famously, as DVD-Jon found out, it’s not illegal to break DRM. Note that this is specifically about bought music and bought video.
I don't think it is a lock-in to require a few keypresses to change some settings when the default is arguably more secure.
That sounds like a key point. But it also depends on their informational obligations: How many expectations are broken, and do you need to be Matthew Garrett to debug it? I don’t think I could have guessed “the 3rd party key” if I hadn’t read the news today. If you can’t do it with the knowledge that can be expected of a customer, well, that’s a dark pattern, which is also forbidden.
I should add that I feel the law should require them to allow it to be easily turned off, so the user can actually own the machine, while at the same time requiring them to fix any issues that come up for that device during its lifetime plus some extra duration (so no disabling of the functionality ten years later).
So installing your own operating system doesn't void the warranty, and the warranty of fitness for purpose includes fixing any issues booting the device using third-party software. So if you have a disk with debian on it that boots on a few other contemporary machines but doesn't on this particular laptop - or even better, on multiple separate machines all being this laptop - then they are required to fix it.
I think EU might be slowly getting there with the sideloading mandate. I don’t think they will explicitly cover different OSes, but the wording might be loose enough for courts to pry it wide open. Also, FSFE is probably lobbying as much as possible to include the OSes explicitly.
Even if I'm convinced they're in as well, I'm not very hopeful. It's going to include too many holes. If you think FSFE is lobbying, what do you think MS is doing? They've been proven to abuse their monopoly, and they don't stop at “lobbying”; they go straight for bribery and corruption.
EU might get the doors cracked a little. But wide open? I’m not hopeful.
I think requiring users to install their own security keys in order to install Linux would be unreasonable. But IMO, having to change one toggle switch in the BIOS is fine.
Yeah, I mean, slippery slope and all, but the first time I installed Linux on my computer like 20 years ago I had to fiddle with way more BIOS settings than a toggle switch. As long as I don’t have to recite incantations at the EFI shell, this just falls under the old “tinker with your machine until you get to the login prompt” dance that I do about twice during a laptop’s useful life.
The 400 is worth it IMHO, relative to the plain rpi4. The whole keyboard acts as passive heatsink. It outperforms rpi4 with fancy cooling solutions. And it is cheaper than these rpi4 after you add the cost of the cooling solutions.
I hadn’t heard this before and had been looking into cooling cases for an rpi4. So I had a quick search and found this article, really interesting that yes, the rpi400 stays passively cooler than the rpi4 in an active Argon case. Awesome!
I started out with vim-wiki, then decided I only needed a very minimal subset of features and moved to vim-waikiki for simple and easy markdown formatted note management.
I have my notes in a git repo backing a man.sr.ht site, which is exactly that: a git-based wiki. It gives me an easy front end for reading notes in the browser. The downside is of course that there’s no remote note editing. Fine for my needs though.
vg: Shell function to open grep results directly in Vim using the quickfix (a minimal sketch follows the examples below). A bit of expounding here, in a small blog post.
rg foo (with ripgrep) to simply view results
vg foo to use the results as a jumping point for editing/exploring in Vim.
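For the curious, the core of vg can be tiny. A minimal bash/zsh sketch, not the actual script (rg --vimgrep is ripgrep's quickfix-friendly output format):

```sh
# vg: load ripgrep matches into Vim's quickfix list and open it
vg() {
  vim -q <(rg --vimgrep "$@") -c 'copen'
}
```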
A ton of aliases to shorten frequently used commands. A few examples (one possible spelling of the definitions follows the list):
When I want to sync a Git repo, I run gf to fetch, glf to review the fetched commits, then gm to merge. git pull if I’m lazy and want to automatically merge without reviewing
glp to review new local commits before pushing with gp. Both glf (“git log for fetch”) and glp (“git log for push”) are convenient because my shell prompt shows me when I’m ahead or behind a remote branch: https://files.emnace.org/Photos/git-prompt.png
tl to list tmux sessions, then ta session to attach to one. I name tmux sessions with different first letters if I can help it, so instead of doing ta org or ta config, I can be as short as ta o and ta c
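For concreteness, here is one way those aliases could be spelled in a shell rc file; only the names come from the list above, the definitions are my guesses:

```sh
alias gf='git fetch'
alias glf='git log ..@{upstream}'   # "git log for fetch": incoming commits
alias gm='git merge'
alias gp='git push'
alias glp='git log @{upstream}..'   # "git log for push": outgoing commits
alias tl='tmux list-sessions'
alias ta='tmux attach -t'           # tmux matches unique session-name prefixes
```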
Also, “aliases” for Git operations that I relegate to Fugitive. Technically these are shell functions, but they exist mostly just to shorten frequently used commands (sketches follow the examples below).
Instead of gs for git status, I do vs and open the interactive status screen from Fugitive (which after a recent-ish update a few years ago, is very Magit-like, if you’re more familiar).
When I’m faced with a merge conflict, I do vm to immediately open Vim targeting all the merge conflicts. The quickfix is populated, and I jump across conflicts with [n and ]n thanks to a reduced version of vim-unimpaired.
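Sketches of what those two functions might look like (assumed definitions; Fugitive's interactive status screen is :Git with no arguments):

```sh
# vs: open Vim straight into Fugitive's status screen
vs() {
  vim -c 'Git'
}

# vm: put every conflict marker into the quickfix list and start jumping
vm() {
  vim -q <(grep -rn '^<<<<<<<' .) -c 'copen'
}
```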
ez: Probably my favorite one. A script to run FZF, fuzzy-find file names, and open my editor for those files. Robust against whitespaces and other special characters. I also have a short blog post expounding on it.
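The heart of such a script can be quite small. A bash sketch (the real ez surely differs) that stays robust against whitespace by passing file names NUL-delimited:

```sh
#!/usr/bin/env bash
# fuzzy-pick one or more files, then open them all in $EDITOR;
# --print0 plus mapfile -d '' keeps names with spaces/newlines intact
mapfile -d '' -t files < <(fzf --multi --print0)
(( ${#files[@]} )) && exec "${EDITOR:-vim}" -- "${files[@]}"
```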
watchrun: A convenience command to watch paths with inotifywait and run a command for each changed file. For example, watchrun src -- ctags -a to incrementally update a tags file.
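Assuming inotify-tools, the core of watchrun could be roughly this (the "--" handling and the choice of event are my guesses):

```sh
#!/bin/sh
# watchrun PATH -- CMD...: run CMD FILE for every file written under PATH
path=$1; shift
[ "$1" = "--" ] && shift
inotifywait --monitor --recursive --event close_write --format '%w%f' "$path" |
while IFS= read -r file; do
  "$@" "$file"
done
```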
notify-exit: Run a command and shoot off a libnotify notification when it finishes (whether with a successful exit code or not). I have it aliased to n for brevity (using a symlink). For example, n yarn build to kick off a long-running build and be notified when it's done.
Also, a remote counterpart rnotify-exit, which I have aliased to rn (using a symlink). For example, rn ian@hostname yarn build on a remote machine (within my LAN) to kick off a build, and have it still notify on my laptop.
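A minimal notify-exit might look like this (a sketch, not the author's script; the remote rnotify-exit would wrap much the same idea around ssh):

```sh
#!/bin/sh
# run the given command, then report its exit status via libnotify
"$@"
status=$?
if [ "$status" -eq 0 ]; then
  notify-send "done: $*"
else
  notify-send --urgency=critical "failed ($status): $*"
fi
exit "$status"
```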
And a slew of scripts that are a bit more integrated with tools I use, e.g.:
rofi-pass-type, rofi-pass-clip, and gen-pass: A set of scripts using Rofi to fuzzy-find passwords in my password manager, Unix pass, and automatically type, copy to clipboard, or generate a new one, respectively.
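As a sketch of the rofi-pass-type idea (paths and flags are assumptions, not the author's script): list the store's entries, pick one with Rofi in dmenu mode, and type the first line of the decrypted entry:

```sh
#!/bin/sh
store="${PASSWORD_STORE_DIR:-$HOME/.password-store}"
entry=$(cd "$store" && find . -name '*.gpg' \
        | sed -e 's|^\./||' -e 's|\.gpg$||' \
        | rofi -dmenu -p pass) || exit 1
# the first line of a pass entry is the password itself
pass show "$entry" | head -n 1 | tr -d '\n' \
  | xdotool type --clearmodifiers --file -
```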
I used to have a ton of these git shortcuts but at some point I threw them out again because I kept forgetting them. The only thing I use daily is git up which is git pull --rebase.
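For reference, that one is just:

```sh
git config --global alias.up 'pull --rebase'
```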
notify-exit: that’s one of those ideas that is so great you facepalm and ask yourself why you never thought of it before. I’m adding this to my config tomorrow.
After I learned about ci in vim I got hooked. All of a sudden, replacing text in quotes became as simple as ci" and now I'm having a hard time using other editors. Sometimes a little detail is all that it takes.
Just to clarify for others: in vim, if you are on a word, c starts a change and the next keystroke determines what will be changed. For example, c$ removes text from where the cursor is to the end of the line.
Now, what is new for me is vim's concept of “inner text”, such as things in quotes, or in between any two symmetric symbols. The text between those two delimiters is the “inner text”.
For example, in this line, we want to change the “tag stuff” to “anything”.
<tag style="tag stuff">Stuff</tag>
Move the cursor anywhere between the quotes and type ci" and you are left with (| is the cursor):
<tag style="|">Stuff</tag>
This is a good example of why to me learning vi is not worth the trouble. In my normal editor, which does things the normal way, and does not have weird modes that require pressing a key before you are allowed to start typing and about which there are no memes for how saving and quitting is hard, I would remove the stuff in the quotes by doing cmd-shift-space backspace. Yes, that technically is twice as many key presses as Vi. No, there is no circumstance where that would matter. Pretty much every neat Vi trick I see online is like “oh if you do xvC14; it will remove all characters up to the semicolon” and then I say, it takes a similar number of keystrokes in my editor, and I even get to see highlight before it completes, so I’m not typing into a void. I think the thing is just that people who like to go deep end up learning vi, but it turns out if you go deep in basically any editor there are ways to do the same sorts of things with a similar number of keystrokes.
There is not only the difference in the number of keystrokes but more importantly in ergonomics. In Vim I don’t need to hold 4 keys at once but I can achieve this by the usual flow of typing. Also things are coherent and mnemonic.
E.g. to change the text within the quotes I type ci" (“change inner quote”) as the parent already explained. However, this is only one tiny thing. Everything you can do with “change” (c) also works with “delete” (d) or “yank” (y), and they behave the same way.
ci": removes everything within the quotes and goes to insert mode
di": deletes everything within the quotes
yi": copies everything within the quotes
d3w, c3w, y3w would for example delete, replace or copy the next 3 words.
These are just the basics of Vim, but they alone are so powerful that it's absolutely worth learning them.
And if you want to remove the delimiters too, you use a instead of i (I think the logic is that it's a variation on i, like a alone is).
Moreover, you are free to choose the pair of delimiters: ", ', {}, (), [], and probably more. It even works when nested, and even when the nesting involves the same delimiter. Take foo(bar("baz")) with your cursor on baz: then c2i) will let you change bar("baz") at once. You want visual mode stuff instead? Use v instead of c.
One difference is that if you are doing the same edit in lots of places in your editor you have to do the cmd-shift-space backspace in every one, while in vi you can tap a period which means “do it again!” And the “it” that you are doing can be pretty fancy, like “move to the next EOL and replace string A with string B.”
I would remove the stuff in the quotes by doing cmd-shift-space backspace
What is a command-shift-space? Does it always select stuff between quotes? What if you wanted everything inside parentheses instead?
and then I say, it takes a similar number of keystrokes in my editor, and I even get to see highlight before it completes, so I’m not typing into a void
You can do it that way in vim too if you're unsure about what you want; it's only one keypress more (instead of ci" you do vi"c; after the " and before the c, the stuff you're about to replace will be highlighted). You're not forced to fly blind. Hell, if your computer is less than 30 years old you can probably just use the mouse to select some stuff and press the delete key, and that will work too.
The point isn’t to avoid those modes and build strength through self-flagellation; the point is to enable a new mode of working where something like “replace this string’s contents” or “replace this function parameter” become part of your muscle memory and you perform them with such facility that you don’t need feedback on what you’re about to do because you’ve already done it and typed in the new value faster than you can register visual feedback. Instead of breaking it into steps, you get feedback on whether the final result is right, and if it isn’t, you just bonk u, which doesn’t even require a modifier key, and get back to the previous state.
What if you wanted everything inside parentheses instead?
It is context sensitive and expands to the next context when you do it again.
Like I appreciate that vi works for other people but literally none of the examples I read ever make me think “I wish my editor did that”. It’s always “I know how I would do that in my editor. I’d just make a multiselection and then do X.” The really powerful stuff comes from using an LSP, which is orthogonal to the choice of editors.
In a similar way, if you want to change the actual tag contents from “Stuff” to something else:
<tag style="tag stuff">Stuff</tag>
you can use cit anywhere on the line (between the first < and the last >) to give you this (| is the cursor):
<tag style="tag stuff">|</tag>
Or yit to copy (yank) the tag contents, dit to delete them, etc. You can also use the at motion instead of the it motion to include the rest of the tag: yat will yank the entire tag <tag style="tag stuff">Stuff</tag>.
Note that this only works in supported filetypes (html, xml, etc.) where vim knows to parse markup tags.
I’m thankful for C#/.net for providing me a high paying career for the last 20 years. I’m even more excited about .net core, the fact that Linux is not a second class citizen, and the fact that I’ll hopefully soon be done with IIS.
Ha ha this is very very familiar. I’ve just completed migrating a huge codebase to dotnet core and running everything natively on Linux (instead of in a Windows VM) is so slick and fast.
Yeah, same boat here (though I’ve only been doing C# for 15 years now). Looking forward to my workplace moving over to .NET Core so that more of my tooling works with Emacs and I can use Visual Studio even less than I do now.
One suggestion is to rebind the default prefix (C-b) to C-z instead: while C-b moves backwards one character in the default shell config (and hence I use it all the time), C-z backgrounds a job, which I do rarely enough that typing C-z z to do so is perfectly fine.
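In tmux terms the suggestion is a couple of lines, shown here as shell commands (drop the leading tmux to put them in ~/.tmux.conf; the z binding is my guess at how “C-z z” would pass a literal C-z through):

```sh
tmux unbind C-b
tmux set -g prefix C-z
tmux bind z send-prefix   # prefix + z sends a literal C-z to the pane
```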
I background+foreground jobs pretty frequently. So I need C-z to be free.
Personally I set my prefix to C-a.
I think it’s usually used to go to the start of the input line in most shells by default, but I set -o vi in my shell so that doesn’t apply to me.
A friend of mine sets their prefix to a backtick. Which I thought was interesting, but I like to use backticks now and then…
C-a is a super common choice, as it’s the same prefix that screen uses by default. The screen folks, in turn, either had the right idea or it was a pretty lucky guess: C-a takes you to the beginning of the line, which is not needed too frequently in a shell.
On the other hand, it's also “go to the beginning of the line” in Emacs, so, uhh, I use C-o for tmux. I suppose it might be a good choice for the unenlightened vim users out there :-).
Another prefix binding that I found to be pretty good is C-t. Normally it swaps the two characters before the cursor, a pretty neat thing to have over really slow connections but also rarely used. Back when I used Ratpoison, I used it as the Ratpoison prefix key.
I think C-i (normally that’s a Tab) and C-v (escape the next character) are also pretty good, particularly the former one, since Tab is easily reachable from the home row and available on pretty much any keyboard not from outer space.
I’ve no idea why I spent so much time thinking about these things, really. Yep, I’m really fun at parties!
Yeah, I’ve used C-o for screen since I guess the mid-90s because I couldn’t deal with C-a being snarfed, possibly because I was using a lot of terminal programs which used emacs keybindings at the time… Now I’m slowly moving over to tmux and keeping the C-o even though I rarely use anything terminal-based these days.
C-o is pretty important in vim actually. Personally I use M-t, which doesn’t conflict with any vim bindings (vim doesn’t use any alt/meta bindings by default) or any of my i3 bindings (I do use alt for my i3 super key)
Oh, I had no idea, the Church of Emacs does not let us meddle with the vim simpletons, nor dirty our hands with their sub-par editor :-P. I just googled it and, uh, yeah, it seems important, I stand corrected!
Personally I set my prefix to C-a.
I think it’s usually used to go to the start of the input line in most shells by default …
And in Emacs; I use it multiple times an hour, so unfortunately that is out for me.
I think that I have experimented with backtick in screen, after I started using Emacs. I have a vague memory of problems when copy-pasting lines of shell which led me to use C-z instead.
I've used C-a in screen for ages and carried it over to tmux. Hitting C-a a to get to the start of the line is so ingrained now that it trips me up when I'm not using tmux.
Yeah, I found C-a much better than C-b, much less of a stretch, but eventually I started getting forearm pain in my left arm from all the pinky action. I've moved to M-t, most often using the right hand for alt and the left hand for t.
By coincidence, I just finished migrating a huge codebase from .NET Framework to .NET Core and got the front end running natively in linux today.
Having it run in linux is SO nice. My dev machine runs arch linux with Windows in a qemu VM, and the Windows VM runs builds and uses IIS for hosting.
However the Windows VM also sucks up a heap of memory and occasionally CPU cycles. So being able to leave it off is very very nice.
I code in vim, using OmniSharp-vim for C# language services. I actually could already do this with the .NET Framework codebase, using mono. So dropping the mono requirement is nice but doesn’t make much difference to me.
The BIG thing that I’m stoked about is that I can now debug in Vim, using vimspector with the Samsung netcoredbg DAP adapter. It works really really well. Debugging is the only reason I ever use Visual Studio any more and I’m so happy to be able to do more of that in vim. Visual Studio is a great debugger and I expect I’ll still use it for “heavy” debugging.
Thank you for sharing about Vimspector and Samsung netcoredbg DAP adapter. I’ll have to make time to investigate those. Debugging and Intellisense are the two main things that keep me using VS Code on Linux.
I’m not entirely sure what the state of Microsoft debugger licensing is, but the initial reason for looking into the Samsung netcoredbg adapter was due to this issue, where the MS debugger license said you were only allowed to use it in Visual Studio or VSCode. JetBrains made their own debugger in response to this and I suspect Samsung also made this debugger for the same reason.
I’ve been using it for hosting .NET Framework stuff as it was what we’ve traditionally used in the company and it’s easier for me to work cooperatively with the team when my environment is not too drastically different from others’.
However my new setup with .NET Core is running its own Kestrel server with dotnet start which is way easier and lighter.
What are some crustacean experiences with Treesitter?
I tried it out, despite the warnings about instability, for the refactor plugin. Got this Unity codebase to deal with, and nothing else really provided C# refactoring.
The refactoring wasn't project-wide, which caused some woes, though I could repeat the operations a few times in separate files and be happy-ish. The dealbreaker is that Neovim would quite often deny insert mode, as if a buffer setting had changed behind the scenes somewhere. Restart, my favorite!
I don’t use treesitter ATM, maybe later.
LSP is also a mixed bag. Very useful and works nice on my heirloom-grade laptop, even for Unity, but on my desktop it buffers up syntax errors for nearly every keypress and flushes them out so slowly I can jump to Unity and run the unit tests by the time LSP’s done complaining. Sometimes an error is left lingering.
This is with the same config on both machines, and OS load is always less than you’d imagine, so it’s just weird. Even mono should be the same version.
No regrets; C# needs this stuff so bad it’s worth the trade-off.
For typescript it was a game changer. Syntax highlighting in vim never looked quite right, especially in a side-by-side comparison with VSCode.
I also find the neovim ecosystem to be rich and full of new plugins because of things like tree-sitter support. There’s an entire section dedicated to treesitter enabled colorschemes on neovimcraft: http://neovimcraft.com/?search=tag:treesitter-colorschemes
Treesitter might be laggy enough or something to trigger the problem more often. Testing that fix now, and maybe I can give Treesitter another shot soon!
Could you expand on this? I would expect C# to be the language with best refactoring tools, given that both JetBrains (Resharper & Rider) and Microsoft (Roslyn) have been working on them for more than 10 years at this point.
Roslyn is MIT licensed, and omnisharp makes it speak LSP. For refactors specifically, I'd be surprised if there isn't non-editor tooling to apply Roslyn analyzers as project-wide refactors.
Sure. However, actually using omnisharp is another matter. It's definitely not on par with the experience inside of Visual Studio itself. For example, vscode, which uses omnisharp, provides a pretty demonstrably second-class experience compared with Rider and Visual Studio proper. And even then, that experience is better than what you get from C# LSP in neovim.
An example of this breakdown is that msbuild integration with visual studio is on a whole other level compared to what omnisharp can handle
I did find this one refactoring plugin, vim-refactor, that pulled in a bunch of library-like stuff, unsurprisingly conflicted with some of my maps, and didn't do C# properly in the end.
That’s why Treesitter seemed so appealing. I’m sure it’s not such a turd as my experience was, but I also have work to do beside debugging 0d plugins ;)
I do really look forward to trying it out again, but I’m happy with my LSP setup (despite the error stuff!) and able to be productive now.
I believe I'm still not understanding something. Roslyn is a very mature tool for analysis and transformation of C# code. Using tree-sitter for refactors (which knows only syntax) is like parsing html with regexes, only one level up: using syntax analysis for semantic transformations.
If, despite all this, “refactoring C# in vim” is still an unsolved problem, then there’s an opportunity for a fairly easy, fairly popular OSS project here. All the hard work of actually doing refactors has been done by Microsoft. What’s left is the last mile of presenting what Roslyn can do in an editor.
Maybe my “repeat in multiple files” is because of syntax being the wrong layer? I’m really not 100% sure!
It was palatable, and got the job done, and the dealbreaker was something else.
I do have OmniSharp running, but I haven’t seen anything related to proper refactoring in Neovim with it. I could look again and verify I have the latest version, though I did take a quick look before my Treesitter journey.
It would be beyond awesome if more native refactoring existed, but I do believe it’s more of an opportunity than an existing feature.
Current OmniSharp-vim maintainer here. All of the OmniSharp-roslyn code actions are available in OmniSharp-vim. OmniSharp-vim doesn’t actually use LSP, as both OmniSharp-vim and OmniSharp-roslyn are older than the protocol.
However the OmniSharp-roslyn server does have an LSP “mode” and so is usable without OmniSharp-vim using vim LSP plugins or neovim native LSP. I would expect all code actions to be available via LSP too but I don’t know.
I’m curious to know what kind of server-based refactorings we’re talking about exactly.
My use case is/was simple: given a code base, rename methods so they’re renamed everywhere. The start of the rename can be a method definition or a usage, and everything will be figured out. Not sure if that’s possible, even, just a wishlist thing ;)
I haven’t needed to touch constructor (or other method) argument refactoring, and I’m sure the Treesitter refactor plugin has nothing for this, but could be useful.
TIL! Thanks! Now I got a config with treesitter that disables the refactoring prompt in c# in favor of an autocmd map for omnisharp :)
Edit: yet to see what needs to be reloaded to get rid of the “does not contain a definition” (though it sure does, lol!) error, but still, this is good news :)
Edit2: jumping the gun here, :LspRestart seems to be good enough.
Right, if you’re using LSP and OmniSharp-vim together, you’ll have to find ways of keeping the servers synchronised, because you’re running multiple instances.
Yeah, it’s a bit messy but most often only when autocompleting. I approach my neovim pretty much by fixing things as they start to annoy me too much or cause a productivity hit.
Maybe I’ll get that thing sorted some day, but I’m fine for now :)
I am intrigued by the framing of the Sturm und Drang about the state of the web as being driven, to some significant degree, by politics internal to Google.
As I stated earlier this week, promo packets are what’ll do in the web.
I think a lot of developers simply lack the interest or context to process the realpolitik that shapes and distorts the fabric of spacetime for our industry.
If you refuse to understand that Google's whole business is threatened by adblockers, you'll probably be confused by some of the changes to the WebExtensions webRequest API that make that work harder. If you don't understand the desire to make SEO, crawling, and walled gardens easier, AMP probably seemed like a great return to roots.
Other companies do this too, of course. If you didn’t know about OS/2 Warp some of the Windows APIs probably seemed weird. If you don’t know about Facebook trying to own everything you do then the lack of email signup for Oculus probably seems strange. If you invested heavily into XNA you probably got bit when internal shifts at Microsoft killed XNA off. If you don’t know about internal Canonical and RHEL shenanigans, systemd and other things probably are a surprise.
Developers need to pay as much attention to the business dependencies as the technical ones.
When you’re doing a performance review at Google, you can request a promotion. If you do this, you put together a ‘packet’ including the impactful work you’ve done. New work is rewarded heavily, maintenance less so. For senior engineers, shipping major projects with an industry wide impact is a path to promotion.
Which means Google rewards doing something new for the sake of doing something new. It’s tremendously difficult to get promoted by improving older systems. Crucially, you often need to demonstrate impact with metrics. The easiest way to do that is sunset an old system and show the number of users who have migrated to your new system, voluntarily or otherwise.
Is there any material evidence suggesting that someone’s promotion is the reason that chrome will remove alert? Obviously google will push the web in the direction that juices profit, but an individual promotion? Seems like a red herring.
It is often difficult to pick it apart as it’s rarely a single person or team. What happens in large organizations is that there is a high-level strategy and different tactics spring from that. Then, there are metrics scorecards, often based on a proxy, which support the tactics delivering the strategy. This blurs the picture from the outside and means that rarely one person is to blame, or has singular control over the successes.
I haven’t followed the alert situation very closely, but someone familiar with large organizations can get a good read from the feature blurb. There is a strong hint from the language that they are carrying a metric around security, and possibly one around user experience. This translates to an opportunity for a team to go and fix the issue directed by the metrics since it’s quantifiable. The easiest way to start might be to work back from what moves the metric, but this is a very narrow perspective.
Developers may know the best things to work on, having been developers in that area for 10 years, but their impact is tracked against those top-level strategies. Management can't justify a promotion because someone else is squarely focused on the metrics that drive the strategy.
In lots of places this is called alignment. Your boss may only support X amount of work on non-aligned projects if you do at least Y amount of work on aligned projects. A classic big company alignment example is a talented person in a support department. If they can fix your biggest problem at the source, it'd be best to let them do this. However, metrics incentivize assigning them to solving N support cases per week and other targets designed for lower-skilled individuals, instead of letting them fix root causes. Eventually, they leave, unless you have smart management taking calculated risks, managing the metrics at the team level so the team is not noticed working the way it wants, seeking paths for talented people to work on the product, etc.
Many of us understand how metrics and incentives at tech companies work. Was just pointing out that it’s a bold claim to assume that chrome is removing alert due to an individual seeking a promotion.
I think about this in terms of my time at Apple – like, people ascribed all kinds of causes to various seemingly peculiar Apple decisions that to those of us on the inside were obvious cases of internal politicking leaking out.
WHATWG is a consortium of multiple companies so I’m curious why everyone is pointing the finger at Google here, or is the assertion that Google has so much power over the WHATWG and Chrome at this point that there’s no ability for other companies to dissent? (And I mean we all know that the W3C lost and WHATWG won so a consortium of vendors is the web.)
The multiple companies are Apple, Google, Microsoft, and Mozilla (https://whatwg.org/sg-agreement#steering-group-member, section 3.1b). Of the four, only Apple develops a browser engine that is not majority funded by Google.
The browser engine Apple creates is used for a whole bunch of stuff across their platforms, besides Safari:
Mail, iMessage, Media Store fronts, App Store fronts.. Those last two alone produce revenue about 4x what Google pays Apple to make it the default.
Do I wish they’d get more people using alternatives and pass on the google money? Sure. Is there any realistic chance their ability to fund Safari and/or Webkit would be harmed by not taking the google money? Seems pretty unlikely.
Yes, this. Google’s play here is less about controlling standards per se (ed: although they do plenty of that too) and more about getting everyone to treat Whatever Google Does as the standard.
WHATWG was run at inception by a Googler and was created to give Google even more power over the standards process than the hopelessly broken W3C already gave them. That they strong-armed Mozilla into adding their name, or that Apple (who was using the same browser engine at the time) wanted to be able to give feedback to the org, doesn't change the Googlish nature of its existence, IMO.
Like it or not, Google is the www. It is the driving force behind the standards, the implementations (other than Safari), and the traffic that reaches websites.
It would be odd if Google’s internal politics didn’t leak into the medium.
A lot of people seem to think that standards work is a bit like being in a university - people do it for the love of it and are generally only interested in doing what’s best for all.
In reality it’s a bunch of wealthy stakeholders who realize that they need to work together for everyone’s best - they’re not a monopoly, yet - but in the meantime it behooves them to grab every advantage they can get.
As mentioned in the post, standards work is hard and time-consuming, and if an organisation can assign a dedicated team to work on standards, that work will get implemented.
I’m the author of the post. I hadn’t heard of Migadu before, but it almost looks like it would work. The only issue is their Micro plan ($19/year) only allows 200 inbound emails per day. I guess that may not be an issue most of the time, but there are days where I receive more than 200 emails. The inability to control how much inbound email you are receiving makes me hesitant to use such a service.
I also use migadu for a family account. The thing I really like about it is that you can add as many accounts as you like for your domain: this@my.domain, that@my.domain, theother@my.domain, it’s so nice.
But yes, the 200-in limit has been my concern too. I subscribe to a few mailing lists and have a worry that one day some big heated mailing list conversation will put me over. This FAQ answer suggests they are lenient and that emails won’t actually be lost (assuming senders follow correct email practices!), but my tentative plan has been to wait and see if I ever get a warning email, and upgrade to the next tier if it becomes an issue. It hasn’t so far, after a year or so.
For what it’s worth, it’s not a hard limit. They won’t block the 201st email — if it’s a recurring thing, they’ll ask you to upgrade. This is mentioned in their docs, somewhere. cc @jeremyevans
I checked and it is in their documentation. So maybe that would have been a simpler option. I might have switched to Migadu instead of using a VM if I had known about it first. I think the only issue is the next level up from the $19/year plan is the $90/year plan, which is a pretty significant jump. But for someone who isn’t likely to go over their limits, Migadu looks like a really nice option.
Which email client(s) do you use? Last time I checked, Thunderbird doesn’t put design thought toward this use case. As such it is clunky to use for sending emails from different addresses.
I’m on Evolution now, but always looking for better options.
I primarily use mutt, which I have configured with 4 different email accounts: 1 work, 1 gmail, 2 migadu. So I don’t actually send from different addresses exactly (although I think that is easy to do in mutt), but have commands which switch me completely to a different account and inbox.
But what I meant about migadu is not that they give you multiple email addresses to send to and from within your domain, but that they let you add as many accounts as you like within that domain. So my daughters get their own email addresses and passwords and can sign into them on whatever mail client they like. And I can give these out to as many of my family as I like (the domain is a play on our surname), as long as I don’t hit the 200/20 limit.
Thanks for posting your setup. I’ve been sniffing at things adjacent to this for a while, looking at some other providers for SMTP. mailroute was the one that had looked most promising, but their definition of a “user” would have had me paying heavily for all the aliases I use, so I had not made the jump yet. Tuffmail’s EOL is going to force my hand.
Right now, I’m deciding between Migadu and a setup similar to what you’ve done. I had almost given up on the self hosted setup. Sendgrid could work for me, though. My only heartburn about it is, if they decide to kill off their free plan, it’s a huge jump up to $15/mo while I work out an alternative. Where I’d be flirting with the 200 in/day limit on Migadu, the jump up to the next tier isn’t as nasty if I need to do that.
go file1 file2 … is a little wrapper to open multiple files/dirs with xdg-open. I'll probably rename this script if I ever have to install Golang.
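Probably little more than this (a guess at the implementation):

```sh
#!/bin/sh
# open each argument with the desktop's default handler
for f in "$@"; do
  xdg-open "$f"
done
```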
h mycommand tries, in order, to open the man page, run mycommand --help | less, or mycommand -h | less. Basically “I want to read your help page, dammit”, with a side order of “in a pager, please, don't barf it straight into the terminal”.
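A plausible shape for h (assumed implementation; man -w checks whether a page exists without displaying it):

```sh
#!/bin/sh
cmd=$1
if man -w "$cmd" >/dev/null 2>&1; then
  man "$cmd"                 # man pages for itself
elif "$cmd" --help >/dev/null 2>&1; then
  "$cmd" --help 2>&1 | less
else
  "$cmd" -h 2>&1 | less
fi
```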
addkernel installs the active Conda environment as one of my user's IPython/Jupyter kernels. This makes it visible from any Jupyter installation, specifically my main one. This, in turn, lets me maintain one virtualenv with a Jupyter installation + extensions, instead of one Jupyter installation per {conda,virtual}env.
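The core of addkernel is presumably a thin wrapper around ipykernel's installer (the name handling here is my assumption; ipykernel must be installed in the active env):

```sh
#!/bin/sh
# register the active conda env's python as a per-user Jupyter kernel
name=${1:-$CONDA_DEFAULT_ENV}
python -m ipykernel install --user --name "$name" \
    --display-name "Python ($name)"
```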
When Freenode went to shit, I left IRC. WeeChat was my client of choice. Glad to see it’s still under active development. It’s such a great client.
IRC on libera is just like IRC on freenode, just slightly better. Things like getting cloaks are smoother and well automated. I think the shift was a good opportunity for the libera admins to organise some things and they did (and do!) a great job.
Group and cloak registration was completely non-functional on Freenode for years. I only got a proper group and a cloak for my project after migration to Libera.
My client of choice is irssi, though. Nothing against weechat, it’s just not my thing.
I’ve been using Migadu for a few years now; they’re great. The best thing about them is that I always get very quick replies from their support teams when needed.
The pricing looks great for individual use, but I’m a little concerned about the limits for incoming and outgoing mail.
Are you on the Micro plan? Have you ever exceeded the limit?
I’m on the Micro plan and never come close to the limits. YMMV.
Me too, and I am subscribed to a few mailing lists.
I host the email for a few dozen accounts on their largest account, and it works smoothly. The webmail is okay. No calendar integration or the like, which was a pain point for a few of the users when I migrated from an old GMail service when Google decided to start charging for it.
Their support really is excellent.
This might not be what you’re looking for but they do have basic CalDAV support. No web interface for this though.
I wonder what caldav server they use.
I believe it’s sabre/dav.
Best thing to do if you’re considering it is to look at how much mail you’ve sent and received in previous months. I think there’s a half-decent Thunderbird add-on that’ll summarise that information for you if nothing else.
Also, if you’re keen on moving but are pushing the limit on sends then remember that there’s no reason you need to always use their SMTP service! I often use my ISP’s (sadly now undocumented) relay and never had any bother.
I had to bump up to the mini plan to accommodate a family member’s small business running under my account. It was painless.
Yes, I’m on the Micro plan, and I haven’t come close to the limits. If there were one day when I exceeded the limits I’m sure they wouldn’t mind; if I had higher email flux in general though I’d be happy to pay more.
I just started using them for some things, and the amount of configurability they give you is crazy (in a great way, that is). I’m going to move all of my mail hosting to them some day.
only bad thing is their web portal - it doesn’t remember logins and the search is slow / dysfunctional
otherwise i love migadu and will always sing its praises
As in the web mail interface? I assumed that was more of a toy/demo since I use IMAP.
I work in a context that only exists because copyright laws protect creators and enable them to create on a professional and not hobby basis. So I’m in no way a believer that copyright is inherently evil.
But archive.org is my only way of accessing the vast amount of literature (fiction and non-fiction!) between the end of the public domain (1920s) and, say, about 2010. It is really disingenuous for publishers to claim that they are losing money from these works being available, because they are not publishing them themselves.
I would like to share the books I read and loved as a child with my children. Often archive.org is the only way to do that.
I would have a lot more sympathy for the publishers if they provided a blessed legal version of archive.org with their entire back catalog (unedited!). I would in all seriousness pay for access to that.
I used to work professionally as a musician. In theory, copyright law protected my livelihood; in practice, copyright was how various thugs would shake down music venues for royalties, and often I would be told by venue owners/managers that we needed preapproval on every song we played.
Copyright is inherently unfair; some artists will get lucky and their particular expression of a popular idea will be legally protected, while most artists will be ineligible for the same protection.
The law in this case only holds up if the person who owns the copyright has enough money and time to pursue enforcing it.
When we talk laws and rules, I think that point gets glossed over. Lessig (1999) identifies four elements that regulate behavior (online): Laws, norms, markets, and technology.
In the case of copyright I feel like the market piece dominates the rest as the major players have so much money invested in the system.
You probably want to preface that with IANAL.
Pretty sure that’s self evident from the post.
I would love to see an update to copyright law that requires publication (ideally with no DRM that in any way impedes fair use) to protect works. It annoys me immensely that companies can simultaneously refuse to sell something to potential customers at any price and claim lost revenue when those same people acquire copies elsewhere.
It would be difficult to implement well. A publisher could easily withdraw something from publication long enough for copyright to lapse and then reintroduce it without having to pay the author, but I can imagine that, with sufficient safeguards, a rule that copyright lapsed three years after last in-copyright publication would be implementable.
I would rather see a reduction back to a more reasonable term for copyright length. 120 years is way too long in my opinion.
The original Queen Anne statute had a term of 14 years. I proposed life of author + 15 years a few months back and was blasted by a copyright abolitionist for my pains 😉
Another, related issue is that once copyright is equated with property, it can be sold and signed away to entities with much wider powers than an author or a composer, and the work is no longer a cultural artifact but an investment item.
I don’t want to abolish copyright, but I wouldn’t even go as long as life+. Maybe the 28 years the US used for a long time. It seems to me that long copyrights lose their original purpose and just become a way to seek rents.
Just had an interesting idea for a novel. In a world where copyright is limited only by the lifetime of the author, building up a valuable enough body of work might put quite the price on one’s own head.
For books, at least, the vast majority of the revenue is made in the first 7 years (and more than half of that in the first two) typically. This changes a bit with things like TV or film adaptations, which take a long time to organise. I could imagine copyright permitting redistribution of an original work after 10 years and permitting derived works without a license after a bit longer.
I think one of the big problems with copyright at the moment is that it isn’t tailored for domains. Software, for example, is completely obsolete when it comes out of copyright (the first ever copyrighted program is still under copyright). I doubt anyone benefits from, for example, Windows NT 4 or MacOS 9 still being under copyright, but the fact that they are makes (legally) emulating environments to run old software difficult.
That makes sense. To retain a trademark you have to use it and take efforts to defend it. It makes sense to me to have a “use it or lose it” to other intellectual properties.
I think the major problem would be encoding that into a law that’s enforceable and not easily gamed. We don’t want publishers doing 10-book runs every 5 years and throwing them in the landfill to retain their rights.
Interestingly enough, one of the reasons George Lucas got toy rights back for Star Wars was that part of the license that was originally sold off said toy publishers had to make toys every year, and they neglected to do that. (paraphrasing)
You could have full rights return to the author on the expiry of three years, who would then have three years to fulfill the publication requirement. And maybe a provision for an author to provide notice that they’re going to publish before the publisher’s three years are up.
Just to add to that, re-issuing books is not always the straightforward affair it’s sometimes made to be.
One of my favourite things in this world, which I’ve always carried with me when I moved, is an obscure children’s book I’ve had since I was a child. It has large, page-sized illustrations scattered throughout it, like most children’s books have. Thing is, it was published at a time when things were… expensive where I’m from. So the illustrations aren’t coloured, they’re sort of in the style of 1940s Walt Disney newspaper comics. That was obviously not a problem for me, or most other children my age: we had crayons, so we all coloured our illustrations.
I can’t quite describe how popular this was. 30+ years later we still ask “what colour did you make the balloon?” and we all remember what colour we made it. The book was a hit among children our age, it was fun and easy to follow along. And, while not a colouring book, and very much a “serious” book otherwise – it’s literally a novel! – the fact that we could “contribute” to it to some degree was a huge part of the experience. For many people my age, this book was the gateway into reading, the first “long” book they’d ever finished, the first book they’d read cover to cover more than once and so on.
Anyway, this book was republished a few years ago, which I learned from a friend of mine who’d bought it for her child, along with a pack of crayons. However, in keeping with the times, the editor decided to republish the book with full-colour illustrations, which she only realized when they unwrapped the damn thing and her child flipped through it and asked what he was supposed to do with the crayons now. Presumably someone in marketing thought the children might get bored or whatever. The book is still fun but… it’s not quite the same book, you know?
Thing is, an uncoloured copy of it is not easy to get these days. One pops up in used bookstores every once in a while, but precisely because it was so popular, and especially after it was reissued in a slightly different format, it’s crazy expensive.
But what is the book??
Can you share the title of this wonderful book?
@nickspoon’s question sent me down the Internet rabbit hole last evening. I had no idea it had ever been published in English, I could’ve sworn it was mostly an Eastern European thing!
Given that’s the case, and based on how many people voted both your and nickspoon’s question, I think it’s worth sharing – the English translation was published as “The Adventures of Dunno and His Friends”. archive.org actually carries a 1980 English edition that’s very similar to mine, which is a local edition – it has a handful of full-colour illustrations but most of them are black and white. Some modern editions coloured some, or all of those black and white illustrations, too.
I think it was published pretty much all over Eastern Europe; I know people from Albania, Poland, and the Czech Republic who’ve read it. I’m not sure if it was equally popular all over Europe and across all generations. It was hugely popular in my generation. Basically everyone I know in my age group (35-40) has read it.
Two huge disclaimers: 1) this is a book from a very different era, and from a very foreign place. Some of it may have aged poorly, especially if read by someone who doesn’t know its original cultural context – not to the point where it’s offensive, I think, but nonetheless, 2) please don’t take this as my endorsing anything about it other than that it was fun for young kids to read back then.
While I don’t disagree with that statement, I don’t think the inverse is true, that having no copyright would prevent a professional basis. Another facet to this is that copyright can certainly also prevent creators from doing the same thing, in many different ways.
I also think large parts of culture that creators rely on only even exist because copyright for some reason (time, waiving, etc.) didn’t exist or was broken.
Social networks succeed by membership and use, and they catch on by being discussed. How the heck do you even pronounce this word? How does a listener know how to spell that sound, in order to find it and join? These are aspects of a product, especially of a social product, that ought to be foolproof.
“Nostr” doesn’t strike me as more difficult to pronounce than “Tumblr”, for example.
“tumbler” is an existing English word, and the “blr” part of Tumblr can’t really be pronounced in any other way in English. “Tumblra” doesn’t work, for example.
Nostr isn’t close to any existing word, and could as easily be pronounced “noster”, “nostra”, “nose-ter”, “no-sitter” etc.
Yep.
Tumble is a pretty normal English word, to this non-native speaker.
Nostr makes me think of Latin (Nostradamus, Nostrum), and how the (probable?) English pronunciation would be different than many other parts of the world.
I’m going for ‘nostril, without the il’.
Very thorough article! I’d be interested in an updated version.
3 years isn’t that long but in that time LSP has “happened”, so I would rather use an LSP server for actual code editing so I get a consistent experience with editing other languages. I should be able to edit lisp using the same LSP client (vim-lsp, ALE, neovim native LSP etc.) and therefore the same mappings I use in e.g python and typescript, to get completions, go to definition, find references, semantic highlighting (although this is presumably much simpler and easier to do with regex in lisp than lots of other languages).
The really interesting stuff (to me) in the article is about the REPL integrations and debugging, so having that decoupled from the editing support would be very interesting.
Even worse, Lenovo is claiming, rightly or wrongly, that this is a contractual obligation by MS. https://download.lenovo.com/pccbbs/mobiles_pdf/Enable_Secure_Boot_for_Linux_Secured-core_PCs.pdf
Got to give Lenovo some credit for how they complied, though. They could have made it so you would have to go through a complicated process of adding the keys you need to boot anything other than Windows. But they put a simple toggle switch to enable the most common case (you are booting a Linux distribution that uses the Microsoft 3rd party shim for secure boot).
Why are you disrupting the daily “Five minutes of hate for corporate America?”
To be fair, that’d be corporate China you’d be hating on.
Is M$ a Chinese company now?
Microsoft isn’t, but Lenovo is.
And the complaint is about M$ forcing their crap onto a HW vendor. I cannot imagine Lenovo doing this on their own. There is no money in that.
I mean, for all we know, Lenovo might be getting a good deal out of this as well – sure, there may be no money to be taken straight from customers for it, but that’s not the only source of money nor the only metric for a good deal. Just because it was a contractual obligation (if it was) doesn’t mean it was a bad deal or that Lenovo were forced to make it.
Not sayin’ you’re not right to suspect Microsoft being nasty here, just that I’ve found it best to distribute suspicion evenly (and in very generous amounts :-D) among corporate actors.
Well, I will say that assuming this was actually a contractual obligation, the culpability is on Microsoft, and Lenovo did the best they could. I’m not letting Microsoft off the hook at all here.
Because some tech people still work at M$ for some reason. It’s mind boggling how legions of people who dedicate their life to understanding technology end up creating walled gardens.
I don’t think it’s that mind boggling. Creating walled gardens is lucrative and M$ pays pretty okay.
Ok. One shouldn’t be surprised by people being willing to do questionable things for money. But this is literally working on something that they are probably going to have to work around one or two employers later.
Or is it different in the US? Is it like really simple to just call one of your friends at M$ and get them cooperating, unlike from EU?
The overwhelming majority of developers in the world will not be affected by this. It’s not like a .NET developer is going to go out of their way to replace the OS on their work laptop.
I have done exactly that and it’s great for me, and actually great for my company, to have at least one person who knows their way around a linux machine (my system dependencies better match cloud dependencies and build chains, if nothing else). And dotnet works really well natively on linux now.
I would hope that developers as a group are curious enough that this is pretty common. Realistically I’m sure that the “majority” will only ever use Windows but I like to think and hope it’s not the “overwhelming majority”.
I think you’re wildly overestimating how many developers will ever voluntarily try to run linux on a laptop (as opposed to a server).
While not fully representative, almost every year the Stack Overflow survey has Linux on personal machines around the 30% ballpark.
It’s not like M$ is somehow worse than all the other big tech companies. I’m sure one could quibble over details, but at a high-level, I’d argue they are all basically the same in terms of walled-garden love.
Sure, but M$ walling off the general purpose computer segment would affect me way more than Google with Apple claiming the phone segment. I make a living off Linux on my (and some of my clients’) productive general purpose computers after all.
And I am surprised that other devs don’t mind. I thought the idea was that we are the ones who could always walk away and disrupt as much as we’d like.
If this continues, we won’t be able to install Linux and Kodi on a NUC to provide 4k video for the occasional movie screening in our favorite coffee shop. It would have to be Windows. Meaning, among other things, no SSH remote administration, license fees, reboots in the middle of the movie and so on.
We’d have to buy either underpowered RPis or some unlocked industrial devices at much higher price point.
Or my aunt’s laptop. She can’t afford a new one, so she has an old Lenovo with Linux, making the device useful for a couple more years. Do we just throw them away once the Windows support ends? Or do we run insecure and/or slow devices?
I kinda like how things are now. The freedom of compatible hardware and software without artificial barriers is just more productive.
I generally agree with you, I think it’s a bad precedent and I don’t like where it’s going either.
Security being used to enforce stupid things is bad for everyone, not just us that want to run non-Windows.
The part I don’t really agree with is that Google or Apple’s or …’s influence is somehow not as bad, they are, just in different ways.
I think that in the past couple of years the general awareness of embedded computing skyrocketed. Thanks to Arduino, RPi and many others.
I believe that I will get my Linux phone eventually. Who knows what happens from there?
PC lockdown is a trend in the other direction.
For the record, I agree with the “unkind” assessment of my comment, but I was definitely not trolling.
I truly believe that people who help build these walled gardens should think about the harm they are doing instead of just throwing their arms up in the air: “it’s just a job”.
IMO this requirement is entirely reasonable. I think it’s more important for a company to provide strong defaults for the majority of its customers than to make things easy for competitors, and that shouldn’t change just because a company is the dominant player. Notice I said defaults, not forced restrictions. Booting an alternative OS is an esoteric task that only a tiny percentage of PC customers will ever do, and I think it’s fine to require such customers to tweak a security setting in their firmware, particularly since they don’t have to disable Secure Boot altogether. Defaults can never be perfect for everyone, but I think that having the strongest security by default for the majority of users is the right call here.
I doubt that it would be legal in Norway: lock-in mechanisms (innelåsende mekanismer), including missing compatibility, are unfair according to Forbrukertilsynet.
If you don’t know Forbrukertilsynet, they were the first to forbid DRM on music, for similar reasons.
Do they forbid DRM on video?
Regarding this article, I don’t think that this is a real lock-in mechanism. It is a simple toggle to enable the shim and also another toggle that can disable secure boot altogether. I don’t think it is a lock-in to require a few keypresses to change some settings when the default is arguably more secure.
Yes, video shouldn’t be any different. Famously, as DVD-Jon found out, it’s not illegal to break DRM. Note that this is specifically about bought music and bought video.
That sounds like a key point. But it also depends on their informational obligations: How many expectations are broken, and do you need to be Matthew Garrett to debug it? I don’t think I could have guessed “the 3rd party key” if I hadn’t read the news today. If you can’t do it with the knowledge that can be expected of a customer, well, that’s a dark pattern, which is also forbidden.
I should add that I feel the law should be similar to the concept where they are required to allow it to be easily turned off so the user can actually own the machine, while at the same time being required to fix any issues that come up for that device during its lifetime + some extra duration (so no disabling of the functionality ten years later).
So installing your own operating system doesn’t violate the warranty, and the warranty of a fitness for purpose includes fixing any issues booting up the device using third party software - so if you have a disk with debian on it that boots on a few other contemporary machines but doesn’t on this particular laptop - or even better multiple separate machines all being this laptop - then they are required to fix it.
I think EU might be slowly getting there with the sideloading mandate. I don’t think they will explicitly cover different OSes, but the wording might be loose enough for courts to pry it wide open. Also, FSFE is probably lobbying as much as possible to include the OSes explicitly.
I’m convinced they’re lobbying as well, but I’m not very hopeful. It’s going to include too many holes. If you think FSFE is lobbying, what do you think MS is doing? It’s been proven that they abuse their monopoly and that they don’t stop at “lobbying” but go straight for bribery and corruption.
EU might get the doors cracked a little. But wide open? I’m not hopeful.
As the article states though, turning the 3rd-party signing key toggle on by default would not weaken the security model.
I think requiring users to install their own security keys in order to install Linux would be unreasonable. But IMO, having to change one toggle switch in the BIOS is fine.
Yeah, I mean, slippery slope and all, but the first time I installed Linux on my computer like 20 years ago I had to fiddle with way more BIOS settings than a toggle switch. As long as I don’t have to recite incantations at the EFI shell, this just falls under the old “tinker with your machine until you get to the login prompt” dance that I do about twice during a laptop’s useful life.
The RPi 400 was still in stock recently — it’s a slightly-overclocked RPi 4 built into a keyboard. I bought one from SparkFun in April.
The 400 is worth it IMHO, relative to the plain rpi4. The whole keyboard acts as a passive heatsink. It outperforms the rpi4 with fancy cooling solutions. And it is cheaper than those rpi4s after you add the cost of the cooling solutions.
I hadn’t heard this before and had been looking into cooling cases for an rpi4. So I had a quick search and found this article, really interesting that yes, the rpi400 stays passively cooler than the rpi4 in an active Argon case. Awesome!
https://tutorial.cytron.io/2020/11/02/raspberry-pi-400-thermal-performance/
apart from the looks / space required, that is actually a good argument for taking it as a “homeserver”
Plus, where else are you going to find a keyboard with a raspberry key?
Here’s one that doesn’t have a pi built-in: https://www.raspberrypi.com/products/raspberry-pi-keyboard-and-hub/
Microcenter has those in stock locally here right now, too.
I started out with vim-wiki, then decided I only needed a very minimal subset of features and moved to vim-waikiki for simple and easy markdown formatted note management.
I have my notes in a git repo backing a man.sr.ht site, which is exactly that: a git-based wiki. It gives me an easy front end for reading notes in the browser. The downside is of course that there’s no remote note editing. Fine for my needs though.
Oh, I have quite a few!
For shell aliases and functions:
- vg: Shell function to open grep results directly in Vim using the quickfix. A bit of expounding here, in a small blog post. rg foo (with ripgrep) to simply view results, vg foo to use the results as a jumping point for editing/exploring in Vim. (A sketch of the idea follows this list.)
- A ton of aliases to shorten frequently used commands. Few examples:
  - gf to fetch, glf to review the fetched commits, then gm to merge. git pull if I’m lazy and want to automatically merge without reviewing.
  - glp to review new local commits before pushing with gp. Both glf (“git log for fetch”) and glp (“git log for push”) are convenient because my shell prompt shows me when I’m ahead or behind a remote branch: https://files.emnace.org/Photos/git-prompt.png
  - tl to list tmux sessions, then ta session to attach to one. I name tmux sessions with different first letters if I can help it, so instead of doing ta org or ta config, I can be as short as ta o and ta c.
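For the curious, here is a minimal sketch of what a vg-style function can look like, assuming bash or zsh, ripgrep and vim; an illustration of the idea, not necessarily the function from the blog post:

    # vg: run ripgrep and load the matches into vim's quickfix list.
    # rg --vimgrep emits file:line:column:text, the format vim -q expects.
    vg() {
        vim -q <(rg --vimgrep "$@") +copen
    }

With that, vg foo opens vim with every match of “foo” in the quickfix, ready to jump through.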
Also, “aliases” for Git operations that I relegate to Fugitive. Technically these are shell functions, but they exist mostly just to shorten frequently used commands.
- Instead of gs for git status, I do vs and open the interactive status screen from Fugitive (which, after a recent-ish update a few years ago, is very Magit-like, if you’re more familiar).
- vm to immediately open Vim targeting all the merge conflicts. The quickfix is populated, and I jump across conflicts with [n and ]n thanks to a reduced version of vim-unimpaired.
For scripts:
- First off, an easy way to manage my PATH scripts: binify my scripts so they go into PATH, binedit if I need to make a quick edit.
- ez: Probably my favorite one. A script to run FZF, fuzzy-find file names, and open my editor for those files. Robust against whitespaces and other special characters. I also have a short blog post expounding on it. (A sketch follows this item.)
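As a rough illustration of the ez idea (a sketch with assumptions: bash 4.4+, fzf installed, $EDITOR set; the real script from the blog post is presumably more robust):

    #!/usr/bin/env bash
    # ez: fuzzy-pick one or more files with fzf, then open them in $EDITOR.
    # --print0 plus mapfile -d '' keeps whitespace in file names intact.
    mapfile -d '' -t files < <(fzf --multi --print0)
    (( ${#files[@]} )) || exit 1          # nothing selected, do nothing
    exec "${EDITOR:-vim}" "${files[@]}"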
- watchrun: A convenience command to watch paths with inotifywait and run a command for each changed file. For example, watchrun src -- ctags -a to incrementally update a tags file.
- notify-exit: Run a command and shoot off a libnotify notification if it finishes (whether with a successful exit code or not). I have it aliased to n for brevity (using a symlink). For example, n yarn build to kick off a long-running build and be notified when it’s done.
- Its remote sibling, aliased to rn (using a symlink). For example, rn ian@hostname yarn build to kick off a build on a remote machine (within my LAN), and have it still notify on my laptop.
And a slew of scripts that are a bit more integrated with tools I use, e.g.:
I normally keep these in fixed locations, so everything I’ve accrued naturally over the years should be here:
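To make the notify-exit idea above concrete, here is a minimal sketch assuming notify-send from libnotify; the actual script likely has more polish:

    #!/bin/sh
    # notify-exit: run a command, then raise a desktop notification that
    # reports whether it succeeded or failed, preserving the exit status.
    "$@"
    status=$?
    if [ "$status" -eq 0 ]; then
        notify-send "Done" "$*"
    else
        notify-send "Failed ($status)" "$*"
    fi
    exit "$status"

Symlinked to n, this gives the n yarn build spelling from the comment above.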
I used to have a ton of these git shortcuts but at some point I threw them out again because I kept forgetting them. The only thing I use daily is git up, which is git pull --rebase.
I had a similar problem and only kept branch-history, which showed me the one line commit diff between my feature branch and dev.
notify-exit: that’s one of those ideas that is so great you facepalm and ask yourself why you never thought of it before. I’m adding this to my config tomorrow.
ez is great, thanks for sharing!
After I learned about “ci” in vim I got hooked. All of a sudden replacing text in quotes became as simple as ci” and now I’m having a hard time using other editors. Sometimes a little detail is all that it takes.
This was extremely helpful thanks.
Just to clarify for others: in vim, if you are on a word, “c” starts a change and the next keystroke determines what will be changed. For example, “c$” removes text from where the cursor is to the end of the line.
Now what is new for me is that vim has a concept of “inner text”, such as things in quotes, or in between any two symmetric symbols. The text between those two things is the “inner text”.
For example, in this line, we want to change the “tag stuff” to “anything”:
<tag style="tag stuff">Stuff</tag>
Move the cursor anywhere between the quotes and type ci then a quote, and you are left with this (| is the cursor, now in insert mode):
<tag style="|">Stuff</tag>
This is a good example of why to me learning vi is not worth the trouble. In my normal editor, which does things the normal way, and does not have weird modes that require pressing a key before you are allowed to start typing and about which there are no memes for how saving and quitting is hard, I would remove the stuff in the quotes by doing cmd-shift-space backspace. Yes, that technically is twice as many key presses as Vi. No, there is no circumstance where that would matter. Pretty much every neat Vi trick I see online is like “oh if you do xvC14; it will remove all characters up to the semicolon” and then I say, it takes a similar number of keystrokes in my editor, and I even get to see highlight before it completes, so I’m not typing into a void. I think the thing is just that people who like to go deep end up learning vi, but it turns out if you go deep in basically any editor there are ways to do the same sorts of things with a similar number of keystrokes.
There is not only the difference in the number of keystrokes but more importantly in ergonomics. In Vim I don’t need to hold 4 keys at once but I can achieve this by the usual flow of typing. Also things are coherent and mnemonic.
E.g. to change the text within the quotes I type ci” (change inner “) as the parent already explained. However this is only one tiny thing. You can do all the commands you use for “change(c)” with “delete(d)” or “yield(y)” and they behave the same way.
ci”: removes everything within the quotes and goes to insert mode
di”: deletes everything within the quotes
yi”: copies everything within the quotes
d3w, c3w, y3w would for example delete, replace or copy the next 3 words.
These are just the basics of Vim but they alone are so powerful that it’s absolutely worth to learn them.
Just a small correction; I think you meant “yank(y)” instead of “yield(y)”.
Haha yes thanks I really got confused :)
And if you want to remove the delimiters too, you use ‘a’ instead of ‘i’ (I think the logic is that it’s a variation around ‘i’ like ‘a’ alone is).
Moreover, you are free to choose the pair of delimiters: “, ’, {}, (), [], and probably more. It even works when nested. And even when the nesting involves the same delimiter: foo(bar(“baz”)) and your cursor is on baz, then c2i) will let you change bar(“baz”) at once. You want visual mode stuff instead? Use v instead of c.
This goes on for a long time.
One difference is that if you are doing the same edit in lots of places in your editor you have to do the cmd-shift-space backspace in every one, while in vi you can tap a period which means “do it again!” And the “it” that you are doing can be pretty fancy, like “move to the next EOL and replace string A with string B.”
Sublime Text: ctrl+f search, ctrl+alt+enter select all results, then type your replacement.
Yeah I just do CMD-D after selecting a line ending if I need to do something like that.
What is a command-shift-space? Does it always select stuff between quotes? What if you wanted everything inside parentheses instead?
You can do it that way in vim too if you’re unsure about what you want, it’s only one keypress more (instead of ci" you do vi"c; after the " and before the c, the stuff you’re about to replace will be highlighted). You’re not forced to fly blind. Hell, if your computer is less than 30 years old you can probably just use the mouse to select some stuff and press the delete key and that will work too.
The point isn’t to avoid those modes and build strength through self-flagellation; the point is to enable a new mode of working where something like “replace this string’s contents” or “replace this function parameter” becomes part of your muscle memory and you perform them with such facility that you don’t need feedback on what you’re about to do, because you’ve already done it and typed in the new value faster than you can register visual feedback. Instead of breaking it into steps, you get feedback on whether the final result is right, and if it isn’t, you just bonk u, which doesn’t even require a modifier key, and get back to the previous state.
It is context sensitive and expands to the next context when you do it again.
Like I appreciate that vi works for other people but literally none of the examples I read ever make me think “I wish my editor did that”. It’s always “I know how I would do that in my editor. I’d just make a multiselection and then do X.” The really powerful stuff comes from using an LSP, which is orthogonal to the choice of editors.
I do not disagree. For vim, as for your editor, the process is in both places somewhat complex.
Like you I feel I only want to learn one editor really well. So I choose the one which is installed by default on every system I touch.
For which I give up being able to preview what happens and some other niceties. Everything is a tradeoff in the end
In a similar way, if you want to change the actual tag contents from “Stuff” to something else, you can use cit anywhere on the line (between the first < and the last >) to give you this (| is the cursor):
<tag style="tag stuff">|</tag>
Or yit to copy (yank) the tag contents, dit to delete them etc. You can also use the at motion instead of the it motion to include the rest of the tag: yat will yank the entire tag <tag style="tag stuff">Stuff</tag>. Note that this only works in supported filetypes, html, xml etc., where vim knows to parse markup tags.
I really like that I keep stumbling on tidbits like this one that continue to improve my workflow even further.
All of Powershell, C#/.NET, Fennel and Janet, Love2D, Godot, Web Assembly come to mind for me.
I’m thankful for C#/.net for providing me a high paying career for the last 20 years. I’m even more excited about .net core, the fact that Linux is not a second class citizen, and the fact that I’ll hopefully soon be done with IIS.
Ha ha this is very very familiar. I’ve just completed migrating a huge codebase to dotnet core and running everything natively on Linux (instead of in a Windows VM) is so slick and fast.
Yeah, same boat here (though I’ve only been doing C# for 15 years now). Looking forward to my workplace moving over to .NET Core so that more of my tooling works with Emacs and I can use Visual Studio even less than I do now.
One suggestion is to rebind the default prefix (C-b) to C-z instead: while C-b moves backwards one character in the default shell config (and hence I use it all the time), C-z backgrounds a job, which I do rarely enough that typing C-z z to do so is perfectly fine.
I background+foreground jobs pretty frequently. So I need C-z to be free.
Personally I set my prefix to C-a. I think it’s usually used to go to the start of the input line in most shells by default, but I set -o vi in my shell so that doesn’t apply to me. A friend of mine sets their prefix to a backtick. Which I thought was interesting, but I like to use backticks now and then…
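For reference, the C-z rebinding suggested above can be tried from a running shell like this (the bind-key z line is an assumption about how the “C-z z” behaviour is wired up; the same commands, minus the leading tmux, can live in ~/.tmux.conf):

    # Move the tmux prefix from C-b to C-z, then make prefix + z pass a
    # literal C-z through to the shell, so jobs can still be backgrounded.
    tmux set-option -g prefix C-z
    tmux unbind-key C-b
    tmux bind-key z send-prefix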
C-a is a super common choice, as it’s the same prefix that screen uses by default. The screen folks, in turn, either had the right idea or it was a pretty lucky guess: C-a takes you to the beginning of the line, which is not needed too frequently in a shell.
On the other hand it’s the “go to the beginning of the line” in Emacs, too, so, uhh, I use C-o for tmux. I suppose it might be a good choice for the unenlightened vim users out there :-).
Another prefix binding that I found to be pretty good is C-t. Normally, it swaps two characters before the cursor, a pretty neat thing to have over really slow connections but also frequently unused. Back when I used Ratpoison, I used it as the Ratpoison prefix key.
I think C-i (normally that’s a Tab) and C-v (escape the next character) are also pretty good, particularly the former one, since Tab is easily reachable from the home row and available on pretty much any keyboard not from outer space.
I’ve no idea why I spent so much time thinking about these things, really. Yep, I’m really fun at parties!
Yeah, I’ve used C-o for screen since I guess the mid-90s because I couldn’t deal with C-a being snarfed, possibly because I was using a lot of terminal programs which used emacs keybindings at the time… Now I’m slowly moving over to tmux and keeping the C-o even though I rarely use anything terminal-based these days.
C-o is pretty important in vim actually. Personally I use M-t, which doesn’t conflict with any vim bindings (vim doesn’t use any alt/meta bindings by default) or any of my i3 bindings (I do use alt for my i3 super key).
Oh, I had no idea, the Church of Emacs does not let us meddle with the vim simpletons, nor dirty our hands with their sub-par editor :-P. I just googled it and, uh, yeah, it seems important, I stand corrected!
And in Emacs; I use it multiple times an hour, so unfortunately that is out for me.
I think that I have experimented with backtick in screen, after I started using Emacs. I have a vague memory of problems when copy-pasting lines of shell, which led me to use C-z instead.
I’ve used C-a in screen for ages and carried it over to tmux. Hitting C-a a to get to the start of the line is so ingrained now that it trips me up when I’m not using tmux.
I just use C-a, primarily because I moved to tmux from using screen, which uses that binding for the prefix.
Yeah, I found C-a much better than C-b, much less of a stretch, but eventually I started getting forearm pain in my left arm from all the pinky action. I’ve moved to M-t, most often using the right hand for alt and the left hand for t.
C-f
Unfortunately that is forward-char 🙂
the lesser of all evils
I use C-q. It’s extremely easy for me to hit the keys together, since I remap capslock to be control, and is quite comfortable.
This leads to fun when I accidentally kill my other applications, but at this point it’s ingrained so I don’t mess up.
Interesting! I might give that a shot. I do use C-q to quote characters, but not that often, only once or twice every couple of days.
I’ve read through this guide several times over the years and it’s great. Looking forward to the perpetually-TBD ConfigureAwait section!
By coincidence, I just exactly finished migrating a huge codebase from .NET Framework to .NET Core and got the front end running natively in linux today.
Having it run in linux is SO nice. My dev machine runs arch linux with Windows in a qemu VM, and the Windows VM runs builds and uses IIS for hosting.
However the Windows VM also sucks up a heap of memory and occasionally CPU cycles. So being able to leave it off is very very nice.
I code in vim, using OmniSharp-vim for C# language services. I actually could already do this with the .NET Framework codebase, using mono. So dropping the mono requirement is nice but doesn’t make much difference to me.
The BIG thing that I’m stoked about is that I can now debug in Vim, using vimspector with the Samsung netcoredbg DAP adapter. It works really really well. Debugging is the only reason I ever use Visual Studio any more and I’m so happy to be able to do more of that in vim. Visual Studio is a great debugger and I expect I’ll still use it for “heavy” debugging.
Thank you for sharing about Vimspector and Samsung netcoredbg DAP adapter. I’ll have to make time to investigate those. Debugging and Intellisense are the two main things that keep me using VS Code on Linux.
I’m not entirely sure what the state of Microsoft debugger licensing is, but the initial reason for looking into the Samsung netcoredbg adapter was due to this issue, where the MS debugger license said you were only allowed to use it in Visual Studio or VSCode. JetBrains made their own debugger in response to this and I suspect Samsung also made this debugger for the same reason.
any particular reason you still use IIS for hosting?
I’ve been using it for hosting .NET Framework stuff as it was what we’ve traditionally used in the company and it’s easier for me to work cooperatively with the team when my environment is not too drastically different from others’.
However my new setup with .NET Core is running its own Kestrel server with dotnet start, which is way easier and lighter.
What are some crustacean experiences with Treesitter?
I tried it out, despite the warnings about instability, for the refactor plugin. Got this Unity codebase to deal with, and nothing else really provided C# refactoring.
The refactoring wasn’t project-wide, which caused some woes, though I could repeat the operations a few times in separate files and be happy-ish. The dealbreaker is that Neovim would quite often deny insert mode, as if the buffer setting changed BTS somewhere. Restart, my favorite!
I don’t use treesitter ATM, maybe later.
LSP is also a mixed bag. Very useful and works nice on my heirloom-grade laptop, even for Unity, but on my desktop it buffers up syntax errors for nearly every keypress and flushes them out so slowly I can jump to Unity and run the unit tests by the time LSP’s done complaining. Sometimes an error is left lingering.
This is with the same config on both machines, and OS load is always less than you’d imagine, so it’s just weird. Even mono should be the same version.
No regrets; C# needs this stuff so bad it’s worth the trade-off.
For typescript it was a game changer. Syntax highlighting in vim never looked quite right, especially in a side-by-side comparison with VSCode.
I also find the neovim ecosystem to be rich and full of new plugins because of things like tree-sitter support. There’s an entire section dedicated to treesitter enabled colorschemes on neovimcraft: http://neovimcraft.com/?search=tag:treesitter-colorschemes
FWIW I hit the insert mode bug without treesitter. For the first time, even. Guess it was the wrong tree(sitter?) I was barking up, because of https://github.com/nvim-telescope/telescope.nvim/issues/82#issuecomment-854596669
Treesitter might be laggy enough or something to trigger the problem more often. Testing that fix now, and maybe I can give Treesitter another shot soon!
Could you expand on this? I would expect C# to be the language with best refactoring tools, given that both JetBrains (Resharper & Rider) and Microsoft (Roslyn) have been working on them for more than 10 years at this point.
Not if neovim is your goal. The solutions from jetbrains and ms are proprietary
This is factually wrong about ms.
Roslyn is MIT licensed, omnisharp makes it speak LSP. For refactors specifically, I’d be surprised if there isn’t non-editor tooling to apply Roslyn analyzers as project-wide refactors.
Sure. However actually using omnisharp is another matter. It’s definitely not on par with the experience inside of visual studio itself. For example vscode, which uses omnisharp, provides a pretty demonstrably second class experience compared with rider and visual studio proper. And even then that experience is better than what you get from lsp c# in neovim.
An example of this breakdown is that msbuild integration with visual studio is on a whole other level compared to what omnisharp can handle
What @Kethku said.
I did find this one refactoring plugin, vim-refactor, that pulled in a bunch of library-like stuff, less surprisingly conflicted with some of my maps, and didn’t do c# properly in the end.
That’s why Treesitter seemed so appealing. I’m sure it’s not such a turd as my experience was, but I also have work to do beside debugging 0d plugins ;)
I do really look forward to trying it out again, but I’m happy with my LSP setup (despite the error stuff!) and able to be productive now.
I still don’t understand something I believe. Roslyn is a very mature tool to do analysis and transformation of C# code. Using tree sitter for refactors (which knows only syntax) is like parsing html with regexes, only one level up: using syntax analysis for semantic transformations.
I would be surprised if no-one has glued Roslyn to vim, and indeed folks seem to have done that? https://github.com/OmniSharp/omnisharp-vim
If, despite all this, “refactoring C# in vim” is still an unsolved problem, then there’s an opportunity for a fairly easy, fairly popular OSS project here. All the hard work of actually doing refactors has been done by Microsoft. What’s left is the last mile of presenting what Roslyn can do in an editor.
Maybe my “repeat in multiple files” is because of syntax being the wrong layer? I’m really not 100% sure!
It was palatable, and got the job done, and the dealbreaker was something else.
I do have OmniSharp running, but I haven’t seen anything related to proper refactoring in Neovim with it. I could look again and verify I have the latest version, though I did take a quick look before my Treesitter journey.
It would be beyond awesome if more native refactoring existed, but I do believe it’s more of an opportunity than an existing feature.
Current OmniSharp-vim maintainer here. All of the OmniSharp-roslyn code actions are available in OmniSharp-vim. OmniSharp-vim doesn’t actually use LSP, as both OmniSharp-vim and OmniSharp-roslyn are older than the protocol.
However the OmniSharp-roslyn server does have an LSP “mode” and so is usable without OmniSharp-vim using vim LSP plugins or neovim native LSP. I would expect all code actions to be available via LSP too but I don’t know.
I’m curious to know what kind of server-based refactorings we’re talking about exactly.
Hi! And thanks for your maintainership! :)
My use case is/was simple: given a code base, rename methods so they’re renamed everywhere. The start of the rename can be a method definition or a usage, and everything will be figured out. Not sure if that’s possible, even, just a wishlist thing ;)
I haven’t needed to touch constructor (or other method) argument refactoring, and I’m sure the Treesitter refactor plugin has nothing for this, but could be useful.
In OmniSharp-vim you can do :OmniSharpRenameTo NewSymbolName on either the definition or a usage and it will be renamed everywhere in the solution.
Here’s a demo on the OmniSharp-roslyn codebase. The first 60 seconds are the server warming up (it’s a big solution):
https://asciinema.org/a/gd6GGiGGtcXjBsXoY0UKrqluB
TIL! Thanks! Now I got a config with treesitter that disables the refactoring prompt in c# in favor of an autocmd map for omnisharp :)
Edit: yet to see what needs to be reloaded to get rid of the “does not contain a definition (though it sure does lol!)” error, but still this is good news :)
Edit2: jumping the gun here, :LspRestart seems to be good enough.
Right, if you’re using LSP and OmniSharp-vim together, you’ll have to find ways of keeping the servers synchronised, because you’re running multiple instances.
Yeah, it’s a bit messy but most often only when autocompleting. I approach my neovim pretty much by fixing things as they start to annoy me too much or cause a productivity hit.
Maybe I’ll get that thing sorted some day, but I’m fine for now :)
I am intrigued by the framing of the Sturm und Drang about the state of the web as being driven, to some significant degree, by politics internal to Google.
As I stated earlier this week, promo packets are what’ll do in the web.
I think a lot of developers simply lack the interest or context to process the realpolitik that shapes and distorts the fabric of spacetime for our industry.
If you refuse to understand that Google’s whole business is threatened by adblockers, you probably would be confused at some of the changes to web extensions’ webRequest that make that work harder. If you don’t understand the desire to make SEO, crawling, and walled gardens easier, AMP probably seemed like a great return to roots.
Other companies do this too, of course. If you didn’t know about OS/2 Warp some of the Windows APIs probably seemed weird. If you don’t know about Facebook trying to own everything you do then the lack of email signup for Oculus probably seems strange. If you invested heavily into XNA you probably got bit when internal shifts at Microsoft killed XNA off. If you don’t know about internal Canonical and RHEL shenanigans, systemd and other things probably are a surprise.
Developers need to pay as much attention to the business dependencies as the technical ones.
What do you mean by promo packets? I’m not familiar with this term.
When you’re doing a performance review at Google, you can request a promotion. If you do this, you put together a ‘packet’ including the impactful work you’ve done. New work is rewarded heavily, maintenance less so. For senior engineers, shipping major projects with an industry wide impact is a path to promotion.
Which means Google rewards doing something new for the sake of doing something new. It’s tremendously difficult to get promoted by improving older systems. Crucially, you often need to demonstrate impact with metrics. The easiest way to do that is sunset an old system and show the number of users who have migrated to your new system, voluntarily or otherwise.
Ew. Thanks for the insight. But ew.
Is there any material evidence suggesting that someone’s promotion is the reason that chrome will remove alert? Obviously google will push the web in the direction that juices profit, but an individual promotion? Seems like a red herring.
I haven’t followed the alert situation very closely, but someone familiar with large organizations can get a good read from the feature blurb. There is a strong hint from the language that they are carrying a metric around security, and possibly one around user experience. This translates to an opportunity for a team to go and fix the issue directed by the metrics since it’s quantifiable. The easiest way to start might be to work back from what moves the metric, but this is a very narrow perspective.
Developers may know the best things to work on, having been a developer in that area for 10 years, but their impact tracks towards those top-level strategies. Management can’t justify promotion because someone else is very focused on metrics that drive the strategy.
In lots of places this is called alignment. Your boss may only support X amount of work on non-aligned projects if you do at least Y amount of work on aligned projects. A classic big company alignment example is a talented person in a support department. If they can fix your biggest problem at the source, it’d be best to let them do this. However, metrics incentivize assigning them to solving N support cases per week and other metrics designed for lower-skilled individuals instead of working on fixing root causes. Eventually, they leave, unless you have smart management taking calculated risks, managing the metrics at the team level so the team is not noticed working the way it wants, seeking paths for talented people to work on the product, etc.
Many of us understand how metrics and incentives at tech companies work. Was just pointing out that it’s a bold claim to assume that chrome is removing alert due to an individual seeking a promotion.
I think about this in terms of my time at Apple – like, people ascribed all kinds of causes to various seemingly peculiar Apple decisions that to those of us on the inside were obvious cases of internal politicking leaking out.
WHATWG is a consortium of multiple companies so I’m curious why everyone is pointing the finger at Google here, or is the assertion that Google has so much power over the WHATWG and Chrome at this point that there’s no ability for other companies to dissent? (And I mean we all know that the W3C lost and WHATWG won so a consortium of vendors is the web.)
The multiple companies are Apple, Google, Microsoft, and Mozilla (https://whatwg.org/sg-agreement#steering-group-member, section 3.1b) Of the three, only Apple develops a browser engine that is not majority funded by Google.
I’m pretty sure Apple develops a browser engine that is majority funded by Google: https://www.theverge.com/2020/7/1/21310591/apple-google-search-engine-safari-iphone-deal-billions-regulation-antitrust
That’s some pretty weird logic.
The browser engine Apple creates is used for a whole bunch of stuff across their platforms, besides Safari:
Mail, iMessage, Media Store fronts, App Store fronts. Those last two alone produce revenue about 4x what Google pays Apple to make it the default.
Do I wish they’d get more people using alternatives and pass on the google money? Sure. Is there any realistic chance their ability to fund Safari and/or Webkit would be harmed by not taking the google money? Seems pretty unlikely.
I don’t think the stores use WebKit. They didn’t last time I investigated.
It’s true-ish. But I’m sure the most profitable company in the world probably doesn’t require that money and would be able to continue without it.
You don’t become the most profitable company by turning down revenue.
Right I was just wondering if folks think the WHATWG is run solely by Google at this point. Thanks for the clarification.
The point is that many of those new APIs don’t happen in standards groups at all. Exactly because they’d require more than one implementation.
Yes, this. Google’s play here is less about controlling standards per se (ed: although they do plenty of that too) and more about getting everyone to treat Whatever Google Does as the standard.
WHATWG was run at inception by a Googler and was created to give Google even more power over the standards process than the hopelessly broken W3C already gave them. That they strong armed Mozilla into adding their name or that Apple (who was using the same browser engine at the time) wanted to be able to give feedback to the org doesn’t change the Googlish nature of its existence, IMO
Like it or not, Google is the www. It is the driving force behind the standards, the implementations (other than Safari), and the traffic that reaches websites.
It would be odd if Google’s internal politics didn’t leak into the medium.
Right, it’s just … one of those things that is obvious in retrospect but that I would never be able to state.
A lot of people seem to think that standards work is a bit like being in a university - people do it for the love of it and are generally only interested in doing what’s best for all.
In reality it’s a bunch of wealthy stakeholders who realize that they need to work together for everyone’s best - they’re not a monopoly, yet - but in the meantime it behooves them to grab every advantage they can get.
As mentioned in the post, standards work is hard and time-consuming, and if an organisation can assign a dedicated team to work on standards, that work will get implemented.
Universities work like that too now
This is sadly true.
FWIW, I’ve personally enjoyed email hosting by Migadu.
Same here. Switched after self-hosting mail for a couple years. Can absolutely recommend Migadu.
I’m the author of the post. I hadn’t heard of Migadu before, but it almost looks like it would work. The only issue is their Micro plan ($19/year) only allows 200 inbound emails per day. I guess that may not be an issue most of the time, but there are days where I receive more than 200 emails. The inability to control how much inbound email you are receiving makes me hesitant to use such a service.
I also use migadu for a family account. The thing I really like about it is that you can add as many accounts as you like for your domain: this@my.domain, that@my.domain, theother@my.domain, it’s so nice.
But yes, the 200-in limit has been my concern too. I subscribe to a few mailing lists and have a worry that one day some big heated mailing list conversation will put me over. This FAQ answer suggests they are lenient and that emails won’t actually be lost (assuming senders follow correct email practices!), but my tentative plan has been to wait and see if I ever get a warning email, and upgrade to the next tier if it becomes an issue. It hasn’t so far, after a year or so.
For what it’s worth, it’s not a hard limit. They won’t block the 201st email — if it’s a recurring thing, they’ll ask you to upgrade. This is mentioned in their docs, somewhere. cc @jeremyevans
I checked and it is in their documentation. So maybe that would have been a simpler option. I might have switched to Migadu instead of using a VM if I had known about it first. I think the only issue is the next level up from the $19/year plan is the $90/year plan, which is a pretty significant jump. But for someone who isn’t likely to go over their limits, Migadu looks like a really nice option.
It’s mentioned in the FAQ answer I linked to
Ah, didn’t notice you’d done that.
Re: using multiple addresses at the same domain:
Which email client(s) do you use? Last time I checked, Thunderbird doesn’t put design thought toward this use case. As such it is clunky to use for sending emails from different addresses.
I’m on Evolution now, but always looking for better options.
I primarily use mutt, which I have configured with 4 different email accounts: 1 work, 1 gmail, 2 migadu. So I don’t actually send from different addresses exactly (although I think that is easy to do in mutt), but have commands which switch me completely to a different account and inbox.
But what I meant about migadu is not that they give you multiple email addresses to send to and from within your domain, but that they let you add as many accounts as you like within that domain. So my daughters get their own email addresses and passwords and can sign into them on whatever mail client they like. And I can give these out to as many of my family as I like (the domain is a play on our surname), as long as I don’t hit the 200/20 limit.
Thanks for posting your setup. I’ve been sniffing at things adjacent to this for a while, looking at some other providers for SMTP. mailroute was the one that had looked most promising, but their definition of a “user” would have had me paying heavily for all the aliases I use, so I had not made the jump yet. Tuffmail’s EOL is going to force my hand.
Right now, I’m deciding between Migadu and a setup similar to what you’ve done. I had almost given up on the self hosted setup. Sendgrid could work for me, though. My only heartburn about it is, if they decide to kill off their free plan, it’s a huge jump up to $15/mo while I work out an alternative. Where I’d be flirting with the 200 in/day limit on Migadu, the jump up to the next tier isn’t as nasty if I need to do that.
Really sad to hear about Tuffmail. They were truly the best option.
I work with Git repos, but strictly through Mercurial + hggit. Templates + revsets are the best.
go file1 file2 … is a little wrapper to open multiple files/dirs with xdg-open. I’ll probably rename this script if I ever have to install Golang.
h myprogram tries, in order, to open the man page, or run mycommand --help | less, or mycommand -h | less. Basically “I want to read your help page, dammit” with a side order of “in a pager, please, don’t barf it straight into the terminal”.
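A minimal sketch of an h-style helper, assuming POSIX sh and that running the command with --help is harmless (the actual script may be cleverer):

    #!/bin/sh
    # h: read a program's help in a pager. Try the man page first, then
    # --help, then -h, piping the latter two through less.
    cmd=$1
    if man "$cmd" >/dev/null 2>&1; then
        man "$cmd"                    # man already pages its output
    elif "$cmd" --help >/dev/null 2>&1; then
        "$cmd" --help 2>&1 | less
    else
        "$cmd" -h 2>&1 | less
    fi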
addkernel installs the active Conda environment as one of my user’s IPython/Jupyter kernels. This makes it visible from any Jupyter installation, specifically my main one. This, in turn, lets me maintain one virtualenv with a Jupyter installation + extensions, instead of one Jupyter installation per {conda,virtual}env.
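The heart of an addkernel-style script is a single ipykernel invocation; a sketch assuming ipykernel is installed in the active conda environment (the kernel name and display name here are illustrative):

    #!/bin/sh
    # addkernel: register the active conda environment as a per-user
    # Jupyter kernel, visible to any Jupyter installation for this user.
    env_name=${CONDA_DEFAULT_ENV:?no active conda environment}
    python -m ipykernel install --user \
        --name "$env_name" \
        --display-name "Python ($env_name)"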
h! What a great idea, I’m doing that! Actually, I might call it ;h since I type this often by accident anyway, as a vim user who maps ; to :
nnoremap ; :
is an excellent mapping.If you have a PageDown key to fall back on,
nnoremap <Space> :
is also fantastic, and even easier to type. No drawbacks so far in my experience.