I personally use Bitwarden, which I would say fulfills all 3 points that you want, but I never tinkered with the SSH sync (although its privacy section says it’s part of how it syncs). If you want a simpler and lower-level alternative, pushcx’s advice of checking out pass probably works best for you.
Strong +1. Been using Bitwarden for 1.5 years now and it’s everything I hoped it would be.
Overall the experience has only improved. I’m sure it has a bright future.
Thanks for introducing me to this; it’s about what I am looking for. Time to ditch manually synced (& merged, after the inevitable forks) KeePass.
Can also recommend Bitwarden. I have not tried the desktop application, but the mobile version on Android and the browser extensions have worked without any issues for me so far, across different browsers and operating systems.
Edit: Apparently, I posted the same comment twice, my mistake.
It is indeed, but there’s a caveat with self-hosting it that irks me. Though apparently there are ways to work around it, as mentioned.
A few people, including me, have been able to code a client-compatible self-hosted version as well; doing so gives you a lot of insight into it, and trust in it.
https://github.com/vvondra/bitwarden-serverless https://github.com/jcs/bitwarden-ruby
I use Hugo, and I’m surprised how content I am with it. I switched from a Jekyll clone, and despite its quirks (mostly originating from its Go ancestry) it is pretty usable and also pretty fast.
What really interests me is: why would I not use S3 and CloudFront? Not that my blog, written in a language spoken by a tiny population, could possibly get overwhelmed with traffic, but my monthly fees are below $1, and the site could handle practically unlimited load, should it ever face it. Also no hassle with hosting, security upgrades, or SSL certificates. I used to host it on a simple DO instance, and that was totally OK, yet AWS is superior in every possible respect for serving purely static pages.
Instead of rsync, the AWS CLI can be used to sync the bucket; it’s incremental and also pretty fast.
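For anyone curious, the deploy step can look roughly like this (the bucket name and distribution ID are placeholders, not from my actual setup):

```shell
# Upload only new/changed files; --delete removes files that no longer
# exist locally, keeping the bucket an exact mirror of the build output.
aws s3 sync public/ s3://my-blog-bucket --delete

# Optionally invalidate the CloudFront cache so updates show up right away.
aws cloudfront create-invalidation --distribution-id EXAMPLE123 --paths "/*"
```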
I think it would be simple to use this tool in conjunction with S3/Cloudfront. I use Jekyll to build a local version of my site and then s3_website to push it to AWS. I like that the tool I use for building the site doesn’t tie me to a particular hosting strategy. (AFAICT the linked script only uses rsync to build the site locally in a target directory, not to deploy it to some remote host.)
Sure, it would be simple, just as simple as using rsync. I’m curious why anybody would run httpd and self-host given the drawbacks. Maybe I have a different use case in mind and that doesn’t let me see it, or it’s simply a matter of different preferences…
I don’t think httpd is for self-hosting. I think it’s for previewing things locally (similar to jekyll’s serve feature).
Thank you for your comments.
Yes, rsync(1) is just to copy source files (html, css, md, etc) to ssg working directory.
Yes, I use httpd(8) in debug mode (not like a daemon) locally, just for previewing.
Why httpd -d? It’s already installed on OpenBSD by default.
On macOS you can use python -m SimpleHTTPServer for the same purpose.
I ran into problems with SimpleHTTPServer because it has no concurrency: a single client can block everything. You can work around this with the threading mixin, something like: https://kdecherf.com/blog/2012/07/29/multithreaded-python-simple-http-server/
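(For what it’s worth, on Python 3.7+ the stdlib has this built in as http.server.ThreadingHTTPServer, so the mixin isn’t needed anymore. A minimal sketch:)

```python
# Python 3: ThreadingHTTPServer handles each request in its own thread,
# so one slow client can't stall the others the way SimpleHTTPServer could.
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler
import threading
import urllib.request

# Port 0 asks the OS for any free port.
server = ThreadingHTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
resp = urllib.request.urlopen("http://127.0.0.1:%d/" % port)
print(resp.status)  # 200: a directory listing of the current directory
server.shutdown()
```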
I’m curious why anybody would run httpd and self-host given the drawbacks.
We are talking about serving static files, for a personal blog (out of a disk cache)… what are the drawbacks again?
Even if you have only http facing the public internet you need to track the security reports. I found having to track CVEs for the few services I had on my machine too burdensome. Also you may need TLS, which also has its overhead, and hosting costs more than on S3 imho. If you need the machine for other purposes that may make the equation a bit different though.
All true. But of course, some people do this kind of thing as a hobby, or as part of their jobs. Others might find it a fun learning exercise, and even rewarding.
Oh, I totally missed that. My bad.
I abandoned the pet-server approach to have more of my limited free time devoted to my blog and creating content, as I already did enough ops at work.
+1 to Hugo, I’d say “pretty fast” is underselling how ridiculously fast it is (at least compared to other popular static site generators).
Re: why not S3+Cloudfront?
I started with this a while ago. The problem is you end up using something like Route53 to get your custom domain and TLS, which ultimately ends up costing you $2-3/mo per domain altogether, which adds up pretty quickly when you have a bunch of domains. Not to mention the ordeal of managing AWS and their massive dashboards and weird config syntax.
These days I use Github pages + Cloudflare for DNS/TLS in their free tier. If I were up for migrating again, I’d consider using Netlify which is great by all accounts and supports some basic dynamic forms that are handy for static sites (contact form, etc).
I agree that Route53 costs can add up, but if your DNS provider can serve “APEX entries” you can get away without it, if I recall correctly (or maybe you can use Cloudflare then?). My single-domain site costs <1€/mo (Route53 + S3 + CloudFront).
Regarding Netlify: recently I have seen some useful/impressive tools they have open-sourced, so I’d also consider their services.
Though the PROTOCOL.md is fairly light on details, it sounds a lot like S/Kademlia (PDF). Is this by design or just a coincidence?
Basically it’s the same idea that BitTorrent uses for the Mainline DHT (Kademlia), but with a “Secure” addition that uses node IDs generated from a node’s public key rather than randomly.
Very much by design. The chat layer is built on top of a networking layer that is basically a DHT of nodes. I based this part of the project on the design of the Mainline DHT outlined in BEP 5.
The network layer is self-bootstrapping, and self-healing. It provides a minimal interface to send a “packet” to any address in the network. It only guarantees that it will deliver that packet as closely to the intended recipient as it can. All the relaying, message acknowledging, etc lives in the chat layer on top of that.
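To make the “secure” part concrete, here’s a sketch of the general S/Kademlia idea (not this project’s actual code; the names are made up): a node’s ID is a hash of its public key, so a node can’t choose its own position in the keyspace, and routing prefers whichever known peer shares the longest ID prefix with the target, measured via XOR:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"math/bits"
)

// NodeID is derived from the node's public key (the "secure" part of
// S/Kademlia): you can't pick your own position in the keyspace.
type NodeID [32]byte

func idFromPubKey(pub []byte) NodeID { return sha256.Sum256(pub) }

// xorPrefixLen counts leading zero bits of a XOR b; a larger value
// means the two IDs are "closer" in Kademlia's XOR metric.
func xorPrefixLen(a, b NodeID) int {
	n := 0
	for i := range a {
		d := a[i] ^ b[i]
		if d == 0 {
			n += 8
			continue
		}
		return n + bits.LeadingZeros8(d)
	}
	return n
}

func main() {
	a := idFromPubKey([]byte("node-a-public-key"))
	b := idFromPubKey([]byte("node-b-public-key"))
	fmt.Println(xorPrefixLen(a, a)) // 256: a node is at distance zero from itself
	fmt.Println(xorPrefixLen(a, b) < 256)
}
```

Relaying a packet then just means repeatedly handing it to the closest peer you know about, which is why the layer can only promise best-effort delivery “as closely as it can.”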
Glad you could recognize the design. Thanks for looking!
Hope you update PROTOCOL.md once it stabilizes, would be fun to write a compatible client in another language. Chat apps are fun.
Definitely. I’d like to get a JS version of this out there, using websockets instead of UDP, so that nodes can run in the browser.
Do you mean WebRTC?
That’s kinda the challenge that IPFS has (which also uses an S/Kademlia variant). They have WebRTC nodes in their JS implementation, and normal TCP nodes for the Go implementation, and bridging the two is tricky as there’s basically no full implementation of WebRTC aside from the C++ one.
One fun tangent if you’re into it (something I looked into a few years ago): The IPFS DHT is actually fairly liberal with the kinds of nodes it’ll interface with and messages it’ll pass around. You might be able to graft onto it. (I chatted with the team about this idea, they were not opposed as long as the foreign clients were well-behaving.)
I really need to get around to writing my sum type proposal for Go.
Instead of introducing an entirely new feature, the idea is to tweak the existing features to support it.
The bare idea is simple: “closed” interfaces. If you declare an interface as closed, you pre-declare all the types that belong to it, and that’s it. The syntax could be something like
type Variant1 struct {..}
type Variant2 struct {..}

type Foo interface {
    // methods
    for Variant1, Variant2
}
You continue to use type switches (I love type switches) with these interfaces, except that the default case can’t be used for exhaustive switches (you can also enforce that in non-exhaustive switches).
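(For comparison, the closest thing expressible in Go today is the unexported-marker-method idiom: it closes the interface to one package, but the compiler can’t check exhaustiveness, so you still need an unreachable default. A sketch:)

```go
package main

import "fmt"

// Foo is "closed" by the unexported method: only types declared in
// this package can implement it.
type Foo interface{ isFoo() }

type Variant1 struct{ X int }
type Variant2 struct{ S string }

func (Variant1) isFoo() {}
func (Variant2) isFoo() {}

func describe(f Foo) string {
	switch v := f.(type) {
	case Variant1:
		return fmt.Sprintf("Variant1(%d)", v.X)
	case Variant2:
		return fmt.Sprintf("Variant2(%q)", v.S)
	default:
		// Under the proposal, an exhaustive switch would make this
		// case unnecessary (and the compiler would reject it).
		panic("unreachable")
	}
}

func main() {
	fmt.Println(describe(Variant1{X: 42}))
	fmt.Println(describe(Variant2{S: "hi"}))
}
```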
It would also be nice to lift the restriction on implementing methods on interfaces, and make it possible to run interface methods on explicitly-interface (not downcast) types. There was a proposal for that too…
Under the hood, these could possibly be implemented as stack-based discriminated unions instead of vtable’d pointers, though there might be tricky GC interaction there.
I haven’t really written this up properly, but I suspect it might “fit well” in Go and be nicer than directly adding sum types as a new thing.
I encourage you to make this suggestion on https://github.com/golang/go/issues/19412 which was recently marked as “For Investigation” which suggests someone is collating ideas.
+1, that’s an excellent thread with a lot of interesting insights about the constraints that the core language devs are battling with when introducing a new feature like this. It’s super long but I’ve found the discussion to be really informative.
I actually just wrote https://manishearth.github.io/blog/2018/02/01/a-rough-proposal-for-sum-types-in-go/ yesterday
But I don’t have the time/desire to really push for this. Feel free to use this proposal if you would like to push for it!
Yup!
Take care if you use Hugo: the default RSS template does not render the full article.
Here’s a modified one that renders the full article in the feed:
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>{{ .Title }}</title>
    <link>{{ .Permalink }}</link>
    <description>Recent posts</description>
    <generator>Hugo -- gohugo.io</generator>{{ with .Site.LanguageCode }}
    <language>{{ . }}</language>{{ end }}{{ with .Site.Author.email }}
    <managingEditor>{{ . }}{{ with $.Site.Author.name }} ({{ . }}){{ end }}</managingEditor>{{ end }}{{ with .Site.Author.email }}
    <webMaster>{{ . }}{{ with $.Site.Author.name }} ({{ . }}){{ end }}</webMaster>{{ end }}{{ with .Site.Copyright }}
    <copyright>{{ . }}</copyright>{{ end }}{{ if not .Date.IsZero }}
    <lastBuildDate>{{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" | safeHTML }}</lastBuildDate>{{ end }}
    {{ with .OutputFormats.Get "RSS" }}
    {{ printf "<atom:link href=%q rel=\"self\" type=%q />" .Permalink .MediaType | safeHTML }}
    {{ end }}
    {{ range .Data.Pages }}
    <item>
      <title>{{ .Title }}</title>
      <link>{{ .Permalink }}</link>
      <pubDate>{{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" | safeHTML }}</pubDate>
      {{ with .Site.Author.email }}<author>{{ . }}{{ with $.Site.Author.name }} ({{ . }}){{ end }}</author>{{ end }}
      <guid>{{ .Permalink }}</guid>
      <description>{{ .Content | html }}</description>
    </item>
    {{ end }}
  </channel>
</rss>
I probably should have included that in the article… Too late now.
I was about to answer laziness and realized this is not something that should be celebrated.
It’s now included.
Thank you for adding it, it will be handy for me when I go back to your post in a few weeks and look for how to do this. :)
The built-in Firefox Reader mode is a godsend. I feel much more comfortable reading long texts in the same font, page width, background color + the scrollbar on the right now gives me a pretty good estimate of reading time.
RSS, lightweight versions (light.medium.com/usual/url?), heck, even gopher: these do the job perfectly! We need these things.
People often include google analytics without really thinking about the privacy implications, just because publishing blind is so annoying I suppose. Is there a better alternative?
Well, there’s Piwik. I find it quite nice, though I’ve heard Google Analytics is in a league of its own. Wouldn’t know since I don’t use it for these exact privacy concerns.
You probably also punish yourself in Google search ranking by not using Google Analytics. Bummer.
Anecdotally, this seems to be the case, based on what I’ve seen playing with this on my own site.
Currently, if you search for “Benjamin Pollack” on Google, my blog is (usually, because Google) about third on the page. About two years ago, I noticed that it had suddenly and without any warning plummeted to almost the bottom of page one. Sometimes, it wasn’t even on page one, which was even worse. While I generally don’t like doing SEO, I didn’t really like not having my blog rank highly, either, and the sudden drop didn’t make much sense to me. So, I spent some time poking.
I knew I’d gotten some whining from Google about not looking good on mobile platforms and some other things, so I started there: gave the site a responsive design, turned on HTTPS, added a site map, improved favicon resolution, and some other stuff. But while those changes did help a bit on some other search engines, none of it really seemed to help much on Google. In frustration, I started looking through what I’d changed recently to see if I’d perhaps broken something that Google cared about.
Turned out, I had: while I’d used Mint in practice to track my site’s usage, I’d accidentally left Google Analytics on as well for quite some time. I’d caught that shortly before the rankings drop and removed it from my site. On a hunch, I added Google Analytics back in, and… presto, back up to roughly my old position.
I don’t actually think this is malice. I think that Google absolutely factors in the traffic patterns they see when calculating search results. In the case of my blog, their being able to see people showing up there based on my name, and then staying on the site, probably helps, and likewise probably gave them insight they might otherwise lack that I tend to have a few key pages that get a lot of traffic.
So, yeah: unfortunately, I do think you punish yourself with Google by not using analytics. For some, that might be okay; for others, perhaps not.
I don’t actually think this is malice. I think that Google absolutely factors in the traffic patterns they see when calculating search results.
Perhaps not active malice, but this is the exact sort of thing people mean when they say that algorithms encode values.
It may not be active malice, but it still has malicious effect, and it’s still incumbent upon Google to clarify, fix, and/or restate their values accordingly.
I knew I’d gotten some whining from Google about not looking good on mobile platforms and some other things, so I started there: gave the site a responsive design, turned on HTTPS, added a site map, improved favicon resolution, and some other stuff.
in what form did you receive the “whining”? as someone with an irrational hatred of the web 2.0 “upgrades” that have been sweeping the web, making fonts huge, breaking sites under noscript or netsurf, etc., i have been wondering about the reasons for this. like is there some group of PR people going around making people feel bad about their “out-dated” websites, convincing them to use bootstrap?
would motherfuckingwebsite.com live up to google’s standards of “responsiveness”?
For what it’s worth: When I worked on Google Analytics a few years ago, that was definitely not true. And I’d bet that it’s still not true and will never be true. Search ranking is heavily silo’d from the rest of the company’s data, both due to regulatory reasons and out of principle. Just getting the Search Console data linked into GA was a big ordeal.
Edit: Just did a quick search, here’s a more official statement from somebody more relevant: https://twitter.com/methode/status/598390635041673217, I’m pretty sure there were many other similar statements made by other people over the years too.
I understand if you can’t say anything but I’m wondering if there’s a different explanation for https://lobste.rs/s/3o3acu/decentralized_web#c_ltcs3n then?
I don’t work there anymore, so there’s no way for me to know for sure.
If I had to guess I’d say it’s a similar deal to the dozens/hundreds of “I spoke about X in private and now I’m seeing ads for X, so my phone/car/alexa/dishwasher is spying on me” stories. We’re really good at attributing things incorrectly.
The comment you link already mentioned various things that happened which likely ruined the ranking: Unresponsive design, no HTTPS, whatever else was wrong with it. The thing is, it takes time for ranking to get updated and propagate. Even if everything was fixed yesterday and the site got crawled today, it can take weeks or months for relative ranking in a specific keyword to improve. It’s very hard to attribute an improvement to any specific thing—all you can do is do your best across the board over the long term.
Some other possible things that might have gone wrong which the comment didn’t already mention: Maybe Mint was doing something bad, like loading slowly or insecurely or something else. Maybe some high-value incoming links disappeared. Maybe Google rolled out one of their big algorithm changes and the site was affected by some quirk of it (it happens fairly regularly, lots of rants about it out there).
[Comment removed by author]
what is so annoying about publishing blind? i am publishing this comment blind and it doesn’t bother me.
isn’t it easier to do nothing, than to do something and set up google analytics?
Eh, well, there are actually up/down vote buttons on your comment, so the tracking was already there for you. Likes and claps and shit… people want to see who’s seeing them.
I’m curious to see what the general consensus on Bitwarden is. I can’t find many articles about it outside of its own announcements. It looks like they have a bug bounty program? I’m a bit dubious of the fact that there’s just one developer.
I’ve been using Bitwarden for about a year now and it has been great. Works on all platforms and it’s open source—and people are actually contributing. The code looks reasonably clean, well organized, and the author is quick to respond and fix any problems.
While it’s not quite at feature-parity with 1Password, it’s well beyond an MVP password manager, and new features are added regularly.
Overall, I recommend it.
Very nice! Are you putting together a proposal to get some of this merged into x/crypto? (I saw some discussion on golang-dev.)
Very nice! Are you putting together a proposal to get some of this merged into x/crypto? (I saw some discussion on golang-dev.)
Hopefully yes, but the goal is to get people using this first
As much as I love your code, I’d be reluctant to use it without some audits. :)
Seems like a great step towards a proposal, though.
Hey everyone, thanks for dropping by the main ssh-chat server (chat.shazow.net). That one is running a fairly old version of the software (/uptime -> 13579h33m20.784837542s).
I just deployed the latest release on an east-coast server here, please come help test it:
ssh chat2.shazow.net
Somebody was mean and “fuzzed” it by shoving /dev/urandom into it.
Damnit lobsters, this is why we can’t have nice things. >:|
People are very welcome to fuzz their own servers. Binary releases are here: https://github.com/shazow/ssh-chat/releases
Looks like this is the PR thread that started this.
Hi, I’m [the] “Guy”. Yup, certainly was ready to blow. Emails like this every weekend asking when I’m going to merge their patches is what sent me over the edge. I’ve had enough, and so I’m throwing in the towel.
I hope he doesn’t start getting more comments like this. Guy has been giving away free work for almost 5 years (based on the git commit logs), and some random person (with an Amazon wishlist as their GitHub homepage link, no less!) comes out of the woodwork to inform him how he should feel. eye roll
I think that comment is pretty fair, and the maintainer was kind of a jackass in implying Guy was ruining his weekends by asking how he could improve the quality of his own freely donated labor.
I do find it hard not to hate any post that begins with “sigh” though.
Yeah, the sigh and the second sentence (“It looks like..”) were what chapped my caboose I think. The rest of it was pretty reasonable.
@itistoday — Your personal campaign against BitcoinXT is getting excessive, 4 submissions attacking it in the last 5 days.
You keep saying Mike and Gavin are spreading FUD, but all I see is your posts spreading FUD—sometimes in the same sentence.
I would kindly request for you to stop posting blatantly one-sided attacks on this topic. Perhaps /r/bitcoin is a better venue for that. :)
(If other lobsters disagree, feel free to downvote or share your dissenting opinion.)
I respect that you feel that way, and if you think I’m spreading FUD you should highlight it and point it out. Waving your hand and calling it all FUD is disingenuous (EDIT: I see you stealth edited your comment to add an example, see next reply). When I said Mike was spreading FUD, I substantiated it with reasoning.
Likewise, if you feel I’m being one-sided, feel free to point out anything that I’m ignoring or not addressing.
Finally, perhaps you missed the comment, but I have no intention of bringing this issue up anymore since the threat seems to have subsided and all that’s newsworthy has been said.
I appreciate your civil response. :)
See the links in my comment for examples. Saying that forking an open source project creates a potential for people to lose money is pretty much the definition of FUD. All the best FUD has some seed of truth but framed in a way to make it seem catastrophic. Any interaction with bitcoin creates the potential for people to lose money, that is the nature of the project. Even inaction creates the potential for people to lose money (arguably a greater risk).
Mike has done a good job debunking the majority of these arguments in his Medium articles here, if you’d like to read them: https://medium.com/@octskyward (let me know if you’d like links addressing specific points and I can dig them up or paraphrase).
In case any of this matters and it’s not clear, I’ll add some disclaimers/notes about me: I’ve been maintaining open source projects for a long time and I believe that this is healthy behaviour. I’ve been holding bitcoin since early 2011. I’ve explored the bitcoin protocol in-depth since reading the whitepaper and have built code that interacts with clients at the protocol level. I don’t believe that bitcoin will die whether bigger blocks get adopted or not (and I’m curious to see the outcome of either scenario), though from a technical perspective I suspect bigger blocks today is a safer move (less radical change than allowing full blocks and never-confirmed transaction overflow).
See the links in my comment for examples.
Oh, did you edit your comment to add that link? No matter. I replied (twice), giving two situations in which people could lose their coins. So far that’s just one example of alleged “FUD”, and it wasn’t FUD.
Mike has done a good job debunking majority of these arguments in his medium articles here [..] I don’t believe that bitcoin will die whether bigger blocks get adopted or not (and I’m curious to see the outcome of either scenario), though from a technical perspective I suspect bigger blocks today is a safer move (less radical change than allowing full blocks and never-confirmed transaction overflow).
… It’s as if you didn’t read anything here. ?
… It’s as if you didn’t read anything here. ?
Or, you know, maybe I took the time to understand how this stuff works in spite of a few dozen people screaming their heads off that the world is coming to an end?
Finally, perhaps you missed the comment, but I have no intention of bringing this issue up anymore since the threat seems to have subsided and all that’s newsworthy has been said.
I think more than “all that’s newsworthy” has actually been said.. cough ..but thank the flying spaghettini monster for small miracles, that it is ending regardless.
I wouldn’t have downvoted if you hadn’t invited me to, but if that’s how we’re collecting views this time…
I don’t really want to share an opinion, but I disagree; @itistoday’s campaign has been informative and helpful. To the extent that I’ve felt pressure to take a side, it’s been from the other one, but actually Lobsters has been pretty good at keeping this polite. Maybe my expectations are calibrated weirdly as a result of caring about the culture tag; I don’t know.
This post dismisses lightning.network without good reason for doing so.
Instead, it rushes ahead with an implementation that actually employs exponentially increasing block sizes.
There’s no reason for this level of drama, and the solution they chose seems to be a poorly thought out one given the existence of alternatives like Lightning, Sidechains, and Dynamic Blocksize Limit.
More to the point, their actions seem less consistent with an intention to actually solve the problem in an elegant manner, and more consistent with that of an attempted governance coup [1] [2].
There’s nothing necessarily wrong with a governance “coup” of Bitcoin (maybe Bitcoin needs a coup, maybe it doesn’t), but doing so under the pretense of a technical issue, when Bitcoins are at stake, is lame, misleading and reckless.
Technical issues are best solved with technical solutions, and preferably elegant ones that work. Of all the options available, an exponentially increasing block size doesn’t even make the list.
This post dismisses lightning.network without good reason for doing so.
That’s not true, he addresses it specifically and links to a more fleshed-out article he wrote on the topic in May. Quote with emphasis added:
The so-called “Lightning network” that is being pushed as an alternative to Satoshi’s design does not exist. The paper describing it was only published earlier this year. If implemented, it would represent a vast departure from the Bitcoin we all know and love. To pick just one difference amongst many, Bitcoin addresses wouldn’t work. What they’d be replaced with has not been worked out (because nobody knows). There are many other surprising gotchas, which I published an article about. It’s deeply unclear that whatever is finally produced would be better than the Bitcoin we have now.
It doesn’t seem like he’s completely opposed to a solution like the Lightning network, it’s more that he doesn’t believe that raising the block sizes is mutually exclusive with the implementation of an off-blockchain payment channel solution. The biggest complaint is that it’s not plausible that such a solution will be ready within a reasonable timeline, compared to raising the block size which is addressing a relatively imminent concern.
It doesn’t seem like he’s completely opposed to a solution like the Lightning network, it’s more that he doesn’t believe that raising the block sizes is mutually exclusive with the implementation of an off-blockchain payment channel solution.
Your interpretation of what he is saying is accurate. It would be nice if that’s what he was doing.
Raising block sizes by some amount is uncontroversial. Everyone is pretty much on board with that, including all of the core devs that I’m aware of.
Mike’s “solution” is not a solution but a massive problem for Bitcoin. At the risk of repeating myself: a hard fork, done in this way, will cause more problems than it fixes (Bitcoin will become way more centralized, the value of BTC may go down, people may lose coins, and the entire community will look even more ridiculous than it does now). And, Mike will have to change his code. That is not a maybe, it is a mathematical guarantee. Exponential growth must be stopped at some point, and an existing BIP proposal does that, but blockchain scaling should be more intelligent. Rushing is foolish.
The biggest complaint is that it’s not plausible that such a solution will be ready within a reasonable timeline, compared to raising the block size which is addressing a relatively imminent concern.
Fear not, I’m already familiar with all of your cited sources and how the network works.
I get that’s not what you think he’s doing, but he may not be doing the thing you described either. The reality is that the discussion about introducing larger blocks has been ongoing for a very long time. It started long before the average block size started creeping up near the limit. Unfortunately, despite numerous proposals, a consensus has not been reached, and thus the bitcoin-core repo is stalled on the matter (whether that stall is due to lack of consensus, ulterior motives, or disagreement in social psychology is another question).
Hearn’s fork is a forcing function. It’s like when Google released Chrome to put pressure on the stagnating browser features of the incumbents. It was never intended to become the top browser; it was intended to elicit a reaction from the competitors, which it did very efficiently.
I do not believe that he’s expecting to win the majority. If I were him, I would be expecting to force a decision by the bitcoin-core maintainers. A decision that I’d expect would be a more conservative but acceptable bump to the block sizes.
Yes, that thought did cross my mind as well.
That would be the most positive spin on what Mike is doing (that I can think of).
The thing is, even if that’s his plan, perfectly reasonable people can disagree whether or not that’s the best course of action.
I for one, don’t think that this method of forcing “progress” is the best one he had available to him, for a few reasons:
FUD and drama is bad enough, but actually creating the potential for people to lose Bitcoins is a whole other ballgame. He’s crossed a very real line at this point.
Starting to sound like FUD here too. :/
It also happens in centralized systems (PayPal, VISA) because of censorship. The whole concern with large block sizes is centralization, and when the limit is increasing exponentially you will get block sizes that only PayPal and VISA can handle.
I’m a total outsider, but he mentions that lightning has yet to be fully worked out, and says that those familiar with capacity planning in this domain are strongly urging rapid action. I don’t know how relatively hacky the alternatives are, but in the short summary you linked to it seems that none of them are without drawbacks.
According to him, SOME action is required in the short-term to stay alive. How much relative support do the approaches you mentioned have, and how soon could they be implemented to prevent short-term network breakdown?
According to him, SOME action is required in the short-term to stay alive.
For any common definition of “short-term”, that is complete and utter nonsense (read the “no reason for this level of drama” link).
How much relative support do the approaches you mentioned have, and how soon could they be implemented to prevent short-term network breakdown?
There is no short-term network breakdown to worry about.
Lightning.network is nearly finished (so I hear). Even if it weren’t, it’s not like it’s some sort of massive undertaking. I don’t know as much about the other proposals.
Anyway, if this were actually an urgent issue with real time pressure, the Bitcoin devs would release a hack to increase the block size, and it wouldn’t be Bitcoin XT. Exponentially increasing block sizes are not the answer to this problem; they are a problem in and of themselves.
I have a really simple “what” question which I haven’t seen answered by any of the many “why” posts…
Is this copying existing account balances, or starting from zero? If it’s copying them, as of what moment in time - has it already occurred, or is it in the future?
There are a lot of followup questions about currency deflation, financial incentives to do creative things, etc, but if it’s starting from zero, they don’t apply, of course…
“Fork” means that, if accepted by the 75% super-majority, it’s going to “split” from the existing ledger. At that point, balances on both forks would be identical, but transactions on the respective blockchains will diverge from then on.
If the fork goes through, then anyone still using the old blockchain will be getting data that is no longer accepted by the new blockchain.
Aside: The Bitcoin XT client is structured in a clever way such that if you do upgrade to using Bitcoin XT but the fork does not pass, then it continues to be in-sync with the original blockchain and there is no interruption. The interruption scenario only happens if you stay on the Bitcoin-core client.
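(If I recall the BIP 101 rule correctly, the 75% threshold is measured as 750 of the last 1,000 mined blocks signalling the new version. A toy sketch of that check, with made-up version numbers:)

```go
package main

import "fmt"

// supermajority reports whether at least threshold of the last window
// block versions signal support. Bitcoin XT's BIP 101 used 750 of the
// last 1,000 blocks, i.e. a 75% super-majority.
func supermajority(versions []int, signal, window, threshold int) bool {
	if len(versions) < window {
		return false
	}
	count := 0
	for _, v := range versions[len(versions)-window:] {
		if v >= signal {
			count++
		}
	}
	return count >= threshold
}

func main() {
	// Simulate a chain where exactly 75% of blocks carry version 2.
	blocks := make([]int, 1000)
	for i := range blocks {
		if i%4 != 0 {
			blocks[i] = 2
		} else {
			blocks[i] = 1
		}
	}
	fmt.Println(supermajority(blocks, 2, 1000, 750))
}
```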
Thanks. That’s useful detail.
So theoretically, for example, if both currencies wind up coexisting for some period of time, everyone who held BTC now also holds $new_currency. The value of a currency is in what can be bought with it, and the value of BTC isn’t going to disappear overnight; even in the 75%-acceptance case, there are going to be both individuals and merchants who either don’t understand how to migrate, or have chosen not to.
If BTC eventually dies in favor of $new_currency, that’s going to be “interesting” for everyone involved in BTC transactions post-fork. The mere possibility of it seems likely to trigger a panic, and possibly hyperinflation, doesn’t it? Obviously nobody who stops transacting BTC at the moment of the fork has a huge risk there, since they have $new_currency to fall back on, and they might reasonably assume it’ll be largely unaffected by whatever happens to BTC. I don’t see how one can double the total amount of money everyone has, and not have SOME destabilizing effect, and it’s honestly anyone’s guess what it’ll be.
If hyperinflation does happen, and goes fast enough, maybe nobody will set up a currency exchange, which might indeed largely isolate the two currencies from each other. That seems like a best-case scenario, and honestly not a pleasant one to have even the slightest involvement with.
I guess I don’t have a question; that all seems like it’s entirely intended, and does clarify why this is so acrimonious. People who hold or transact in BTC should probably be making contingency plans now.
Bitcoin XT is not really a new currency. Not in the same way an “altcoin” like Litecoin would be (which starts from a fresh blockchain). Bitcoin XT is actually taking advantage of bitcoin’s built-in upgrade mechanism.
While there is always a possibility that any kind of change (or even non-change) will trigger a panic reaction in bitcoin-land (that’s part of the charm of this lovely endless source of drama, right?) it’s unlikely that Bitcoin XT will have a volatile transition if it ends up “winning”. It’s really up to the miners to decide which blockchain to adopt, and to make the incorrect decision means allocating some amount of time mining a worthless blockchain. If Bitcoin XT starts to get substantial adoption, and a critical mass of miners start switching to Bitcoin XT, then the rest of the miners are very likely to follow suit.
It’s very unlikely that there will be any kind of sustained split between miners supporting two different blockchains. For this to happen from an economic perspective, the timing and cumulative decisions of everyone involved would have to coincide such that both chains are valued approximately equally, with the same number of supporters (note that this can’t happen here, since 75% is required for this upgrade). Only then would it not make sense to switch to one side or the other, which would be an extremely fascinating social and economic scenario to observe. The moment the tide tilts one way or the other, it becomes blatantly rational to migrate quickly to the winning blockchain (since the other blockchain is going to become obsolete).
This is not unlike the situation if we were to cut the internet connection between two perfect halves of the world, and then later reconcile the connections. For a while, there would be two alternate versions of bitcoin, but quickly one will reach critical mass and win.
If anyone stays on the old blockchain while all miners have migrated to the new blockchain, then the old-blockchain users will not get any transactions through; they’ll never be mined into a new block. Even if a minority of miners stays on the old blockchain, the difficulty level would remain so high that mining a new block would take forever.
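To put rough numbers on that difficulty point (a sketch: the 10-minute block target and 2016-block retarget window are Bitcoin’s real parameters, but the hashpower shares are hypothetical):

```go
package main

import "fmt"

func main() {
	const targetBlockTime = 10.0  // minutes; Bitcoin's design target
	const retargetWindow = 2016.0 // blocks between difficulty adjustments

	// Hypothetical fractions of total hashpower left mining the old chain.
	for _, share := range []float64{0.25, 0.05} {
		blockTime := targetBlockTime / share // minutes per block at the old difficulty
		daysToRetarget := blockTime * retargetWindow / (60 * 24)
		fmt.Printf("%.0f%% of hashpower: %4.0f min/block, next retarget in %4.0f days\n",
			share*100, blockTime, daysToRetarget)
	}
}
```

With only 25% of hashpower remaining, blocks take about 40 minutes each, and the next difficulty retarget is roughly 56 days away instead of the usual ~14.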
I understand what you (and apparently the Bitcoin XT community in general) are hoping for. Reiterating that anything other than the exact scenario you envision is “very unlikely” is not very reassuring.
I wouldn’t say I’m particularly hoping for anything. Either outcome is going to encounter a novel scenario with unknown outcomes.
If Bitcoin XT is not adopted and there’s a sudden rally in usage/price, then the maximum block size will get hit hard and transaction fees will start to climb. Meanwhile, an increasing subset of transactions will sit around in the pool of unconfirmed transactions. Maybe that pool will keep growing disproportionately, maybe economic forces will play out properly, or maybe there is a bug in the miner transaction prioritization heuristic and the network will get DoS’d.
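That pool dynamic is simple queueing arithmetic: once the arrival rate exceeds confirmation capacity, the unconfirmed pool grows without bound. A sketch with made-up round numbers (real throughput depends on transaction sizes and fees):

```go
package main

import "fmt"

func main() {
	// Both rates are hypothetical round numbers for illustration only.
	const capacity = 7.0 // tx/sec the network can confirm (ballpark for a 1 MB cap)
	const arrivals = 9.0 // tx/sec being broadcast during a usage rally

	backlog := 0.0
	for hour := 1; hour <= 3; hour++ {
		backlog += (arrivals - capacity) * 3600 // net unconfirmed tx added per hour
		fmt.Printf("after %d h: %.0f transactions waiting\n", hour, backlog)
	}
}
```

At a steady 2 tx/sec excess, that’s 7200 more unconfirmed transactions every hour, with no mechanism to catch up unless demand drops or fees price transactions out.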
In order for bitcoin to survive long-term, it needs to be able to handle both kinds of outcomes. Probably better this happens now than someday in an optimistic hypothetical future where even more money is at stake.
And finally, comment trees can be collapsed to ignore a conversation going off topic or whatever.
A suggestion: Collapse the root comment, not just the replies. Ideally into a short one-line preview. Collapsing is great for keeping track of what you’ve read vs haven’t, especially for very large discussions. Bonus points for client-side persistence.
I’ve been making my way through Michael Abrash’s Graphics Programming Black Book (free epub/mobi) in an attempt to learn a bit more about graphics programming. It covers a lot of the fascinating advancements in graphics during his tenure at id Software, but it’s not the most pragmatic read for modern graphics programming.
It’s a tough book, but a lot of fun. I think the reason why it’s not completely graphics oriented is because the first part was taken from his other book “The Zen of Code Optimization”. Later on in the book it focuses on graphics programming.
Yea I started reading kinda from the middle, around the BSP section, but now I’m branching back out. It’s growing on me.
If you’re interested in going further down the optimisation rabbit hole, have a look at “Code Optimization: Effective Memory Usage” by Kris Kaspersky. But if you’re more interested in graphics programming, I’ve found that games programming books go into more detail rather than graphics alone… depends on what you’re after.
This offers some interesting funding ideas that I like, but I don’t think it identified any actual problem (that I agree with). The closest problem statements I could see were these:
As we have moved to more and more niche tools, it becomes harder to justify the time investment to become a contributor.
…
The other problem is the growing imbalance between producers and consumers. In the past, these were roughly in balance. Everyone put time and effort in to the Commons and everyone reaped the benefits. These days, very few people put in that effort and the vast majority simply benefit from those that do. This imbalance has become so ingrained that for a company to re-pay (in either time or money) even a small fraction of the value they derive from the Commons is almost unthinkable.
In the abstract, these seem like interesting problems. But is there hard evidence that this is causing a serious problem in the proliferation of free software?
It seems to me like the problem being described here is one of power imbalance. Namely, that there is a small set of contributors and a large group of users. You might find this inherently disturbing, but what are its real world implications? Should I, as a programmer who contributes to free software, feel bad that there are people using it that don’t give back to my project? (I certainly do not!)
In the end though, it is a bleak landscape right now.
And this is where I’m like: huh? Free software is flourishing. Compare the rise and proliferation of code sharing today with ten years ago. There are vast networks of online communities collaborating—in the open—on free software for the whole world to use.
What exactly is “bleak” about today? Is there some credible threat to free software that is looming in the shadows waiting to destroy the free sharing of code as we know it today?
The threat is the persistent and pervasive burnout amongst people working on projects that are OMG-level critical to the tech sector. A lot of people are starting to step back from major projects like Python, Postgres, Django, Ruby, etc, and that’s going to have an impact. Most of the people leaving are the ones that feel like what used to be a hobby is now a full-time, unpaid job. If we don’t figure out a better way to support those people, we’re going to have a bad time.
I just wanted to pop up a level.
I feel like I support the message you intended to convey in the OP: let’s work on helping to fund free software contributors. That is a noble goal that is hard to disagree with. I thought that part of your post was pretty good. The problem I’m having is with your framing; frankly, you come across as an alarmist with the idea that free software is going to be in huge huge trouble unless we figure out some sort of funding for free software contributors. It really put me off to be honest.
In the last 18 months we have seen some of the issues caused by a lack of funding - Heartbleed exemplified the problems OpenSSL had been suffering from for years due to chronic underfunding.
OpenBSD nearly ran out of money to cover the cost of its electricity usage.
I think there are plenty of examples of open source projects lacking reasonable financial backing.
Unfortunately, I don’t have any bright ideas for solving this problem (but I do order OpenBSD CD’s twice a year :~])
Right… Maybe I misunderstood the OP. I wouldn’t have considered either of those projects as examples, because once they got into trouble, others stepped in to help out. To me, this seems like things are working great and that there’s no cause for alarm. From the OP’s tone/phrasing, I was expecting to hear about critical projects that had become completely defunct (none of OpenSSL nor OpenBSD nor PyPI fit that description).
Don’t you think it would have been much cheaper to prevent these problems than to scramble to fix them after the fact? Certainly, when people are burned out to the point of leaving a project, there is a huge transaction cost to someone else stepping in and getting up to speed, even if we assume that there will always be somebody willing and able to do so.
Of course! I’ve stated several times in this thread that I support more funding! What I don’t understand is the alarm.
The threat is the persistent and pervasive burnout amongst people working on projects that are OMG-level critical to the tech sector.
Who is responsible for this threat? Can you provide examples of critical open source projects that have become defunct (i.e., no longer useful) because of burnout?
A lot of people are starting to step back from major projects like Python, Postgres, Django, Ruby, etc, and that’s going to have an impact.
Can you elaborate? A lot of people take steps back from projects—not just major projects. Is there a particular reason why you think this is particularly bad today? And if people take a step back from these projects, is there some reason to believe that the slack won’t be picked up by other (new or old) contributors?
Most of the people leaving are the ones that feel like what used to be a hobby is now a full-time, unpaid job.
That seems like a perfectly legitimate reason to leave a project. Sometimes you lose your passion for a project. It happens, and not just in free software. Why is this a major threat to free software?
If we don’t figure out a better way to support those people, we’re going to have a bad time.
You really haven’t made a convincing case for why you think this is true. In particular, free software is flourishing in both quantity and quality, yet you seem to completely ignore this point.
Rubygems.org has had issues like ‘gems with native dependencies don’t install on windows’ open for 8+ months because it is effectively unmaintained due to the maintainers being burned out.
The critical vulnerability with YAML a few months back only happened because the maintainers had ‘investigate if that bug affects gemspecs’ on their TODO list but couldn’t find the time to do it.
AT&T deciding to get rid of their Ruby open source contributions has harmed Ruby and Rails immensely. The pernicious thing about issues like the ones in the blog post is that you don’t realize they’re happening until they’ve already happened. It’s difficult to quantify in the moment.
I think that OSS looks really great on the surface: you see more projects than have ever existed in the past, more companies using it, and just a general greater acceptance across the board. However, much like a family running up tens of thousands of dollars of credit card debt in order to “keep up”, if you look below the surface at the “financials” of OSS you’ll see that a frightening amount of really critical stuff is severely under-maintained, if it is maintained at all.
This, I think, is what coderanger is speaking to when he talks about the landscape. This problem actually gets a lot worse the more popular OSS becomes, if there isn’t also a large enough investment back into these projects by enough of their users. As OSS becomes more accepted, more people use it; as more people use it, there is much more demand, which places additional pressure on the maintainers of that software. They start getting more people submitting bug reports, more people demanding fixes, more people yelling at them when something doesn’t go their way, and I think for a lot of maintainers the project that used to be fun to work on in their spare time starts to become something they dread touching, because it brings with it feelings of guilt and anxiety and a constant need to be fighting fires.
In the end, a large number of projects, even well-written projects, without contributors or maintainers is a pretty bad outcome if we push enough of them away.
I think that OSS looks really great on the surface, you see more projects than have ever existed in the past, more companies using it, and just a general greater acceptance across the board.
I’m having a difficult time understanding why these great attributes of free software are being qualified with “on the surface.” Why are these surface level qualities? The increase in acceptance, quantity and quality of free software don’t seem like surface level qualities to me. They seem like deep and entrenched improvement. I kindly ask you to compare the state of free software today with the state of free software ten years ago. At least from my perspective, the difference and improvement is astounding.
I otherwise take your point though. I totally get that a really important project (like PyPI!) is critical to maintain. What I don’t understand is the alarm. As you described in another comment, you eventually couldn’t keep up with PyPI any more and companies stepped in to fund it. If they disappear, and PyPI stops working, do we think that some other company won’t jump in and foot the bill? I certainly think someone would. That seems OK to me.
We have no reason to suspect anyone else would fund it; it took us years to work out the current deals that keep things just on this side of failure. If Rackspace or Fastly pulled support tomorrow, we would have to start that all over again. We have contracts in place where possible to diffuse some of the risk, but it’s still a “nod and a handshake”-based mess. Rubygems is similar in a lot of ways; NPM has resources behind it from VCs. This is not a pretty picture.
That brings up a point that I don’t think was in your post: running infrastructure for some projects is a big chore, and typically getting funding for that is hard or impossible. Most projects I know run either out of basements or off a VPS somewhere, so if they get wildly popular they just fall over.
The best single example I can cite is Python packaging and its whole ecosystem. Just a few years ago, it was almost unusable due to years of neglect. PyPI was down frequently, and pip was difficult to install, slow, and very insecure. While it’s not the only factor that made things better, a huge part of the improvement was due to one man (Donald Stufft) and the fact that he has had financial support to work on packaging, first at 50% via Rackspace and now at 100% via HP. If he lost that funding, I have no doubt he would have to scale down his efforts, and given what happened before I would call that a critical issue to the Python community. We have no backup plan; if HP’s generosity runs out, the fallback is to just accept packaging being on a slow slide back into the dark.
I can be more explicit, I was working on packaging prior to funding from Rackspace or HP and I was heading towards burn out pretty rapidly. I was forgoing spending time with my family or doing anything else to try and find time to work on it because, while I’m not the only person, I’m one of (if not the) primary driving force currently. The funding from Rackspace and now HP has given me the ability to dedicate time to it, without forgetting what my family looks like. You can look at OpenSSL and GPG for similar situations. There are countless tools at varying levels of critical-ness to the infrastructure of organizations (or to the internet as a whole) that have little to no funding.
This sounds more convincing. It would have helped me interpret your OP more charitably with these examples in your post.
I still don’t think these examples warrant the level of alarm in your OP. It sounds like the system, as is, is working great. Under funded critical projects are getting attention after we notice they need attention. I personally don’t see that as a major problem in and of itself.
That’s fair. It is hard to see this from the outside sometimes. As someone with friends in more or less every major FOSS project, all I hear is a sea of discontent and burnout. As dstufft pointed out though, this has stayed well hidden for years. I think the Python and DevOps communities in particular are making huge strides in it being okay to talk about burnout in public, but a lot of it is still in hidden backchannels (-dev IRC channels, private Slacks, contributor-only mailing lists, etc). The saving grace so far is that each time someone has flamed out, another has stepped up to replace them. That’s a terrible way to get forward progress though, especially when you see the massive value companies are extracting from our collective work.
I see. That’s interesting. When you frame it that way, it seems like one of the problems you’re trying to address is to make it OK to talk about burnout. That seems like a great goal.
That’s part of it, but it’s also important that we all start realizing how much of our critical infrastructure is maintained as a side project, not as something full-time. I’ve seen this happen to a bunch of projects, including some of my own.
For my own part, I basically end up telling people I accept patches to my projects I’m burned out on, but in a world where the conversation included them wanting to hire someone to do the work, I can think of a handful of people I could propose as potential contractors with the right expertise. In general though, people have an attitude that precludes this for some reason. Generally if the subject of paying for a feature they need comes up they leave upset.
A friend spent a period working on his project by soliciting donations, and it basically fizzled after about 18 months - the donations from companies dried up, and that was the end of the road. Now he’s got a corporate patron, and it’s fine, but that’s still entirely too rare.
It’s worth considering that today much of open source is created largely by people in a very substantial position of privilege. People like myself who can afford to be self-employed or not even get paid at all for extended periods of time. Some maintainers are those who got lucky with an employer who permits them to spend some amount of their time on open source work.
And the effect? We in a position of privilege gain yet more privilege. Because of my open source work (and the ability to do it), I get way more interest from eagerly-hiring companies than any of my friends without a Github repo. I gain more public respect and recognition because I can afford to do this. It gives me the luxury of being a lot more picky.
Free software may be flourishing, but who are the maintainers and contributors who are flourishing with it?
Having more options for funding open source enables a greater diversity of people to participate, and I think that’s a good thing for both software and people.
I’m having a hard time parsing your central message. You’re saying more funding is good. Great, I agree. I’m taking issue with this idea that free software is somehow in a boat load of trouble today. As the OP says, it’s a “bleak landscape right now.” Huh?
[EDIT] See some of my other comments for more explanation. :-)
I’m not arguing that the end of the world is here today. It’s a social issue, like race/gender/income inequality. By ideal societal standards, the FOSS ecosystem is not in good shape (“bleak landscape right now” are not my words, but I can understand the sentiment). Probably far worse than corporate IT in general, which is not great to begin with.
Even at the most regrettable and embarrassing times in our society’s history, even during slavery, our GDP continued to grow. It’s dangerous to ignore systemic problems just because the metrics are going up and to the right.
I don’t know how to respond to this. Our perceptions of reality are just way too different. Ideal societal standards? Corporate IT? Slavery? GDP? Income equality? Holy moly.
Ideal societal standards? Slavery? GDP? Income equality? Holy moly.
Hm? Are those questions or just mocking? :/
They are questions of a baffled reader. I was inquiring: how is free software “bleak”? What is justifying all this alarm?
Instead, i’m met with a comment that strolls into a whole bunch of seemingly unrelated topics. What more can i say? At a certain point, i have to acknowledge that we’re speaking way past one another and cut the conversation short.
At a certain point, i have to acknowledge that we’re speaking way past one another and cut the conversation short.
Fair enough.
By the way, I’ve used your Go toml library, good work. I hope producing great FOSS work continues to be feasible for you. :)
Thanks - exactly my own feelings on these topics, but I’ve tried and never been able to say them very well.
Writing stripe-copy in Go, a cli tool for migrating Stripe objects between two different accounts.
v1.0 is almost ready, could use some testers if this is a thing that is useful to you.
Ultimately, I’d like for it to also let you import/export your account’s state into files, so you’ll be able to manage your Stripe plans and subscriptions and whatnot by editing a .yaml file or somesuch.
Very cool, I was just going to play around with execution modes next week too!
Any ideas if there are any gotchas regarding linking things that rely on Go’s runtime like goroutines?
None of this new stuff is really well documented at this point (so forgive me if I make a mistake), but as I understand it, the runtime is spawned on its own thread(s) the first time the library is called into, package init runs (but main is not executed), and then the called function is executed. The runtime will manage goroutines across OS threads (‘M’s in the runtime nomenclature, scheduled onto ‘P’ processor contexts) like normal, and these continue executing independently of Rust threads. I also have a hunch that each time Rust makes a Go call, some kind of thread context switch is required so that the function can be called on a new ‘G’ (goroutine), the same way it currently works when Go code makes a C call using cgo.
I did get a simple toy program running that used a channel returned by Go to receive integers and print them from Rust. I added the following Go functions:
//export CounterTo
func CounterTo(max int) <-chan int {
	c := make(chan int)
	go func() {
		for i := 0; i < max; i++ {
			c <- i
		}
	}()
	return c
}

//export RecvInt
func RecvInt(c <-chan int) int {
	return <-c
}
And here was the Rust code: http://sprunge.us/jUVi
All of the usual memory management caveats apply. Memory allocated by Go cannot be safely passed across the C boundary without retaining a reference to prevent it from being collected and causing a use-after-free. I made this mistake with my channel example (and it’s a good thing I had not pushed that code yet), since the counter goroutine would exit after the max was reached and the channel could be collected, making RecvInt unsafe to call on the Rust side.
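One way to work around that caveat (a sketch of a handle-registry pattern, not code from the post; the function names mirror the example above, and a real -buildmode=c-shared library would use C types via import "C") is to never hand the channel itself across the boundary. The foreign side holds only an opaque integer key, while a package-level map on the Go side pins the channel so the garbage collector cannot free it mid-use:

```go
package main

import (
	"fmt"
	"sync"
)

// registry pins Go-allocated values (here: channels) so the GC cannot
// collect them while foreign code still holds a handle to them.
var (
	mu       sync.Mutex
	nextID   int
	channels = map[int]chan int{}
)

//export CounterTo
func CounterTo(max int) int {
	c := make(chan int)
	go func() {
		for i := 0; i < max; i++ {
			c <- i
		}
	}()
	mu.Lock()
	defer mu.Unlock()
	nextID++
	channels[nextID] = c
	return nextID // only this opaque handle crosses the boundary
}

//export RecvInt
func RecvInt(handle int) int {
	mu.Lock()
	c := channels[handle]
	mu.Unlock()
	return <-c
}

//export ReleaseChan
func ReleaseChan(handle int) {
	mu.Lock()
	delete(channels, handle) // unpin: the channel may now be collected
	mu.Unlock()
}

func main() {
	h := CounterTo(3)
	fmt.Println(RecvInt(h), RecvInt(h), RecvInt(h))
	ReleaseChan(h)
}
```

The explicit ReleaseChan call shifts the lifetime decision to the caller, which is clunky but at least makes the use-after-free window deliberate rather than accidental.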
Another bummer for me is that these new execution modes only seem to be supported on linux/amd64. This is unfortunate in my case, as I had wanted to use this to integrate Go code into a .NET Windows GUI application. Since I believe Go 1.5 is in feature freeze at this point, something like this may have to wait until 1.6 or later.
Agree and disagree. Does your readme say “here’s a thing. maybe it works.”? That’s ok.
Does it say “my new whizbang framework kicks the pants off everything else because it’s been meticulously hand crafted for awesome!”? And it’s still shitty? That’s not ok.
I thought I’d post my standard readme, btw. :) I’m particularly proud of the “except for the parts that aren’t” wording.
Experimental Code
This source-code repository is published here in the hope that it will be useful or at least interesting, but is very likely to be in an unfinished state, and not necessarily a consistent one, either.
The sole representation is that (except for the parts that aren’t), this code is authored by me - Irene Knapp.
If the repository contains information about its own licensing, that information is normative; otherwise, all rights are reserved and it is provided for information only.
If you do something that requires me to write a better disclaimer, I will be very irate.
Ha, very nice. I’m considering borrowing just this part in mine:
Disclaimer: If you do something that requires me to write a better disclaimer, I will be very irate.
If you do something that requires me to write a better disclaimer, I will be very Irene.
Fixed it for you :)
What about “I will be a very irate Irene”?
I feel like “hand crafted for awesome” is a promise that the code will be a psychedelic experience, which is pretty much the opposite of a promise that it adheres to principles of quality engineering. :)
Reminds me of a JS lib I saw advertised as “agonizingly beautiful”. Because “agony” is the feeling you want associated with your software! :)
Here’s a thing, maybe it works: https://github.com/choongng/objc-tidy
I posted it earlier but in my exuberant rebasing I accidentally deleted the source (now fixed). The thing that would make me happiest is if someone put this functionality into clang-format but in the mean time if you have an Objective C codebase the combination of the two can help tidy things up.
I would love to see a follow-up post, three years later. I don’t really know the status of Neovim right now.
All of the points made in the post mentioned are still true.
Neovim is still developed actively and the community is stronger than ever. You can see the latest releases with notes here: https://github.com/neovim/neovim/releases
Vim’s BDFL ultimately caved and released his own async feature, which is incompatible with Neovim’s design that had been in use by various cross-compatible plugins for years (no actual reason was provided for choosing incompatibility, despite much pleading from community members). Some terminal support has also been added to recent Vim. IMO both implementations are inferior to Neovim’s, but that doesn’t matter much for end-users.
There are still many additional features in Neovim that haven’t been begrudgingly ported to Vim.
At this point, I choose to use Neovim not because of the better codebase and modern features and saner defaults, but because of the difference in how the projects are maintained and directed.
No, he didn’t. He didn’t cave. He was working on async, for a long time, with the goal of producing an async feature that actually fit in with the rest of Vim’s API and the rest of VimL, which he did. Did he probably work on it more and more quickly due to NeoVim? Sure. Did he only work on it because of pressure as you imply? No.
NeoVim is incompatible with vim, not the other way around.
Async in vim fits in with the rest of vim much better than NeoVim’s async API would have fit in with vim.
The whole point of NeoVim is to remove features that they don’t personally use because they don’t think they’re important. There are a lot of Vim features not in NeoVim.
Vim is stable, reliable and backwards-compatible. I don’t fear that in the next release, a niche feature I use will be removed because ‘who uses that feature lolz?’, like I would with neovim.
Where did you get this narrative from? The original post provides links to the discussions of Thiago’s and Geoff’s respective attempts at this. I don’t see what you described at all.
Can you link to any discussion about Bram working on async for a long time before?
Huh? Vim didn’t have this feature at all, a bunch of plugins adopted Neovim’s design, and then Vim broke compatibility with those plugins by releasing an incompatible implementation of the same thing, forcing plugin maintainers to build separate compatibility pipelines for Vim. Some examples of this are fatih’s vim-go (some related tweets: https://twitter.com/fatih/status/793414447113048064) and Shougo’s plugins.
I get the whole “Vim was here first!” thing, but this is about the plugin ecosystem.
How’s that?
Here is the discussion of the patch to add vim async from Bram, where he is rudely dismissive of Thiago’s plea for a compatible design (no technical reasons given): https://groups.google.com/forum/#!topic/vim_dev/_SbMTGshzVc/discussion
What are some examples of important features or features you care about that have been removed from Neovim?
The whole point of Neovim (according to the landing page itself: https://neovim.io/) is to migrate to modern tooling and features. The goal is to remain backwards-compatible with original vim.
Do you actually believe this or are you being sarcastic to make a point? I honestly can’t relate to this.
The vim vs. neovim debate is often framed in the style of Bram vs. Thiago, and the accusation against Thiago is typically that he was too impatient or should not have forked vim in the first place when Bram did not merge his patches. I have the feeling that your argumentation falls along similar lines, and I don’t like to view this exclusively as Bram vs. Thiago, because I value both Bram’s and Thiago’s contributions to the open source domain, and I think so far vim has ultimately profited from the forking.
I think there are two essential freedoms in open source,
Both of these happened when neovim was forked. There is no “offender” in any way. Thus, all questions of API compatibility following the split cannot be led from the perspective of a renegade fork (nvim) versus an authoritative true editor (vim).
It was absolutely 100% justified of Thiago to fork vim when Bram wouldn’t merge his patches. What’s the point of open source software if you can’t do this?
And as a follow up my more subjective view:
I personally use neovim on my development machines, and vim on most of the servers I ssh into. The discrepancy for the casual usage is minimal, on my development machines I feel that neovim is a mature and very usable product that I can trust. For some reason, vim’s time-tested code-base with pre-ANSI style C headers and no unit tests is one I don’t put as much faith in, when it comes to introducing changes.
Vim absolutely has unit tests.
@shazow’s reasoning and this post are what I link people to in https://jacky.wtf/weblog/moving-to-neovim/. Like for a solid release pipeline and actual digestible explanations as to what’s happening with the project, NeoVim trumps Vim every time.