I’m going to be very curious to see whether this actually goes anywhere. During the Bazaar retrospective, I remember Jelmer commenting that one specific feature of Bazaar—its ability to work transparently inside Git repositories—was a misfeature he regretted. I was a bit surprised by that at the time; the Mercurial community generally feels that better interop with Git would be great, and there have even been projects such as hgit (directly use the Mercurial UI on .git repos) and hg-git (use Mercurial to clone and work with Mercurial copies of remote Git repositories; this is also the track I took in Kiln Harmony) to try to achieve that.
(BTW, neither hgit nor hg-git are official Mercurial projects, but both were started by core Mercurial contributors, and the latter remains very actively maintained.)
I’m not personally convinced there is enough interest in Bazaar, or enough legacy Bazaar repositories in active use, to really justify maintaining it at this point, and I’m really unsure that there’s enough room in this space to launch a third DVCS island into the existing landscape. The ability to use Bazaar to work with Git seemed like one of its few bright stars; I’m not sure how Breezy will get any initial traction at this point.
Nit: hg-git was actually started by a GitHub employee (he got to it a few weeks before our GSoC student was to work on the very same thing). In order to help the GSoC student, I made the code more pythonic and added some tests, and then ended up holding the bag for several years.
I’ve since given up maintainership of hg-git, because I never use it. I still want to try hgit again some day, but there’s many miles of core hg refactoring to go before it’s worth attempting.
That’s an impressively thorough proposal. I’m quite happy that Facebook is using Mercurial; it helps drive innovation and gives us an alternative to Git that works at scale. Now, I’m not sure how much performance they’ll get: many extensions are in Python, which is a good thing for extensibility, so unless they port extensions to Rust (and lose the flexibility of Python), they won’t get that much of an improvement. Or did I miss something?
Extensions would still benefit from their primitives being faster. The authors acknowledge that issues around FFI might arise, and passing from Rust to Python and back in quick succession is definitely one of them.
Yeah, FFI speed is a concern, and ideally it’d be easier to implement an entire class in Rust and expose some methods to Python, because then it’d be easier to move some low-level parsers into Rust. I did a naive implementation of one of our parsers using nom and it was 100x (not a typo, one hundred times) faster than the C version, but the FFI overhead ended up making it not a win.
Out of curiosity, why is the Rust-Python FFI slower than C-Python FFI? I thought that Rust could generate C-callable symbols and call C directly. On that topic, I wrote a PEG parsing library in C with Python bindings, and in production workloads 90% of the time is spent in FFI and object creation.
Well, with C there’s always the option of writing a CPython extension module directly; at that point it’s not generic FFI anymore.
Often, the issue there - as a glance at python.h etc. suggests - is that many interpreters allow direct manipulation of their memory structures (including creating and destroying objects). For that, they ship a lot of macros and definitions, and you cannot use those from Rust directly. There are two approaches to that: write a library that does exactly what python.h does (on every version of the interpreter!) and use that, or write a small C shim over your Rust code that does just the parts you need.
The big issue seemed to be in creating all the Python objects - I was returning a fairly large list of tuples, and the cpython extension could somewhat intelligently preallocate some things, whereas the Rust I think was having to be a bit dumber due to the wrappers in play.
As an addition to the point about primitives: there are cheap operations that are fast even in Python, there are expensive operations you run several times a day and there are rare operations where you need flexibility and mental-model-fit but can accept poor performance. Having better performance for frequent operations while keeping the flexibility for the long tail could be a win (depends on the effort required and usage patterns, of course).
Leiningen for Clojure once again defaults to the latest version.
Leiningen for Clojure once again defaults to the latest version.
Leiningen doesn’t default to any latest version as far as I know. Leiningen does require you to declare a version for every dependency.
Versioning/pinning is not only about having an API-compliant library though, it’s also about being sure that you can build the exact same version of your program later on. Hyrum’s Law states that any code change may effectively be a breaking one for your consumers. For example:
Of course, pinning is not a panacea: we usually want to apply security fixes and bugfixes immediately. But for the most part, there’s no way we can know a priori whether new releases will be backwards compatible with our software or not. Pinning gives you the option to vet dependency updates and defer them if they require changes to your system.
1: Unless you use version ranges, or dependencies that use them. But that happens so infrequently, and is so strongly advised against, that I don’t think I’ve ever experienced it in the wild.
FYI, Hyrum finally made http://www.hyrumslaw.com/ with the full observation. Useful for linking. :)
Hmm, perhaps I misunderstood the doc I read. I’m having trouble finding it at the moment. I’m not a Clojure user. Could you point me at a good link? Do library users always have to provide some sort of version predicate for each dependency?
Your point about reproducing builds is a good one, but it can coexist with my proposal. Imagine a parallel universe where Bundler works just like it does here and maintains a Gemfile.lock recording precise versions in use for all dependencies, but we’ve just all been consistently including major version in gem names and not foisting incompatibilities on our users. Push security fixes and bugfixes, pull API changes.
Edit: based on other comments I think I’ve failed to articulate that I am concerned with the upgrade process rather than the deployment process. Version numbers in Gemfile.lock are totally fine. Version numbers in Gemfile are a smell.
Oh, yes, sorry for not being clear: I strongly agree that version “numbers” might as well be serial numbers, checksums or the timestamp it was deployed. And I think major versions should be in the library name itself, instead of in the version “number”.
In Leiningen, library users always have to provide some sort of version predicate for each dependency; see https://github.com/technomancy/leiningen/blob/master/doc/TUTORIAL.md#dependencies. There is some specific stuff related to snapshot versions and checkout dependencies, but if you try to build + deploy a project with those, you’ll get an error unless you set up some environment variable. This also applies to boot afaik; the functionality is equivalent to how Java’s Maven works.
Thanks! I’ve added a correction to OP.
Hmm, I’ve been digging more into Leiningen, and growing increasingly confused. What’s the right way to say, “give me the latest 2.0 version of this library”? It seems horrible that the standard tutorial recommends using exact versions.
There’s no way to do that. The Maven/JVM dependency land always uses exact versions. This ensures stability.
Your two submissions make me think David A Wheeler’s summary of SCM security is still timely since the [D]VCS’s on average aren’t built with strong security in architecture or implementation. The only two I know that tried in architecture/design at least were Aegis and especially Shapiro et al’s OpenCM:
Both are defunct since they didn’t get popular. I think it would be beneficial for someone to apply the thinking in Wheeler’s summary and linked papers (esp on high-assurance) to modern DVCS to see what they have and don’t have. Plus the feasibility of strong implementation. I think my design in the past was just the standard mediating and logging proxy in front of a popular VCS with append-only logs of the code itself. A default for when you have nothing better.
I think that’s rather orthogonal. The problem is everybody implemented a “run more commands” feature which runs more commands. It’s not really about the integrity of the code in the repo.
In a sense, yes: if the repo were a read-only artifact, everything would be safer. But somehow we decided that repos need to be read/execute artifacts with embedded commands in them. Behold, the “smart” repo. Crypto signing doesn’t make it safer.
I’ve seen the “dumb” source control tool - speed is a feature, and without a “smart” transport layer of some kind your push/pull or checkin/checkout times become pretty awful. Just compare CVS-via-pserver to Subversion, or tla to bzr.
The thing that’s surprising to me is that it took well over a decade for anyone to notice this problem, since it’s been present in Subversion all these years…
My takeaway is that argv parsing is too fragile to serve as an API contract. And I doubt very much this is the first and only bug of its kind.
If SSH transport had been implemented with calls to some SSH library instead of a fork+exec to an external ‘ssh’ program, this bug would not have happened as it did.
Oh, absolutely argv is too fragile. I’m surprised even considering that this bug survived so long.
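That fragility is easy to demonstrate with any plain Unix tool: an argument that merely begins with “-” silently becomes an option. A throwaway-directory sketch (nothing git- or ssh-specific about it):

```shell
# A file whose *name* looks like an option confuses every argv-based tool.
set -e
dir=$(mktemp -d)
cd "$dir"

touch -- -rf        # create a file literally named "-rf"
test -e ./-rf       # it exists

rm -rf              # "-rf" is consumed as flags, not a filename: nothing removed
test -e ./-rf       # the file survived

rm -- -rf           # "--" ends option parsing, so "-rf" is finally an operand
test ! -e ./-rf     # now it's gone
```

The ssh transport bug is the same shape, except the smuggled “option” (-oProxyCommand=...) runs a command.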
This fucks bisect, defeating one of the biggest reasons version control provides value.
Furthermore, there are tools to easily take both approaches simultaneously. Just git merge --squash before you push, and all your work in progress diffs get smushed together into one final diff. And, for example, Phabricator even pulls down the revision (pull request equivalent) description, list of reviewers, tasks, etc., and uses that to create a squash commit of your current branch when you run arc land.
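A sketch of that squash-before-push flow in a toy repo (branch and file names made up):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo
echo base > app.txt
git add app.txt
git commit -q -m "base"

# A pile of messy work-in-progress commits on a side branch...
git checkout -q -b wip
for msg in "wip" "fix typo" "actually works now"; do
    echo "$msg" >> app.txt
    git add app.txt
    git commit -q -m "$msg"
done

# ...collapsed into a single staged diff, committed once on the main branch.
git checkout -q -
git merge --squash wip >/dev/null
git commit -q -m "feature: one clean commit"
git rev-list --count HEAD
```

The main branch ends up with two commits (the base plus the squashed feature); the three WIP commits never reach it.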
I’m surprised to hear so many people mention bisect. I’ve tried on a number of occasions to use git bisect and svn bisect before that, and I don’t think it actually helped me even once. Usually I run into the following problems:
I love the idea of git bisect but in practice it’s never been worth it for me.
Your second bullet point suggests to me bisect isn’t useful to you in part because you’re not taking good enough care of your history and have broken points in it.
I bisect things several times a month, and it routinely saves me hours when I do. By not keeping history clean as others have talked about, you ensure bisect is useless even for those developers who do find it useful. :(
Right: meaningful commit messages are important but a passing build for each commit is essential. A VCS has pretty limited value without that practice.
It does help for your commits to be at clean points, but it isn’t strictly necessary - you don’t need to run your entire test suite. I usually either bisect with a single spec or isolate the issue into a script that I can run under bisect. And as mentioned elsewhere, you can just bisect manually.
You can run bisect in an entirely manual mode where git checks out the revision for you to tinker with and before marking the commit as good or bad.
There are places where it’s not so great, and there are places where it’s a life-saving tool. I work (okay, peripherally… mostly I watch people work) on the Perl 5 core. Language runtime, right? And compatibility is taken pretty seriously. We try not to break anyone’s running code unless we have a compelling reason for it and preferably they’ve been given two years' warning. Even if that code was written in 1994. And broken stuff is supposed to stay on branches, not go into master (which is actually named “blead”, but that’s another story. I think we might have been the ones who convinced github to allow a different default branch because having it fail to find “master” was kind of embarrassing).
So we have a pretty ideal situation, and it’s not surprising that there’s a good amount of tooling built up around it. If you see that some third-party module has started failing its test suite with the latest release, there’s a script that will build perl, install a given module and all of its dependencies, run all of their tests along the way, find a stable release where all of that did work, then bisect between there and HEAD to determine exactly what merge made it start failing. If you have a snippet of code and you want to see where it changed behavior, use bisect.pl -e. If you have a testcase that causes weird memory corruption, use bisect.pl --valgrind and it will tell you the first commit where perl, run with your sample code, causes valgrind to complain bitterly. I won’t say it works every time, but… maybe ¾ of the time? Enough to be very worth it.
No it doesn’t. Bisect doesn’t care what the commit message is. It does care that your commit works, but I don’t think the article is actually advocating checking in broken code (despite the title) - rather it’s advocating committing without regard to commit messages.
Just git merge --squash before you push, and all your work in progress diffs get smushed together into one final diff.
This, on the other hand, fucks bisect.
Do you know how bisect works? You are binary searching through your commit history, usually to find the exact commit that introduced a bug. The article advocates using a bunch of work in progress commits—very few of which will actually work because they’re work in progress—and then landing them all on the master branch. How exactly are you supposed to binary search through a ton of broken WIP commits to find a bug? 90% of your commits “have bugs” because they never worked to begin with, otherwise they wouldn’t be work in progress!
Squashing WIP commits when you land makes sure every commit on master is an atomic operation changing the code from one working state to another. Then when you bisect, you can actually find a test failure or other issue. Without squashing you’ll end up with a compilation failure or something from some jack off’s WIP commit. At least if you follow the author’s advice, that commit will say “fuck” or something equally useless, and whoever is bisecting can know to fire you and hire someone who knows what version control does.
Do you know how bisect works?
Does condescension help you feel better about yourself?
The article advocates using a bunch of work in progress commits—very few of which will actually work because they’re work in progress—and then landing them all on the master branch. How exactly are you supposed to binary search through a ton of broken WIP commits to find a bug? 90% of your commits “have bugs” because they never worked to begin with, otherwise they wouldn’t be work in progress!
I don’t read it that way. The article mainly advocates not worrying about commit messages, and also being willing to commit “experiments” that don’t pan out, particularly in the context of frontend design changes. That’s not the same as “not working” in the sense of e.g. not compiling.
It’s important that most commits be “working enough” that they won’t interfere with tracking down an orthogonal issue (which is what bisect is mostly for). In a compiled language that probably means they need to compile to a certain extent (perhaps with some workflow adjustments e.g. building with -fdefer-type-errors in your bisect script), but it doesn’t mean every test has to pass (you’ll presumably have a specific test in your bisect script, there’s no value in running all the tests every time).
Squashing WIP commits when you land makes sure every commit on master is an atomic operation changing the code from one working state to another.
Sure, but it also makes those changes much bigger. If your bisect ends up pointing to a 100-line diff then that’s not very helpful because you’ve still got to manually hunt through those changes to find the one that made the actual difference - at that point you’re not getting much benefit from having version control at all.
The page mentions git specifically as being vulnerable. While I’m sure that’s true, it seems highly impractical to attempt to move git away from SHA1. Am I wrong? Could you migrate away from SHA1?
[Edit: I forgot to add, Google generated two different files with the same SHA-1, but that’s dramatically easier than a preimage attack, which is what you’d need to actually attack either Git or Mercurial. Everything I said below still applies, but you’ve got time.]
So, first: in the case of both Mercurial and Git, you can GPG-sign commits, and that will definitely not be vulnerable to this attack. That said, since I think we can all agree that GPG signing every commit will drive us all insane, there’s another route that could work tolerably in practice.
Git commits are effectively stored as short text files. The first few lines of these are fixed, and that’s where the SHA-1 shows up. So no, the SHA-1 isn’t going anywhere. But it’s quite easy to add extra data to the commit, and Git clients that don’t know what to do will preserve it (after all, it’s part of the SHA-1 hash), but simply ignore it. (This is how Kiln Harmony managed to have round-trippable Mercurial/Git conversions under-the-hood.) So one possibility would be to shove SHA-256 signatures into the commits as a new field. Perfect, right?
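A sketch of what that looks like on disk; x-sha256 here is a made-up field name, and --literally just skips git’s object-format validation when writing the hand-rolled commit:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo
echo hi > f
git add f
git commit -q -m initial

# A commit object is just a short text file: fixed headers, blank line, message.
git cat-file commit HEAD

# Hand-roll a commit carrying an extra (made-up) header field.
tree=$(git rev-parse 'HEAD^{tree}')
new=$(printf 'tree %s\nauthor demo <demo@example.com> 1500000000 +0000\ncommitter demo <demo@example.com> 1500000000 +0000\nx-sha256 not-a-real-hash\n\ninitial\n' "$tree" |
      git hash-object -t commit -w --stdin --literally)

# The extra field is part of the object (and so of its SHA-1),
# but ordinary tooling simply carries it along and ignores it.
git cat-file commit "$new" | grep x-sha256
git log --oneline -1 "$new"
```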
Well, there are some issues here, but I believe they’re solvable. First, we’ve got a downgrade vector: intercept the push, strip out the SHA-256, replace it with your nefarious content that has a matching SHA-1, and it won’t even be obvious to older tools anything happened. Oops.
On top of that, many Git repos I’ve seen in practice do force pushes to repos often enough that most users are desensitized to them, and will happily simply rebase their code on top of the new head. So even if someone does push a SHA-256-signed commit, you can always force-push something that’ll have the exact same SHA-1, but omit the problematic SHA-256.
The good news is that while the Git file format is “standardized,” the wire format still remains a bastion of insanity and general madness, so I don’t see any reason it couldn’t be extended to require that all commits include the new SHA-256 field. I’m sure this approach also has its share of excitement, but it seems like it’d get you most of the way there.
(The Mercurial fix is superficially identical and practically a lot easier to pull off, if for no other reason than because Git file format changes effectively require libgit2/JGit/Git/etc. to all make the same change, whereas Mercurial just has to change Mercurial and chg clients will just pick stuff up.)
It’s also worth pointing out that in general, if your threat model includes a malicious engineer pushing a collision to your repo, you’re already hosed because they could have backdoored any other step between source and the binary you’re delivering to end-users. This is not a significant degradation of the git/hg storage layer.
(That said, I’ve spent a decent chunk of time today exploring blake2 as an option to move hg to, and it’s looking compelling.)
Edit: mpm just posted https://www.mercurial-scm.org/wiki/mpm/SHA1, which has more detail on this reasoning.
Plenty of people download OSS code over HTTPS, compile it and run the result. Those connections are typically made using command line tools that allow ancient versions of TLS and don’t have key pinning. Being able to transparently replace one of the files they get as a result is reasonably significant.
Right, but if your adversary is in a position that they could perform the object replacement as you’ve just described, you were already screwed. There were so many other (simpler!) ways they could own you it’s honestly not worth talking about a collision attack. That’s the entire point of both the linked wiki page and my comment.
That said, since I think we can all agree that GPG signing every commit will drive us all insane, there’s another route that could work tolerably in practice.
It is definitely a big pain to get gpg signing of commits configured perfectly, but now that I have it set up I always use it, and so all my commits are signed. The only thing I have to do now is enter my passphrase the first time I commit in a coding session.
Big pain? Add this to $HOME/.gitconfig and it works?
[commit]
    gpgsign = true
Getting gpg and gpg-agent configured properly and getting git to choose the right key in all cases even when sub keys are around were the hard parts.
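For the key-selection part in particular, pinning the exact key in config takes the guesswork away from git. A sketch against a throwaway HOME, with a hypothetical key id; the trailing “!” is gpg’s syntax for forcing that exact (sub)key:

```shell
set -e
export HOME=$(mktemp -d)        # throwaway config; don't touch real dotfiles
export GIT_CONFIG_NOSYSTEM=1

git config --global user.signingkey 'DEADBEEFCAFE1234!'  # hypothetical subkey id
git config --global commit.gpgsign true                  # sign every commit by default
git config --global --get commit.gpgsign
```

With gpg-agent caching the passphrase, this reduces signing to the once-per-session prompt described above.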
That’s exactly what I did.
Sorry, to rephrase: mechanically signing commits isn’t a big deal (if we skip past all the excitement that comes with trying to get your GPG keys on any computer you need to make a commit on), but you now throw yourself into the web-of-trust issues that inevitably plague GPG. This is in turn the situation that Monotone, an effectively defunct DVCS that predates (and helped inspire) Git, tried to tackle, but it didn’t really succeed, in my opinion. It might be interesting to revisit this in the age of Keybase, though.
I thought GPG signing would alleviate security concerns around SHA1 collisions but after taking a look, it seems that Git only signs a commit object. This means that if you could make a collision of a tree object, then you could make it look like I signed that tree.
Is there a form of GPG signing in Git which verifies more than just the commit headers and tree hash?
You are now looking for a preimage attack, and the preimage has to be in a fairly rigidly defined format, and has to somehow be sane enough that you don’t realize half the files got altered. (Git trees, unlike commits, do not allow extra random data, so you can’t just jam a bunch of crap at the end of the tree to make the hash work out.) I’m not saying you can’t do this, but we’re now looking at SHA-1 attacks that are probably not happening for a very long time. I wouldn’t honestly worry too much about that right now.
That said, you can technically sign literally whatever in Git, so sure, you could sign individual trees (though I don’t know any Git client that would do anything meaningful with that information at the moment). Honestly, Git’s largely a free-for-all graph database at the end of the day; in the official Git repo, for example, there is a tag that points at a blob that is a GPG key, which gave me one hell of a headache when trying to figure out how to round-trip that through Mercurial.
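That tag-pointing-at-a-blob situation is easy to reproduce in a scratch repo (contents made up):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q

# Store a bare blob (no commit, no tree anywhere) and aim a tag at it.
blob=$(echo "pretend this is a GPG key" | git hash-object -w --stdin)
git tag gpg-key "$blob"

git cat-file -t gpg-key       # reports "blob", not "commit"
git cat-file blob gpg-key     # prints the key material back
```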
Without gpg signing, you can get really bad repos in general. The old git horror story article highlights these issues with really specific examples that are more tractable.
Though, I don’t want to start a discussion on how much it sucks to maintain private keys, so sorry for the sidetrack.
I don’t see why GPG-signed commits aren’t vulnerable. You can’t modify the commit body, but if you can get a collision on a file in the repo you can replace that file in-transit and nothing will notice.
Transparently replacing a single source code file definitely counts as ‘compromised’ in my book (although for this attack the file to be replaced would have to have a special prelude - a big but not impossible ask).
Here’s an early mailing list thread where this was brought up (in 2006). Linus’s opinion seemed to be:
Yeah, I don’t think this is at all critical, especially since git really
on a security level doesn’t depend on the hashes being cryptographically
secure. As I explained early on (ie over a year ago, back when the whole
design of git was being discussed), the security of git actually depends
on not cryptographic hashes, but simply on everybody being able to secure
their own private repository.
the security of git actually depends on not cryptographic hashes, but simply on everybody being able to secure their own private repository.
This is a major point that people keep ignoring. If you do one of the following:
then the argument that SHA3, or SHA256 should be used over SHA1 simply doesn’t matter.
And here’s the new thread after today’s announcement
(the first link in Joey Hess’s e-mail is broken, should be https://joeyh.name/blog/entry/sha-1/ )
sidenote: i think they look a bit like the classic plan9 fonts :)
Plan 9 fonts were also designed by B&H. Plan 9 uses Lucida Sans Unicode and Lucida Typewriter as its default fonts. Lucida Sans Unicode, with some minor alterations, was renamed Lucida Grande, the original system font on OS X, replaced only recently by Helvetica Neue. It’s funny that several people say this reminds them of Plan 9, but not OS X :-).
However, these fonts are more similar to the Luxi family of fonts (also from B&H) than the Lucida family.
Personally, I am going to continue programming (in acme, of course) using Lucida Grande (yes, I use a proportional font for programming).
What do you like in acme, compared to other editors (vim, Emacs, Atom, Visual Studio Code, Sublime Text…)?
[Comment removed by author]
Does it have any affordance for keybindings, or is it strictly mouse-driven other than text entry? I’ve always been interested in its plugin model, but haven’t had a sense of how I’d like it given my general dislike of using the mouse.
shameless self promotion
If you’re interested in the other Plan 9 editor, sam, there’s an updated version here: http://www.github.com/deadpixi/sam that has scalable font support and extensive support for keybindings.
Sadly none that I’m aware of. Sam is much, much simpler than Acme though, so it’s probably (IMHO) easier to just dive right into.
I would not say it’s significantly simpler, the command language is the same. It lacks interaction through a file system, so the lack of features could be interpreted as being simpler, I guess.
I use sam when I need to edit files on remote computers, but in my opinion the UI model makes it harder to use than acme.
No keybindings outside of basic unix keybindings (C-A, C-E, C-W), sorry.
Not to expressly shit on Acme, but that doesn’t sound like anything editors such as Emacs or Vim can’t do. Well, it depends on how nicely you want to be able to move tiled windows around. I think transpose-frame does this in Emacs, but it’s not mouse-driven.
In Emacs, the only thing on that list you can’t do with the mouse is the first item. All the others are certainly possible; I’ve even bound Mouse4/5 to copy and paste.
Executable text, mutable text (including in win terminal windows), mouse chording, and mouse support in general, structural regexp, integrates well with arbitrary Unix tools, tiled window management, no distracting fluff; no options, no settings, no configuration files, no syntax highlighting, no colors.
Acme is by far the most important tool I use. If it were to disappear from the face of the earth, the first thing I would do is reimplement acme. Luckily, it would not take me very long as acme has very few features to implement, it relies on abstractions, not features.
A good demo: http://research.swtch.com/acme
One of the distinguishing features of Plan 9 software is the rejection of the idea that software always needs constant development. It’s done, it works, it doesn’t need further development.
As someone who has done multiple Go ports to new hardware architectures and operating systems, I would be very unhappy if plan9port would be implemented in Go because I would not be able to use it until I would be finished.
To expand on that, I think macOS uses San Francisco UI nowadays. Helvetica Neue didn’t last long.
Indeed. AFAIK Helvetica Neue was only used by macOS 10.10 - it was replaced with (Apple-designed) San Francisco in 10.11.
It’s funny that several people say this reminds them of Plan 9, but not OS X :-).
well, i’ve never really used os x ;)
I loved the classic Plan 9 pelm font. The enormous and curvaceous curly brackets are still a wonder.
I do this regularly with hg (using ‘hg commit --interactive’), but I find that the typical user of source control tooling finds the partial-commit functionality puzzling. Having it on by default is biasing towards the wrong users in my experience (my anecdata is pretty broad, from having been the guru for all of svn, git, and hg for varying teams over time).
note /u/nwjsmith’s comment
How well does Mercurial work with git servers? Does the hg-git bridge work properly in general? Do you use it at work? Most of my client-related work is done in git and I don’t want to screw things up.
I know some people use hg-git, but how well it works depends heavily on your code review workflow. It gets kind of thorny around the edges when you want to edit history.
I’ve been tinkering with other ideas to try and make it more pleasant, but nothing real has materialized.
This is a bit of an hg FAQ. Here are some responses from a recentish time when this was asked in HN:
In short: easier to use, has some powerful features that git doesn’t have, such as revsets, templating, tortoisehg, giant semi-centralised repos, and changeset evolution
Performance-wise, is it fast enough to deal with code bases of 100k or more lines? I have read some comments stating that it is not very fast.
In general, yes, it’s extremely fast. 100k lines is fine. Mercurial itself is almost 100k lines (ignoring tests, which add more), and I’d classify that as small compared to what hg is used for by FB and Mozilla.
The repo I work in, mozilla-central, has around 19 million lines I believe and it is very fast. I’m sure Facebook has a similar number if not more.
I work for Facebook.
Facebook’s mercurial is faster than Git would be on the same repository, but that’s largely because of tools like watchman hooking in to try to make many operations operate in O(changes), instead of O(reposize). It’s still very slow for many things, especially when updating, rebasing, and so on.
Your comment made me curious, so I ran cloc over my local copy of mozilla-central. By its count there are 18,613,213 lines of code in there at the moment; full breakdown here.
Yes. For example, Facebook’s internal repository, which is hundreds of gigabytes, runs on hg. For really huge repositories (much, much bigger than 100k lines) you can use some of the tricks they have for making things like “hg status” very fast on such a huge repository.
IIRC, Linux was kept in bk for a while before Linus got tired of it and wrote up git.
How has BitKeeper progressed over time?
What advantages does it have over git, bzr, darcs, Mercurial, etc.?
Linux was in bk, under a no-cost closed-source license for the kernel devs. Bitkeeper prohibited attempts to clone/reverse engineer it. A dev reverse engineered it by telnetting to the server port and typing ‘help’. Bitkeeper revoked the license. Linus coded git in the next few weeks.
Linus coded git in the next few weeks.
Let’s not forget that hg was also released within a couple of weeks to replace bk.
Writing a vcs within a few weeks isn’t a task that only Linus can do. ;-)
Just to add more details: Linus was happy using bk. He worked in the same office as Andrew Tridgell. Andrew didn’t use bk and hadn’t agreed to any EULA. Andrew began to reverse engineer the bk protocol (by sniffing network traffic in his office, iirc). Linus asked him to stop doing it. He refused. Linus was forced to write git (and called Andrew an ass, iirc).
Any source for this?
This mostly lines up with stories I’ve heard from people that were present in the kernel community at the time, for what it’s worth. I’ve only ever gotten it as an oral history though, so I can’t really provide any concrete evidence beyond what JordiGH offers in terms of “search the LKML”.
Most of the drama was public on mailing lists, but it’s kind of hard to find. Look at LKML around April 2005 and earlier.
Here’s some of the blow back, https://web.archive.org/web/20060328061810/http://www.realworldtech.com/forums/index.cfm?action=detail&PostNum=3322&Thread=2&entryID=49312&roomID=11
It’s mostly from memory from reading Slashdot and osnews at the time. The parts I’m not 100% certain have iirc next to it.
The website has a “Why?” page that tries to answer some of those questions.
BK/Nested allows large monolithic repositories to be easily broken up into any number of sub-repositories.
“I see you have a poorly structured monolith. Would you like me to convert it into a poorly structured set of micro services?” - Twitter
How can the code “just happen to be owned by Google”?
Author works at Google and is using his work computer to work on this project?
He wouldn’t necessarily have to be using his work computer :(
Google claims ownership of work done on personal time with personal resources?
That’s incredibly shitty of them, if so.
It’s being done on 20% time, from what I understand.
There’s a process to get the company to formally disclaim ownership of things, but then you’re pretty heavily restricted in terms of when you can work on it. If you don’t care about ownership, just getting an OSS license on something is the simpler path by a wide margin.
If it’s useless enough then the process is easy :-)
Shitty, perhaps, but also not uncommon.
Not uncommon, but I normally associate the practice with companies that don’t “get” Open Source, or why devs might pursue side-projects and what their personal IP means for their careers in general.
I wouldn’t normally associate those attitudes with Google. And since a lot of developers refuse to sign agreements signing personal IP over to their employer, I’m surprised to hear Google requires it, given how popular they have been among developers as a “good” employer.
Is anyone using Mercurial instead of Git? I thought about switching to Mercurial once, but now it seems the project is slowly dying. Are there benefits?
Mercurial development is not dead at all:
The userbase is dwindling, but the development, if anything, is speeding up.
I use it almost exclusively for my personal projects.
I’ve found that Mercurial’s plugin system lets you build any workflow you want straight into source control. I also don’t think Mercurial is dying off, just that Github has really pushed Git up and nobody has tried to do something similar for Mercurial.
There’s a couple of people at bitbucket who care about really pushing the envelope with what Mercurial can do. Sean Farley is rolling out Evolve for select bitbucket beta-testers upon request.
Any public information on this change?
I don’t think so, no. Feel free to stop by the #bitbucket or #mercurial channels on freenode to ask questions.
That’s good to hear. I use Mercurial on all my personal projects and strongly prefer it to Git, but reading the blog posts and announcements from Atlassian, it’s really felt like the development velocity there has much more been on the Git side of Bitbucket.
I started using Mercurial for work, and have since grown to prefer it over Git, in large part because of its extensibility, but also its ease of use. Mercurial makes more conceptual sense to me and is easy to figure out from the CLI/help alone. I rarely ever find myself Googling how to do something.
I still like Git though, and it’s likely better for people who don’t like tinkering with their workflows.
Lots of people, including some big names (e.g., Facebook). I find git’s merging more reliable, but prefer hg’s CLI. They both get the job done.
I’d love to know about cases where you find git’s merging to be more reliable. Samples would be awesome, so we can figure out what’s tripping you up.
It’s a known issue.
Sort of. It’s not a known issue that BidMerge (note that we’ve shipped BidMerge, which is an improvement over ConsensusMerge as a concept) produces worse results than Git. I really meant it when I said I’d appreciate examples, rather than handwaving. :)
I was using hg pre-3.0 (via Kiln). The problem that BidMerge is intended to solve is the problem which gave us so much trouble. I can’t speak to how well BidMerge would have fixed that, as the company is no longer in business.
Fair enough. It should be pretty well solved then. Thanks for responding!
It may well have technical advantages, but if you’re working on a project that other people will one day work on, I’d strongly urge you to use git. Being able to use a familiar tool will be far more valuable to other contributors. Look at e.g. Python, which chose mercurial years ago but has recently decided to migrate to git.
Given the size of the repository, it’s not clear that Git would be significantly better or different.
In all the really big repos I’ve used, a limit gets hit and some wacky customizations are applied. The alternative being that you just have to put up with the sluggishness.
Facebook actually hit git’s limits a while back and contributed patches, etc., to Mercurial to make it work at their scale. Really interesting stuff. But, stemming from that observation and other experiences, I am a superfan of breaking up repos in DVCS systems. I maintain a mercurial extension to coordinate many repos in a friendlier fashion than hg subrepos (guestrepo!).
I’m kind of persuaded that dvcs is a smell at a stereotypical company though, I think there’s room for an excellent central VCS out there.
I think where we’re heading with Mercurial over the long term is a set of tools that makes doing centralized-model development painless with DVCS tools, while retaining most of the benefits (smaller patches, pushing several in a group, etc) of a DVCS workflow. I don’t think it’s a smell at all.
As for splitting repositories, there are definitely cases where it makes sense, but there’s also a huge benefit to having everything be in one giant repository.
(Disclaimer: I work on source control stuff for a big company, with a focus on Mercurial stuff whenever possible.)
FWIW, I use git with mozilla-central and find it a much more pleasing experience than hg (which I still export to when pushing to shared remote repos). That said, it is also what I am more familiar with, although I did use hg exclusively for a year or so.
I really enjoy having everything in the game repo for many reasons, such as the lack of syncing overhead, but it does tend to push the performance limits of version control.
I’m interested in this but I’m hung up on the bespoke ‘Fair Source’ license. You mention that it is meant to be used as Fair Source ___ where blank is the number of users before you have to start paying. But I don’t see a user limit anywhere on the site. Without that limit specified can it be assumed to be infinite? It’s hard to sell new licenses inside an environment where the lawyers have already taken on the GPL vs LGPL vs BSD vs MIT vs APLv2 and drawn lines on what they want to risk litigation on.
Thanks for the question. The use limit (15) for self-hosted Sourcegraph is specified in the LICENSE file: https://src.sourcegraph.com/sourcegraph@master/.tree/LICENSE. Sourcegraph.com is free to use for everyone.
We worked with a well-known open-source lawyer to draft Fair Source. If you’re using Sourcegraph for a team of above 15 people (and paying us to do so), then it would be a standard commercial license. Fair Source enables us to make the source code publicly available and to let teams with fewer than 15 users try it out both free as in freedom and free as in beer.
So, I think you’d raise a lot fewer hackles if you totally reworded your elevator pitch on fair.io:
The Fair Source License functions just like an open-source license—up to a point. Once your organization hits the license’s specified user limit, you will pay a licensing fee to continue using the software.
It’s not at all open-source. You’re restricting my right to redistribute, I can’t meaningfully sell my modified version, etc. It’s shared source with free for N users. Maybe something like:
The Fair Source License grants everyone the ability to see the source code and makes the license free for a limited number of users. It attempts to offer some of the benefits of open source software while retaining the ability to profit from a codebase.
I have nothing against trying to sell software, but when you (unintentionally or not) make it look like you’re being “open source”, you’re going to have a bad time.
Excellent to know. I should’ve checked the source… :) I looked at your lawyer’s blog and they do seem to be legit which is a selling point. I would say that I wouldn’t have a hard time selling this to the powers that be.
That said I’d really like to see C/C++ support in srclib. Thanks for the response.
C/C++ is on the roadmap in the next couple of months. Shoot me an email at firstname.lastname@example.org and we can let you know when that’s ready :)
In the meantime, feel free to check out our code analysis library, which is completely open source: https://srclib.org
I get what they’re going for, but defining things as loosely as by “people using the code” is fairly meaningless. What if I have zero users because people don’t use a particular system? Can I then install it on 1000 servers or only 25? What if the code (on a different system) sends an email alert to 1000 people, are they all “users” or just the person that actually interacts with the system?
This connects to another issue with new/unfamiliar licenses: the concern that the protections or guarantees they claim to provide are not actually ensured. The advantage of a familiar license is that the language has been vetted by a number of entities, and you can rest reasonably assured that it does what it claims. I do not know if this particular license has any problems with the assignment of rights or anything like that, but I do not have the same level of assuredness I would with a more usual license.
I think part of what journeysquid was trying to point out is that this license is worded so imprecisely that it’s impossible to know when you’ve violated the terms.
Also, can I split my company into 25 people groups to go around the 25 people limit?
Oddly, this was sort of what I was thinking: have Venn-diagram groups of teams (which actually fits with Conway’s law) and just end-run around the user limit with a containerized Sourcegraph for each team.
Maybe all of us should be a bit more loud mouthed and get ourselves heard?
Because I don’t want to be an hg island in an ocean of git, I keep talking about hg long after most of my audience has left the building.
What are the advantages of Mercurial over Git? I’ve never used Mercurial, but have seen a number of very enthusiastic users of it online. What about it inspires such a strong preference?
It’s a variety of things, but in broad strokes (for me, anyway):
A specific thing that looks minor, but went from “neat toy” to “how did I do work without this” in a matter of months: https://selenic.com/hg/help/revsets
Some of those are minor (terms are easy-ish to relearn), and some are major (evolution was a significant productivity improvement for me when I worked it into my workflow about 3 years ago, and we’re on a good path towards working out the lingering problems and shipping it more broadly).
Then there’s some architectural decisions in the codebase. It matters less to typical end-users, but it’s made it feasible for Facebook to do some really neat work around lazy-loading file content, and for some related work that I’ve been doing elsewhere.
Off the cuff, mercurial does the job right, whereas git is a sea of hackery.
Whenever I collaborate with a junior-medium experienced git user, they screw up - regularly, repeatedly, and often, badly. This does not happen nearly to the same extent in HG.
I still need to rewrite this response for the Mercurial FAQ, because you just asked a FAQ:
This is the Official LastPass Death Pool thread. You can pick what day you think the LastPass browser extension will die, whoever is closest wins a 1 year subscription to the competitor of their choice on me. Death is considered to have happened when any of the following happens, and will be interpreted generously:
August 20th, 2016
Sadly, I don’t want a competitor subscription, I want an open source, well written alternative I can host on my own server and generate my own keys for.
I’m getting more and more on board with federated services these days. Then I wouldn’t have these problems when companies sell/die/resurrect/reinvent.
Along with what durin42 and ChadSki mentioned, I think pass is the other free software competitor. I’m evaluating these three to pick my replacement - LastPass was one of exactly two pieces of closed-source software I still use and I am grimly unsurprised to get burned again. Oddly enough, though LastPass is super-chirpy on Twitter, they haven’t responded to me.
I use pass and like it a lot, but I’m not sure it’s a replacement for LastPass (certainly not without a lot of porting and frontend work). There’s a Firefox extension, but it’s immature, I’m not aware of any Chrome plugin at all, and I don’t know if it’s been comfortably ported to mobile platforms or Windows.
Sigh, that sucks. I really wanted to use pass, but honestly the CLI LastPass has, plus the extensions and apps, are a hard combination to beat.
Is there anything wrong with http://keepass.info/ ?
I investigated all the options about 18 months ago, keepass was nice in principle but the UX was unworkable in practice. If memory serves the Chrome plugin was either non-existent or didn’t work anywhere near as well as LastPass’s.
It’s possible this has since changed, and I’d expect an uptick in development due to this announcement.
“unworkable”? You can tell it wasn’t designed by hipsters, but it works pretty well. I’ve used it for about 5 years and think it’s great.
I used to use the FF extension, but I can honestly say that keepass’s “Auto-Login” feature is much less hassle than having to go and install a plugin into every computer’s browser, keeping it up to date, blah, blah.
Rants about plugins aside, keepass has a nice SSH-key plugin: KeyAgent. I love this plugin and use it+keepass to manage all my SSH keys now too. I actually roll over my SSH keys now because it’s so easy, something I never got into the habit of doing even after 15 years of using SSH.
My issue with KeePass at the moment is that I want consistent two-factor authentication (via Yubikey) everywhere, even on my phone. You can get Yubikey support on desktop via a plugin; I’m not sure about the phone, and the browser plugins don’t support it.
Based on the HN thread, https://passopolis.com/ sounds vaguely promising, but I haven’t done much research yet.
That’s just Mitro, which you and I assessed quickly and both found wanting due to having too many moving parts, but that can likely be fixed.
Mitro released their source when they shut down, didn’t they?
June 1st, 2016.
Also, this news makes me profoundly sad. I use Yubikey two-factor with it, and I guess I am now searching for an alternative that supports two-factor everywhere (Chrome, App, Android).
Already made my switch to 1Password after waking up to that announcement. It’s been a slight adjustment but I’m okay with it.
December 25th 2015
February 14, 2016
March 25, 2016
Git is powerful. Even now, at version 3.1, hg help log lists about a dozen options; git help log gives me about 60 pages of output.
Wow. How could one think that was an argument in favor of git?
Lots of documentation is a bad thing?
No, not at all. The point is that git-log has many, many not-quite-orthogonal flags, whereas Mercurial has fairly expressive query and template languages that obviate such flags. An example is the revset “grep” operator, instead of git log --grep. The former can be used anywhere a rev-like is accepted, whereas the latter is only useful on log and its interaction with other flags has to be documented (at least on some level). Does that make sense?
(One of the things that came out of that thread is fixing some docs and making some things more discoverable.)
A git query language is a great idea and should be stolen from the hg guys. -G, --grep, and pickaxe could all be generalized.
Fair enough, yeah. Most of my work is writing docs, and most of my free time has been in ecosystems where the docs are terrible, so I’m more likely to be like “oh thank God” when I see a bunch of things. :)
I think the main problem with the git man pages is that they’re too extensive, overwhelming the curious user with extraneous information. I’m not alone in this opinion I think given the existence of http://git-man-page-generator.lokaltog.net/
It’s also “information” that’s not explained. There are a lot of places where they reference some non-obvious concept by its name, and there’s no cross-reference to tell you where to learn what it is, and if you Google the phrase it turns out this is the only occurrence anywhere, ever.
Probably depends if you say things like “there’s so much to learn!” with a grin or a grimace. It’s (sometimes) a proxy for the complexity (difficulty) of using a tool. It makes it harder to find the option I want. There’s a cognitive load because I have to look at each option and decide whether it’s for me or not. Assuming even only a single yes/no per page, that’s 60 decisions. I’m exhausted.
Yeah; I helped Augie draft this response. I think we both took some SAN damage over the course of this thread.
I’m pretty sure git’s help just opens you to the manpage rather than a terse briefing.
This offers some interesting funding ideas that I like, but I don’t think it identified any actual problem (that I agree with). The closest problem statements I could see were these:
As we have moved to more and more niche tools, it becomes harder to justify the time investment to become a contributor.
The other problem is the growing imbalance between producers and consumers. In the past, these were roughly in balance. Everyone put time and effort in to the Commons and everyone reaped the benefits. These days, very few people put in that effort and the vast majority simply benefit from those that do. This imbalance has become so ingrained that for a company to re-pay (in either time or money) even a small fraction of the value they derive from the Commons is almost unthinkable.
In the abstract, these seem like interesting problems. But is there hard evidence that this is causing a serious problem in the proliferation of free software?
It seems to me like the problem being described here is one of power imbalance. Namely, that there is a small set of contributors and a large group of users. You might find this inherently disturbing, but what are its real world implications? Should I, as a programmer who contributes to free software, feel bad that there are people using it that don’t give back to my project? (I certainly do not!)
In the end though, it is a bleak landscape right now.
And this is where I’m like: huh? Free software is flourishing. Compare the rise and proliferation of code sharing today with ten years ago. There are vast networks of online communities collaborating—in the open—on free software for the whole world to use.
What exactly is “bleak” about today? Is there some credible threat to free software that is looming in the shadows waiting to destroy the free sharing of code as we know it today?
The threat is the persistent and pervasive burnout amongst people working on projects that are OMG-level critical to the tech sector. A lot of people are starting to step back from major projects like Python, Postgres, Django, Ruby, etc, and that’s going to have an impact. Most of the people leaving are the ones that feel like what used to be a hobby is now a full-time, unpaid job. If we don’t figure out a better way to support those people, we’re going to have a bad time.
I just wanted to pop up a level.
I feel like I support the message you intended to convey in the OP: let’s work on helping to fund free software contributors. That is a noble goal that is hard to disagree with. I thought that part of your post was pretty good. The problem I’m having is with your framing; frankly, you come across as an alarmist with the idea that free software is going to be in huge huge trouble unless we figure out some sort of funding for free software contributors. It really put me off to be honest.
In the last 18 months we have seen some of the consequences of a lack of funding: Heartbleed exemplified the problems that OpenSSL has been suffering from for years due to chronic underfunding.
OpenBSD nearly ran out of money to cover the cost of its electricity usage.
I think there are plenty of examples of Open Source projects lacking a reasonable financial backing.
Unfortunately, I don’t have any bright ideas for solving this problem (but I do order OpenBSD CD’s twice a year :~])
Right… Maybe I misunderstood the OP. I wouldn’t have considered either of those projects as examples, because once they got into trouble, others stepped in to help out. To me, this seems like things are working great and that there’s no cause for alarm. From the OP’s tone/phrasing, I was expecting to hear about critical projects that had become completely defunct (none of OpenSSL nor OpenBSD nor PyPI fit that description).
Don’t you think it would have been much cheaper to prevent these problems than to scramble to fix them after the fact? Certainly, when people are burned out to the point of leaving a project, there is a huge transaction cost to someone else stepping in and getting up to speed, even if we assume that there will always be somebody willing and able to do so.
Of course! I’ve stated several times in this thread that I support more funding! What I don’t understand is the alarm.
The threat is the persistent and pervasive burnout amongst people working on projects that are OMG-level critical to the tech sector.
Who is responsible for this threat? Can you provide examples of critical open source projects that have become defunct (i.e., no longer useful) because of burnout?
A lot of people are starting to step back from major projects like Python, Postgres, Django, Ruby, etc, and that’s going to have an impact.
Can you elaborate? A lot of people take steps back from projects—not just major projects. Is there a particular reason why you think this is particularly bad today? And if people take a step back from these projects, is there some reason to believe that the slack won’t be picked up by other (new or old) contributors?
Most of the people leaving are the ones that feel like what used to be a hobby is now a full-time, unpaid job.
That seems like a perfectly legitimate reason to leave a project. Sometimes you lose your passion for a project. It happens, and not just in free software. Why is this a major threat to free software?
If we don’t figure out a better way to support those people, we’re going to have a bad time.
You really haven’t made a convincing case for why you think this is true. In particular, free software is flourishing in both quantity and quality, yet you seem to completely ignore this point.
Rubygems.org has had issues like ‘gems with native dependencies don’t install on Windows’ open for 8+ months because it is effectively unmaintained, due to the maintainers being burned out.
The critical vulnerability with YAML a few months back only happened because the maintainers had ‘investigate if that bug affects gemspecs’ on their TODO list but couldn’t find the time to do it.
AT&T deciding to get rid of their Ruby open source contributions has harmed Ruby and Rails immensely. The pernicious thing about issues like those in the blog post is that you don’t realize they’re happening until they’ve already happened. It’s difficult to quantify in the moment.
I think that OSS looks really great on the surface: you see more projects than have ever existed in the past, more companies using it, and just a general greater acceptance across the board. However, much like a family running up tens of thousands of dollars of credit card debt in order to “keep up”, if you look below the surface at the “financials” of OSS you’ll see that a frightening amount of really critical stuff is severely under-maintained, if it is maintained at all.
This, I think, is what coderanger is speaking to when he’s talking about the landscape. The problem actually gets a lot worse the more popular OSS becomes if there isn’t also a large enough investment back into these projects by enough of their users. As OSS becomes more accepted, more people use it; as more people use it, demand grows, which places additional pressure on the maintainers of that software. They start getting more people submitting bug reports, more people demanding fixes, more people yelling at them when something doesn’t go their way, and I think for a lot of maintainers the project that used to be fun to work on in their spare time starts to become something they dread touching, because it brings with it feelings of guilt and anxiety and a constant need to be fighting fires.
In the end, a large number of projects, even well-written projects, without contributors or maintainers is a pretty bad outcome if we push enough of them away.
I think that OSS looks really great on the surface, you see more projects than have ever existed in the past, more companies using it, and just a general greater acceptance across the board.
I’m having a difficult time understanding why these great attributes of free software are being qualified with “on the surface.” Why are these surface level qualities? The increase in acceptance, quantity and quality of free software don’t seem like surface level qualities to me. They seem like deep and entrenched improvement. I kindly ask you to compare the state of free software today with the state of free software ten years ago. At least from my perspective, the difference and improvement is astounding.
I otherwise take your point though. I totally get that a really important project (like PyPI!) is critical to maintain. What I don’t understand is the alarm. As you described in another comment, you eventually couldn’t keep up with PyPI any more and companies stepped in to fund it. If they disappear, and PyPI stops working, do we think that some other company won’t jump in and foot the bill? I certainly think someone would. That seems OK to me.
We have no reason to suspect anyone else would fund it; it took us years to work out the current deals that keep things just on this side of failure. If Rackspace or Fastly pulled support tomorrow, we would have to start that all over again. We have contracts in place where possible to diffuse some of the risk, but it’s still a “nod and a handshake”-based mess. Rubygems is similar in a lot of ways; NPM has resources behind it from VCs. This is not a pretty picture.
That brings up a point that I don’t think was in your post: running infrastructure for some projects is a big chore, and typically getting funding for that is hard or impossible. Most projects I know run either out of basements or off a VPS somewhere, so if they get wildly popular they just fall over.
The best single example I can cite is Python packaging and its whole ecosystem. Just a few years ago, it was almost unusable due to years of neglect. PyPI was down frequently, and pip was difficult to install, slow, and very insecure. While it’s not the only factor that made things better, a huge part of the improvement was due to one man (Donald Stufft) and the fact that he has had financial support in working first 50% on packaging via Rackspace and now 100% via HP. If he lost that funding, I have no doubt he would have to scale down his efforts, and given what happened before I would call that a critical issue to the Python community. We have no backup plan; if HP’s generosity runs out, the fallback is to just accept packaging being on a slow slide back into the dark.
I can be more explicit, I was working on packaging prior to funding from Rackspace or HP and I was heading towards burn out pretty rapidly. I was forgoing spending time with my family or doing anything else to try and find time to work on it because, while I’m not the only person, I’m one of (if not the) primary driving force currently. The funding from Rackspace and now HP has given me the ability to dedicate time to it, without forgetting what my family looks like. You can look at OpenSSL and GPG for similar situations. There are countless tools at varying levels of critical-ness to the infrastructure of organizations (or to the internet as a whole) that have little to no funding.
This sounds more convincing. It would have helped me interpret your OP more charitably with these examples in your post.
I still don’t think these examples warrant the level of alarm in your OP. It sounds like the system, as is, is working great: underfunded critical projects are getting attention after we notice they need attention. I personally don’t see that as a major problem in and of itself.
That’s fair. It is hard to see this from the outside sometimes. As someone with friends in more or less every major FOSS project, all I hear is a sea of discontent and burnout. As dstufft pointed out, though, this has stayed well hidden for years. I think the Python and DevOps communities in particular are making huge strides in it being okay to talk about burnout in public, but a lot of it is still in hidden backchannels (-dev IRC channels, private Slacks, contributor-only mailing lists, etc). The saving grace so far is that each time someone has flamed out, another has stepped up to replace them. That’s a terrible way to get forward progress though, especially when you see the massive value companies are extracting from our collective work.
I see. That’s interesting. When you frame it that way, it seems like one of the problems you’re trying to address is to make it OK to talk about burnout. That seems like a great goal.
That’s part of it, but it’s also important that we all start realizing how much of our critical infrastructure is maintained as a side project, not as something full-time. I’ve seen this happen to a bunch of projects, including some of my own.
For my own part, I basically end up telling people that I accept patches to the projects I’m burned out on, but in a world where the conversation included them wanting to hire someone to do the work, I can think of a handful of people I could propose as potential contractors with the right expertise. In general, though, people have an attitude that precludes this for some reason. Generally, if the subject of paying for a feature they need comes up, they leave upset.
A friend spent a period working on his project by soliciting donations, and it basically fizzled after about 18 months - the donations from companies dried up, and that was the end of the road. Now he’s got a corporate patron, and it’s fine, but that’s still entirely too rare.
It’s worth considering that today much of open source is created largely by people in a very substantial position of privilege. People like myself who can afford to be self-employed or not even get paid at all for extended periods of time. Some maintainers are those who got lucky with an employer who permits them to spend some amount of their time on open source work.
And the effect? We in a position of privilege gain yet more privilege. Because of my open source work (and the ability to do it), I get way more interest from eagerly-hiring companies than any of my friends without a Github repo. I gain more public respect and recognition because I can afford to do this. It gives me the luxury of being a lot more picky.
Free software may be flourishing, but who are the maintainers and contributors who are flourishing with it?
Having more options for funding open source enables a greater diversity of people to participate, and I think that’s a good thing for both software and people.
I’m having a hard time parsing your central message. You’re saying more funding is good. Great, I agree. I’m taking issue with this idea that free software is somehow in a boat load of trouble today. As the OP says, it’s a “bleak landscape right now.” Huh?
[EDIT] See some of my other comments for more explanation. :-)
I’m not arguing that the end of the world is here today. It’s a social issue, like race/gender/income inequality. By ideal societal standards, the FOSS ecosystem is not in good shape (“bleak landscape right now” are not my words, but I can understand the sentiment). Probably far worse than corporate IT in general, which is not great to begin with.
Even at the most regrettable and embarrassing times in our society’s history, even during slavery, our GDP continued to grow. It’s dangerous to ignore systemic problems just because the metrics are going up and to the right.
I don’t know how to respond to this. Our perceptions of reality are just way too different. Ideal societal standards? Corporate IT? Slavery? GDP? Income equality? Holy moly.
Ideal societal standards? Slavery? GDP? Income equality? Holy moly.
Hm? Are those questions or just mocking? :/
They are questions of a baffled reader. I was inquiring: how is free software “bleak”? What is justifying all this alarm?
Instead, I’m met with a comment that strolls into a whole bunch of seemingly unrelated topics. What more can I say? At a certain point, I have to acknowledge that we’re speaking way past one another and cut the conversation short.
At a certain point, I have to acknowledge that we’re speaking way past one another and cut the conversation short.
By the way, I’ve used your Go toml library, good work. I hope producing great FOSS work continues to be feasible for you. :)
Thanks. Me too! :-)
Thanks - exactly my own feelings on these topics, but I’ve tried and never been able to say them very well.