The interface of Git and its underlying data models are two very different things, that are best treated separately.
The interface is pretty bad. If I wasn’t so used to it I would be fairly desperate for an alternative. I don’t care much for the staging area, I don’t like having to clean up my working directory every time I need to switch branches, and I don’t like how easy it is to lose commits from a detached HEAD (though there’s always git reflog, I guess).
The underlying data model however is pretty good. We can probably ditch the staging area, but apart from that, viewing the history of a repository as a directed graph of snapshots is nice. It captures everything we need. Sure, patches have to be derived from those snapshots, but we care less about the patches than we care about the various versions we saved. If there’s one thing we need to get right, it’s those snapshots. You get reproducible builds & tests from them, not from patches. So I think patches are secondary. I used to love DARCS, but I think patch theory was probably the wrong choice.
Now one thing Git really really doesn’t like is large binary files. Especially if we keep changing them. But then that’s just a compression problem. Let the data model pretend there’s a blob for each version of that huge file, even though in fact the software is automatically compressing & decompressing things under the hood.
What’s wrong with the staging area? I use it all the time to break big changes into multiple commits and smaller changes. I’d hate to see it removed just because a few people don’t find it useful.
Absolutely, I would feel like I’m missing a limb without the staging area. I understand that it’s conceptually difficult at first, but imo it’s extremely worth the cost.
Do you actually use it, or do you just do git commit -p, which only happens to use the staging area as an implementation detail?
And how do you test the code you’re committing? How do you make sure that the staged hunks aren’t missing another hunk that, for example, changes the signature of the function you’re calling? It’s a serious slowdown in workflow to need to wait for CI rounds, stash and rebase to get a clean commit, and push again.
I git add -p to the staging area and then diff it before generating the commit. I guess that could be done without a staging area using a different workflow but I don’t see the benefit (even if I have to check git status for the command every time I need to unstage something (-: )
As for testing, since I’m usually using Github I use the PR as the base unit that needs to pass a test (via squash merges, the horror I know). My commits within a branch often don’t pass tests; I use commits to break things up into sections of functionality for my own benefit going back later.
Just to add on, the real place where the staging area shines is with git reset -p. You can reset part of a commit, amend the commit, and then create a new commit with your (original) changes or continue editing. The staging area becomes more useful the more you do commit surgery.
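For concreteness, a minimal sketch of that surgery on the most recent commit (assuming you want to pull a few hunks out of HEAD into their own commit):

# split the newest commit in two
git reset -p HEAD~    # interactively pull the chosen hunks back out of the index
git commit --amend    # rewrite the commit without them
git commit -a         # the pulled-out hunks are still in the working tree; commit them separately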
Meh, you don’t need a staging area for that (or anything). hg uncommit -i (for --interactive) does quite the same thing, and because it has no artificial staging/commit split it gets to use the clear verb.
I guess that could be done without a staging area using a different workflow but I don’t see the benefit
I don’t see the cost.
My commits within a branch often don’t pass tests;
If you ever need to git bisect, you may come to regret that. I almost never use git bisect, but for the few times I did need it, it was a life saver, and commits that pass tests greatly facilitate it.
I bisect every so often, but on the squashed PR commits on main, not individual commits within a PR branch. I’ve never needed to do that to diagnose a bug. If you have big PRs, don’t squash, or don’t use a PR-based workflow, that’s different of course. I agree with the general sentiment that all commits on main should pass tests for the purposes of bisection.
I use git gui for committing (the built-in git gui command), which lets you pick by line, not just hunks. Normally the things I’m excluding are stuff like enabling debug flags, or just extra logging, so it’s not really difficult to make sure it’s correct. Not saying I never push bad code, but I can’t recall an instance where I pushed bad code because of that. I also use the index to choose parts of my unfinished work to save in a stash (git stash --keep-index), and sometimes if I’m doing something risky and iterative I’ll periodically add things to the staging area as I go so I can have some way to get back to the last known good point without actually making a bunch of commits (I could rebase after, yeah, but meh).
It being just an implementation detail in most of that is a fair point though.
I personally run the regression test (which I wrote) to test changes.
Then I have to wait for the code review (which in my experience has never stopped a bug going through; when I have found bugs in code reviews, it was always “out of scope for the work, so don’t fix it”) before checking it in. I’m dreading the day when CI is actually implemented, as it would slow down an already glacial process [1].
Also, I should mention I don’t work on web stuff at all (thank God I got out of that industry).
[1] Our customer is the Oligarchic Cell Phone Company, which has a sprint of years, not days or weeks, with veto power over when we deploy changes.
Author of the Jujutsu VCS mentioned in the article here. I tried to document at https://github.com/martinvonz/jj/blob/main/docs/git-comparison.md#the-index why I think users don’t actually need the index as much as they think.
I missed the staging area for at most a few weeks after I switched from Git to Mercurial many years ago. Now I miss Mercurial’s tools for splitting commits etc. much more whenever I use Git.
Thanks for the write up. From what I read it seems like with Jujutsu if I have some WIP of which I want to commit half and continue experimenting with the other half I would need to commit it all across two commits. After that my continuing WIP would be split across two places: the second commit and the working file changes. Is that right? If so, is there any way to tag that WIP commit as do-not-push?
Not quite. Every time you run a command, the working copy is snapshotted and becomes a real commit, amending the previous working-copy commit. The changes in the working copy are thus treated just like any other commit. The corresponding thing to git commit -p is jj split, which creates two stacked commits from the previous working-copy commit, and the second commit (the child) is what you continue to edit in the working copy.
Your follow-up question still applies (to both commits instead of the single commit you seemed to imagine). There’s not yet any way of marking the working copy as do-not-push. Maybe we’ll copy Mercurial’s “phase” concept, but we haven’t decided yet.
Way I see it, the staging area is a piece of state needed specifically for a command line interface. I use it too, for the exact reason you do. But I could do the same by committing it directly. Compare the possible workflows. Currently we do:
# most of the time
git add .
git commit
# piecemeal
git add -p .
# review changes
git commit
Without a staging area, we could instead do this:
# most of the time
git commit
# piecemeal
git commit -p
# review changes
git reset HEAD~ # if the changes are no good
And I’m not even talking about a possible GUI for the incremental making of several commits.
Personally I use git add -p all of the time. I’ve simply been burned by the other way too many times. What I want is not to save commands but to have simple commands that work for me in every situation. I enjoy the patch selection phase. More often than not it is what triggers my memory of a TODO item I forgot to jot down, etc. The patch selection is the same as reviewing the diff I’m about to push, but it lets me do it incrementally so that when I’m (inevitably) interrupted I don’t have to remember my place.
From your example workflows it seems like you’re interested in avoiding multiple commands. Perhaps you could use git commit -a most of the time? Or maybe add a commit-all alias?
Never got around to writing that alias, and if I’m being honest I quite often git diff --cached to see what I’ve added before I actually commit it.
I do need something that feels like a staging area. I was mostly wondering whether that staging area really needed to be implemented differently than an ordinary commit. Originally I believed commits were enough, until someone pointed out pre-commit hooks. Still, I wonder why the staging area isn’t at least a pointer to a tree object. It would have been more orthogonal, and likely require less effort to implement. I’m curious what Linus was thinking.
Very honourable to revise your opinion in the face of new evidence, but I’m curious to know what would happen if you broadened the scope of your challenge with “and what workflow truly requires pre-commit hooks?”!
Hmm, that’s a tough one. Strictly speaking, none. But I can see the benefits.
Take Monocypher for instance: now it’s pretty stable, and though it is very easy for me to type make test every time I modify 3 characters, in practice I may want to make sure I don’t forget to do it before I commit anything. But even then there are 2 alternatives:
I use git add -p all the time, but only because Magit makes it so easy. If I had an equally easy interface to something like hg split or jj split, I don’t think I’d care about the lack of an index/staging area.
# most of the time
git add .
Do you actually add your entire working directory most of the time? Unless I’ve just initialized a repository I essentially never do that.
Here’s something I do do all the time, because my mind doesn’t work in a red-green-refactor way:
Get a bug report
Fix bug in foo_controller
Once the bug is fixed, I finally understand it well enough to write an automated regression test around it, so go do that in foo_controller_spec
Run test suite to ensure I didn’t break anything and that my new test is green
Add foo_controller and foo_controller_spec to staging area
Revert working copy (but not staged copy!) of foo_controller (but not its spec)
Run test suite again and ensure I have exactly one red test (the new regression test). If yes, commit the stage.
If no, debug spec against old controller until I understand why it’s not red, get it red, pull staged controller back to working area, make sure it’s green.
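For what it’s worth, a command-level sketch of those steps (file names taken from the example above; the restore invocations are just one way to do it):

git add foo_controller foo_controller_spec   # stage the fix and the new regression test
git restore -s HEAD foo_controller           # working copy of the controller reverts to HEAD; the staged fix stays
# run the suite: expect exactly one red test (the new regression test)
git commit                                   # commit the staged fix and spec
git restore foo_controller                   # copy the fixed controller from the index back into the working tree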
—
Yeah, I could probably simulate this by committing halfway through and then doing some bullshit with cherry-picks from older commits and in some cases reverting the top commit but, like, why? What would I gain from limiting myself to just this awkward commit dance as the only way of working? That’s just leaving me to cobble together a workflow that’s had a powerful abstraction taken away from it, just to satisfy some dogmatic “the commit is the only abstraction I’m willing to allow” instinct.
Do you actually add your entire working directory most of the time?
Yes. And when I get a bug report, I tend to first reproduce the bug, then write a failing test, then fix the code.
Revert working copy (but not staged copy!) of foo_controller (but not its spec)
Sounds useful. How do you do that?
Revert working copy (but not staged copy!) of foo_controller (but not its spec)
Sounds useful. How do you do that?
You can checkout a file into your working copy from any commit.
Yes. And when I get a bug report, I tend to first reproduce the bug, then write a failing test, then fix the code.
Right, but that was just one example. Everything in your working copy should always be committed at all times? I’m almost never in that state. Either I’ve got other edits in progress that I intend to form into later commits, or I’ve got edits on disk that I never intend to commit but in files that should not be git ignored (because I still intend to merge upstream changes into them).
I always want to be intentionally forming every part of a commit, basically.
Sounds useful. How do you do that?
git add foo_controller <other files>; git restore -s HEAD foo_controller, and then git restore foo_controller will copy the staged version back into the working set.
TBH, I have no idea what “git add -p” does off hand (I use Magit), and I’ve never used staging like that.
I had a great example use of staging come up just yesterday. I’m working in a feature branch, and we’ve given QA a build to test what we have so far. They found a bug with views, and it was an easy fix (we didn’t copy attributes over when copying a view).
So I switched over to views.cpp and made the change. I built, tested that specific view change, and in Magit I staged that specific change in views.cpp. Then I committed, pushed it, and kicked off a pipeline build to give to QA.
I also use staging all the time if I refactor while working on new code or fixing bugs. Say I’m working on “foo()”, but while doing so I refactor “bar()” and “baz()”. With staging, I can isolate the changes to “bar()” and “baz()” in their own commits, which is handy for debugging later, giving the changes to other people without pulling in all of my changes, etc.
Overall, it’s trivial to ignore staging if you don’t want it, but it would be a lot of work to simulate it if it weren’t a feature.
What’s wrong with the staging area? I use it all the time to break big changes into multiple commits and smaller changes.
I’m sure you do – that’s how it was meant to be used. But you might as well use commits as the staging area – it’s easy to commit and squash. This has the benefit that you can work with your whole commit stack at the same time. I don’t know what problem the staging area solves that isn’t better solved with commits. And yet, the mere existence of this unnecessary feature – this implicitly modified invisible state that comes and crashes your next commit – adds cognitive load: commands like git mv, git rm and git checkout pollute the state, then git diff hides it, and finally, git commit --amend accidentally invites it into the topmost commit.
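For instance, a small sketch of how that invisible state sneaks in (hypothetical file names):

git mv old.c new.c     # the rename is quietly staged as a side effect
git diff               # shows nothing: it only compares the working tree against the index
git commit --amend     # the staged rename rides along into the topmost commit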
The combo of being not useful and a constant stumbling block makes it bad.
I don’t know what problem the staging area solves that isn’t better solved with commits.
If I’ve committed too much work in a single commit how would I use commits to split that commit into two commits?
Using e.g. hg split or jj split. The former has a text-based interface similar to git commit -p as well as a curses-based TUI. The latter lets you use e.g. Meld or vimdiff to edit the diff in a temporary directory and then rewrites the commit and all descendants when you’re done.
That temporary directory sounds a lot like the index – a temporary place where changes to the working copy can be batched. Am I right to infer here that the benefit you find is in having a second working copy in a temp directory, because it works better with some other tools that expect to work on files?
The temporary directory is much more temporary than the index - it only exists while you split the commit. For example, if you’re splitting a commit that modifies 5 files, then the temporary directory will have only 2*5 files (for before and after). Does that clarify?
The same solution for selecting part of the changes in a commit is used by jj amend -i (move into parent of specified commit, from working-copy commit by default), jj move -i --from <rev> --to <rev> (move changes between arbitrary commits) etc.
I use git revise. Interactive revise is just like interactive rebase, except that it has a cut subcommand. This can be used to split a commit by selecting and editing hunks like git commit -p.
Before git-revise, I used to manually undo part of the commit, commit that, then revert it, and then squash the undo-commit into the commit to be split. The revert-commit then contains the split-off changes.
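A sketch of that manual dance, for anyone curious (assuming the commit to split is currently HEAD):

# 1. edit the files to back out the part you want to split off, then commit that
git commit -a -m "undo: part to split off"
# 2. revert the undo commit; the revert contains exactly the split-off changes
git revert HEAD
# 3. squash the undo commit into the original (mark it as fixup in the todo list)
git rebase -i HEAD~3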
I don’t know, I find it useful. Maybe if git built in Mercurial’s “place changes into a commit that isn’t the most recent” amend thing then I might have an easier time doing things, but just staging up relevant changes in a patch-based flow is pretty straightforward and helpful IMO
I wonder if this would be as controversial if patching was the default
What purpose does it serve that wouldn’t also be served by first-class rollback and an easier way of collapsing changesets on their way upstream? I find that most of the benefits of smaller changesets disappear when they don’t have commit messages, and when using the staging area for this you can only rollback one step without having to get into the hairy parts of git.
The staging area is difficult to work with until you understand what’s happening under the hood. In most version control systems, an object under version control would be in one of a handful of states: either the object has been cataloged and stored in its current state, or it hasn’t. From a DWIM standpoint, a new git user would expect git add to catalog and store the object in its current state. With the stage, you can stage, and change, stage again, and change again. I’ve used this myself to logically group commits so I agree with you that it’s useful. But I do see how it breaks people’s DWIM view on how git works.
Also, if I stage, and then change, is there a way to have git restore the file as I staged it if I haven’t committed?
Also, if I stage, and then change, is there a way to have git restore the file as I staged it if I haven’t committed?
git restore .
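That is, something like this (hypothetical file name):

git add foo.c       # stage the version you like
# ...further edits to foo.c that turn out to be a dead end...
git restore foo.c   # the working copy goes back to the staged version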
I’ve implemented git from scratch. I still find the staging area difficult to use effectively in practice.
Try testing your staged changes atomically before you commit. You can’t.
A better design would have been an easy way to unstage, similar to git stash but with range support.
Interesting, that would solve the problem. I’m surprised I’ve not come across that before.
In terms of “what’s wrong with the staging area”, what I was suggesting would work better is to have the whole thing work in reverse. So all untracked files are “staged” by default and you would explicitly un-stage anything you don’t want to commit. Firstly this works better for the 90% use-case, and compared to this workaround it’s a single step rather than 2 steps for the 10% case where you don’t want to commit all your changes yet.
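You can approximate that inverted flow with today’s git, though it costs an extra step; a sketch (the path is hypothetical):

git add -A                     # start from "everything staged"
git reset -- debug_flags.cfg   # explicitly un-stage the one thing you don't want to commit
git commit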
The fundamental problem with the staging area is that it’s an additional, hidden state that the final committed state has to pass through. But that means that your commits do not necessarily represent a state that the filesystem was previously in, which is supposed to be a fundamental guarantee. The fact that you have to explicitly stash anything to put the staging area into a knowable state is a bit of a hack. It solves a problem that shouldn’t exist.
The way I was taught this, the way I’ve taught this to others, and the way it’s represented in at least some guis is not compatible.
I mean, sure, you can have staged and unstaged changes in a file and need to figure it out for testing, or unstage parts, but mostly it’s edit -> stage -> commit -> push.
That feels, to me and to newbies who barely know what version control is, like a logical additive flow. Tons of cases you stage everything and commit so it’s a very small operation.
The biggest gripe may be devs who forget to add files in the proper commit, which makes bisect hard. Your case may solve that for sure, but I find it a special case of bad guis and sloppy devs who do that. Also at some point the fs layout gets fewer new files.
Except that in a completely linear flow the distinction between edit and stage serves no purpose. At best it creates an extra step for no reason and at worst it is confusing and/or dangerous to anyone who doesn’t fully understand the state their working copy is in. You can bypass the middle state with git add .; git commit and a lot of new developers do exactly that, but all that does is pretend the staging state doesn’t exist.
Staging would serve a purpose if it meant something similar to pushing a branch to CI before a merge, where you have isolated the branch state and can be assured that it has passed all required tests before it goes anywhere permanent. But the staging area actually does the opposite of that, by creating a hidden state that cannot be tested directly.
As you say, all it takes is one mistake and you end up with a bad commit that breaks bisect later. That’s not just a problem of developers being forgetful, it’s the bad design of the staging area that makes this likely to happen by default.
I think I sort of agree but do not completely concur.
Glossing over the staging can be fine in some projects and dev sloppiness is IMO a bigger problem than an additive flow for clean commits.
These are societal per-project issues – what’s the practice or policy or mandate – and thus they could be upheld by anything, even using the undo buffer for clean commits like back in the day. Which isn’t to say you never gotta do trickery like that with Git, just that it’s a flow that feels natural and makes undo trickery less common.
Skimming the other comments, maybe jj is more like your suggestion, and I wouldn’t mind “a better Git”, but I can’t be bothered when e.g. gitless iirc dropped the staging area and would make clean commits feel like 2003.
If git stash --keep-index doesn’t do what you want then you could help further the conversation by elaborating on what you want. It’s usually not that hard.
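For the record, the usual --keep-index recipe for testing exactly what’s staged looks roughly like this (make test stands in for whatever your test command is):

git add -p                    # stage what you intend to commit
git stash push --keep-index   # stash the rest; the working tree now matches the index
make test                     # test exactly what will be committed
git commit
git stash pop                 # bring back the unstaged work (usually merges cleanly, since the staged part is now in HEAD)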
https://lobste.rs/s/yi97jn/is_it_time_look_past_git#c_ss5cj3
The underlying data model however is pretty good. We can probably ditch the staging area,
Absolutely not. The staging area was a godsend coming from Subversion – it’s my favorite part of git bar none.
Everyone seems to suppose I would like to ditch the workflows enabled by the staging area. I really don’t. I’m quite sure there are ways to keep those workflows without using a staging area. If there aren’t, well… I can always admit I was wrong.
Well, what I prize being able to do is to build up a commit piecemeal out of some but not all of the changes in my working directory, in an incremental rather than all-in-one-go fashion (i.e. I should be able to form the commit over time and I should be able to modify a file, move its state into the “pending commit” and continue to modify the file further without impacting the pending commit). It must be possible for any commit coming out of this workflow to both not contain everything in my working area, and to contain things no longer in my working area. It must be possible to diff my working area against the pending commit and against the last actual commit (separately), and to diff the pending commit against the last actual commit.
You could call it something else if you wanted but a rose by any other name etc. A “staging area” is a supremely natural metaphor for what I want to work with in my workflow, so replacing it hardly seems desirable to me.
How about making the pending commit an actual commit? And then adding the porcelain necessary to treat it like a staging area? Stuff like git commit -p foo if you want to add changes piecemeal.
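Sketching what that porcelain could look like with plain git today (the WIP message is just a placeholder, and I believe -p combines with --amend in current git, so treat this as a sketch rather than a recommendation):

git commit -p -m WIP    # start the "pending commit" from selected hunks
# ...keep editing...
git commit -p --amend   # fold more hunks into the pending commit
git commit --amend      # once it's complete, write the real message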
No. That’s cool too and is what tools like git revise and git absorb enable, but making it an actual commit would have other drawbacks: it would imply it has a commit message and passes pre-commit hooks and things like that. The staging area is useful precisely for what it does now—help you build up the pieces necessary to make a commit. As such it implies you don’t have everything together to make a commit out of it. As soon as I do, I commit; then if necessary --amend, --edit, or git revise later. If you don’t make use of workflows that use staging then feel free to use tooling that bypasses it for you, but don’t try to take it away from the rest of us.
pre-commit hooks
Oh, totally missed that one. Probably because I’ve never used it (instead I rely on CI or manually pushing a button). Still, that’s the strongest argument so far, and I have no good solution that doesn’t involve an actual staging area there. I guess it’s time to change my mind.
I think the final word is not said. These tools could also run hooks. It may be that new hooks need to be defined.
Here is one feature request: run git hooks on new commit
I think you missed the point: my argument is that the staging area is useful as a place to stage stuff before things like commit-related hooks get run. I don’t want tools like git revise to run pre-commit hooks. When I use git revise the commit has already been made and has presumably passed the pre-commit phase.
For the problem that git revise “bypasses” the commit hook when using it to split a commit, I meant the commit hook (not the pre-commit hook).
I get that the staging area lets you assemble a commit before you can run the commit hook. But if this was possible to do statelessly (which would only be an improvement), you could do without it. And for other reasons, git would be so much better without this footgun:
Normally, you can look at git diff and commit what you see with git commit -a. But if the staging area is clobbered, which you might have forgotten, you also have invisible state that sneaks in!
Normally, you can look at git diff and commit what you see with git commit -a.
Normally I do nothing of the kind. I might have used git commit -a a couple times in the last 5 years (and I make dozens to hundreds of commits per day). The statefulness of the staging area is exactly what benefits my workflow and not the part I would be trying to eliminate. The majority of the time I stage things I’m working on from my editor one hunk at a time. The difference between my current buffer and the last git commit is highlighted, and after I make some progress I start adding related hunks and shaping them into commits. I might fiddle around with a couple things in the current file, then when I like it stage up pieces into a couple different commits.

The most aggressive I’d get is occasionally (once a month?) coming up with a use for git commit -u.
A stateless version of staging that “lets you assemble a commit” sounds like an oxymoron to me. I have no idea what you think that would even look like, but a state that is neither the full contents of the current file system nor yet a commit is exactly what I want.
Why not allow an empty commit message, and skip the commit hooks if a message hasn’t been set yet?
Why deliberately make a mess of things? Why make a discrete concept of a “commit” into something else with multiple possible states? Why not just use staging like it is now? I see no benefit to jury-rigging more states on top of a working one. If the point is to simplify the tooling you won’t get there by overloading one clean concept with an indefinite state and contextual markers like “if commit message empty then this is not a real commit”.
Again, what’s the benefit?
Sure, you could awkwardly simulate a staging area like this. The porcelain would have to juggle a whole bunch of shit to avoid breaking anytime you merge a bunch of changes after adding something to the fake “stage”, pull in 300 new commits, and then decide you want to unstage something, so the replacement of the dedicated abstraction seems likely to leak and introduce merge conflict resolution where you didn’t previously have to worry about it, but maybe with enough magic you could do it.
But what’s the point? To me it’s like saying that I could awkwardly simulate if, while and for with goto, or simulate basically everything with enough NANDs. You’re not wrong, but what’s in it for me? Why am I supposed to like this any better than having a variety of fit-for-purpose abstractions? It just feels like I’d be tying one hand behind my back so there can be one less abstraction, without explaining why having N-1 abstractions is even more desirable than having N.
Seems more like a “foolish consistency is the hobgoblin of little minds” desire than anything beneficial, really.
Again, what’s the benefit?
Simplicity of implementation. Implementing the staging area like a commit, or at least like a pointer to a tree object, would likely make the underlying data model simpler. I wonder why the staging area was implemented the way it is.
At the interface level however I’ve had to change my mind because of pre-commit hooks. When all you have is commits, and some tests are automatically launched every time you commit anything, it’s pretty hard to add stuff piecemeal.
Yes, simplicity of implementation and UI. https://github.com/martinvonz/jj (mentioned in the article) makes the working copy (not the staging area) an actual commit. That does make the implementation quite a lot simpler. You also get backups of the working copy that way.
Simplicity of implementation.
No offence but, why would I give a shit about this? git is a tool I use to enable me to get other work done, it’s not something I’m reimplementing. If “making the implementation simpler” means my day-to-day workflows get materially more unpleasant, the simplicity of the implementation can take a long walk off a short pier for all I care.
It’s not just pre-commit hooks that get materially worse with this. “Staging” something would then have to have a commit message, I would effectively have to branch off of HEAD before doing every single “staging” commit in order to be able to still merge another branch and then rebase it back on top of everything without fucking about in the reflog to move my now-buried-in-the-past stage commit forward, etc, etc. “It would make the implementation simpler” would be a really poor excuse for a user-hostile change.
If “making the implementation simpler” means my day-to-day workflows get materially more unpleasant, the simplicity of the implementation can take a long walk off a short pier for all I care.
I agree. Users shouldn’t have to care about the implementation (except for minor effects like a simpler implementation resulting in fewer bugs). But I don’t understand why your workflows would be materially more unpleasant. I think they would actually be more pleasant. Mercurial users very rarely miss the staging area. I was a git developer (mostly working on git rebase) a long time ago, so I consider myself a (former) git power user. I never miss the staging area when I use Mercurial.
“Staging” something would then have to have a commit message
Why? I think the topic of this thread is about what can be done differently, so why would the new tool require a commit message? I agree that it’s useful if the tool lets you provide a message, but I don’t think it needs to be required.
I would effectively have to branch off of HEAD before doing every single “staging” commit in order to be able to still merge another branch and then rebase it back on top of everything without fucking about in the reflog to move my now-buried-in-the-past stage commit forward
I don’t follow. Are you saying you’re currently doing the following?
git add -p
git merge <another branch>
git rebase <another branch>
I don’t see why the new tool would bury the staging commit in the past. That’s not what happens with Jujutsu/jj anyway. Since the working copy is just like any other commit there, you can simply merge the other branch with it and then rebase the whole stack onto the other branch after.
I’ve tried to explain a bit about this at https://github.com/martinvonz/jj/blob/main/docs/git-comparison.md#the-index. Does that help clarify?
Mercurial users very rarely miss the staging area.
Well, I’m not them. As somebody who was forced to use Mercurial for a bit and hated every second of it, I missed the hell out of it, personally (and if memory serves, there was later at least one inevitably-nonstandard Mercurial plugin to paper over this weakness, so I don’t think I was the only person missing it).
I’ve talked about my workflow elsewhere in this thread, I’m not really interested in rehashing it, but suffice to say I lean on the index for all kinds of things.
Are you saying you’re currently doing the following? git add -p git merge
I’m saying that any number of times I start putting together a commit by staging things on Friday afternoon, come back on Monday, pull in latest from main, and continue working on forming a commit.
If I had to (manually, since we’re discussing among other things the assertion that you could eliminate the stage because it’s pointless, and that you could “just” commit whenever you want to stage and revert the commit whenever you want to unstage) commit things on Friday, forget I’d done so on Monday, pull in 300 commits from main, and then whoops I want to revert a commit 301 commits back so now I get to back out the merge and etc etc, this is all just a giant pain in the ass to even type out.
Does that help clarify?
I’m honestly not interested in reading it, or in what “Jujutsu” does, as I’m really happy with git and totally uninterested in replacing it. All I was discussing in this thread with Loup-Vaillant was the usefulness of the stage as an abstraction and my disinterest in seeing it removed under an attitude of “well you could just manually make commits when you would want to stage things, instead”.
I’m honestly not interested in reading it, or in what “Jujutsu” does
Too bad, this link you’re refusing to read is highly relevant to this thread. Here’s a teaser:
As a Git power-user, you may think that you need the power of the index to commit only part of the working copy. However, Jujutsu provides commands for more directly achieving most use cases you’re used to using Git’s index for.
What “jujutsu” does under the hood has nothing whatsoever to do with this asinine claim of yours, which is the scenario I was objecting to: https://lobste.rs/s/yi97jn/is_it_time_look_past_git#c_k6w2ut
At this point I’ve had enough of you showing up in my inbox with these poorly informed, bad faith responses. Enough.
I was claiming that the workflows we have with the staging area, we could achieve without. And Jujutsu here has ways to do exactly that. It has everything to do with the scenario you were objecting to.
Also, this page (and what I cited specifically) is not about what jujutsu does under the hood, it’s about its user interface.
No offence but, why would I give a shit about [simplicity of implementation]?
It’s because people don’t give a shit that we have bloated (and often slow) software.
And it’s because of developers with their heads stuck so far up their asses that they prioritize their implementation simplicity over the user experience that so much software is actively user-hostile.
Let’s end this little interaction here, shall we.
Sublime Merge is the ideal git client for me. It doesn’t pretend it’s not git like all other GUI clients I’ve used so you don’t have to learn something new and you don’t unlearn git. It uses simple git commands and shows them to you. Most of git’s day-to-day problems go away if you can just see what you’re doing (including what you’ve mentioned).
CLI doesn’t cut it for projects of today’s size. A new git won’t fix that. The state of a repository doesn’t fit in a terminal and it doesn’t fit in my brain. Sublime Merge shows it just right.
I like GitUp for the same reasons. Just let me see what I’m doing… and Undo! Since it’s free, it’s easy to get coworkers to try it.
I didn’t know about GitUp but I have become a big fan of gitui as of late.
I use Fork for the same purpose and the staging area has never been a problem since it is visible and diffable at any time, and that’s how you compose your commits.
See Game of Trees for an alternative to the git tool that interacts with normal git repositories.
Have to agree with others about the value of the staging area though! It’s the One Big Thing I missed while using Mercurial.
Well, on the one hand people could long for a better way to store the conflict resolutions to reuse them better on future merges.
On the other hand, of all approaches to DAG-of-commits, Git’s model is plain worse than the older/parallel ones. Git is basically intended to lose valuable information about intent. The original target branch of the commit often tells as much as the commit message… but it is only available in reflog… auto-GCed and impossible to sync.
Half of my branches are called werwerdsdffsd. I absolutely don’t want them permanently burned in the history. These scars from work-in-progress annoyed me in Mercurial.
Honestly I have completely the opposite feeling. Back in the days before git crushed the world, I used Mercurial quite a lot and I liked that Mercurial had both the ephemeral “throw away after use” model (bookmarks) and the permanent-part-of-your-repository-history model (branches). They serve different purposes, and both are useful and important to have. Git only has one and mostly likes to pretend that the other is awful and horrible and nobody should ever want it, but any long-lived project is going to end up with major refactoring or rewrites or big integrations that they’ll want to keep some kind of “here’s how we did it” record to easily point to, and that’s precisely where the heavyweight branch shines.
And apparently I wrote this same argument in more detail around 12 years ago.
This is a very good point. It would be interesting to tag and attach information to a group of related commits. I’m curious about the Linux kernel workflows. If everything is an emailed patch, maybe features are done one commit at a time.
If you go further, there are many directions to extend what you can store and query in the repository! And of course they are useful. But even the data Git forces you to have (unlike, by the way, many other DVCSes where if you do not want a meaningful name you can just have multiple heads in parallel inside a branch) could be used better.
I can’t imagine a scenario where the original branch point of a feature would ever matter, but I am constantly sifting through untidy merge histories that obscure the intent.
Tending to your commit history with intentionality communicates to reviewers what is important, and removes what isn’t.
It is not about the point a branch started from. It is about which of the recurring branches the commit was in. Was it in quick-fix-train branch or in update-major-dependency-X branch?
The reason why this isn’t common is because of GitHub more than Git. They don’t provide a way to use merge commits that isn’t a nightmare.
When I was release managing by hand, my preferred approach was rebasing the branch off HEAD but retaining the merge commit, so that the branch commits were visually grouped together and the branch name was retained in the history. Git can do this easily.
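For reference, a sketch of that release flow (branch names are hypothetical):

git checkout feature
git rebase main             # replay the branch on top of main's HEAD
git checkout main
git merge --no-ff feature   # keep the merge commit so the branch's commits stay grouped in history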
I never understood the hate for Git’s CLI. You can learn 99% of what you need to know on a daily basis in a few hours. That’s not a bad time investment for a pivotal tool that you use multiple times every day. I don’t expect a daily driver tool to be intuitive, I expect it to be rock-solid, predictable, and powerful.
This is a false dichotomy: it can be both (as Mercurial is). Moreover, while it’s true that you can learn the basics to get by with in a few hours, it causes constant low-level mental overhead to remember how different commands interact, what the flag is in this command vs. that command, etc.—and never mind that the man pages are all written for people thinking in terms of the internals, instead of for general users. (That this is a common failing of man pages does not make it any less a problem for git!)
One way of saying it: git has effectively zero progressive disclosure of complexity. That makes it a continual source of paper cuts at minimum unless you’ve managed to actually fully internalize not only a correct mental model for it but in many cases the actual implementation mechanics on which it works.
Its manpages are worthy of a parody: https://git-man-page-generator.lokaltog.net
Its predecessors CVS and svn had much more intuitive commands (even if they were clumsy to use in other ways). DARCS has been mentioned many times as being much easier to use as well. People migrating from those tools really had a hard time, especially because git changed the meanings of some commands, like checkout.
Then there were some other tools that came up around the same time or shortly after git but didn’t get the popularity of git like hg and bzr, which were much more pleasant to use as well.
I think the issues people have are less about the CLI itself and more about how it interfaces with the (for some developers) complex and hard to understand concepts at hand.
Take rebase for example. Once you grok what it is, it’s easy, but trying to explain the concept of replaying commits on top of others to someone used to old school tools like CVS or Subversion can be a challenge, especially when they REALLY DEEPLY don’t care and see this as an impediment to getting their work done.
I’m a former release engineer, so I see the value in the magic Git brings to the table, but it can be a harder sell for some :)
The interface is pretty bad.
I would argue that this is one of the main reasons for git’s success. The CLI is so bad that people were motivated to look for tools to avoid using it. Some of them were motivated to write tools to avoid using it. There’s a much richer set of local GUI and web tools than I’ve seen for any other revision control system and this was true even when git was still quite new.
I never used a GUI with CVS or Subversion, but I wanted to as soon as I started touching the git command line. I wanted features like PRs and web-based code review, because I didn’t want to merge things locally. I’ve subsequently learned a lot about how to use the git CLI and tend to use it for a lot of tasks. If it had been as good as, say, Mercurial’s from the start then I never would have adopted things like gitx / gitg and GitHub, and it’s those things that make the git ecosystem a pleasant place to be.
The interface of Git and its underlying data models are two very different things, that are best treated separately.
Yes a thousand times this! :) Git’s data model has been a quantum leap for people who need to manage source code at scale. Speaking as a former release engineer, I used to be the poor schmoe who used to have to conduct Merge Day, where a branch gets merged back to main.
There was exactly one thing you could always guarantee about merge day: There Will Be Blood.
So let’s talk about looking past git’s god awful interface, but keep the amazing nubbins intact and doing the nearly miraculous work they do so well :)
And I don’t just mean throwing a GUI on top either. Let’s rethink the platonic ideal for how developers would want their workflow to look in 2022. Focus on the common case. Let the ascetics floating on a cloud of pure intellect script their perfect custom solutions, but make life better for the “cold dark matter” developers which are legion.
I would say that you simultaneously give credit where it is not due (there were multiple DVCSes before Git, and approximately every one had a better data model, and then there are things that Subversion still has better than everyone else, somehow), and ignore the part that actually made your life easier — the efforts of pushing Git down people’s throat, done by Linus Torvalds, spending orders of magnitude more of his time on this than on getting things right beyond basic workability in Git.
Not a DVCS expert here, so would you please consider enlightening me? Which earlier DVCS were forgotten?
My impressions of Mercurial and Bazaar are that they were SL-O-O-W, but they’re just anecdotal impressions.
Well, Bazaar is technically earlier. Monotone is significantly earlier. Monotone has quite an interesting and nicely decoupled data model where the commit DAG is just one thing; changelog, author — and branches get the same treatment — are not parts of a commit, but separately stored claims about a commit, and this claim system is extensible and queryable. And of course Git was about Linus Torvalds speedrunning implementation of the parts of BitKeeper he really really needed.
It might be that in the old days running on Python limited the speed of both Mercurial and Bazaar. Rumour has it that the Monotone version Torvalds found too slow was indeed a performance regression (they had one particularly slow release at around that time; Monotone is not in Python).
Note that one part of what makes Git fast is that it enables some optimisations that systems like Monotone make optional (it is quite optimistic about how quickly you can decide that the file must not have been modified, for example). Another is that it was originally only intended to be FS-safe on ext3… and then everyone forgot to care, so now it is quite likely to break the repository in case of an unclean shutdown mid-operation. Yes, I have damaged repositories that way, to a state where I could not find advice on how to avoid re-cloning to get an even partially working repository.
As for Subversion, it has narrow checkouts, which are a great feature, and DVCSes could also have them, but I don’t think anyone properly has them. You kind of can hack something with remote-automate in Monotone, but probably flakily.
Let the data model pretend there’s a blob for each version of that huge file, even though in fact the software is automatically compressing & decompressing things under the hood.
Ironically, that’s part of the performance problem – compressing the packfiles tends to be where things hurt.
Still, this is definitely a solvable problem.
I used to love DARCS, but I think patch theory was probably the wrong choice.
I have created and maintain the official test suite for Pijul; I am the happiest user ever.
Hmm, knowing you I’m sure you’ve tested it to death.
I guess they got rid of the exponential conflict resolution that plagued DARCS? If so perhaps I should give patch theory another go. Git ended up winning the war before I got around to actually study patch theory, maybe it is sounder than I thought.
Pijul is a completely different thing than Darcs, the current state of a repository in Pijul is actually a special instance of a CRDT, which is exactly what you want for a version control system.
Git is also a CRDT, but HEAD isn’t (unlike in Pijul), the CRDT in Git is the entire history, and that is not a very useful property.
Best test suite ever. Thanks again, and again, and again for that. It also helped debug Sanakirja, a database engine used as the foundation of Pijul, but usable in other contexts.
There are git-compatible alternatives that keep the underlying model and change the interface. The most prominent of these is probably gitless.
I’ve been using git entirely via UI because of that. Much better overview, much more intuitive, less unwanted side effects.
You can’t describe Git without discussing rebase and merge: these are the two most common operations in Git, yet they don’t satisfy any interesting mathematical property such as associativity or symmetry:
Associativity is when you want to merge your commits one by one from a remote branch. This should intuitively be the same as merging the remote HEAD, but Git manages to make it different sometimes. When that happens, your lines can be shuffled around more or less randomly.
Symmetry means that merging A and B is the same as merging B and A. Two coauthors doing the same conflictless merge might end up with different results. This is one of the main benefits of GitHub: merges are never done concurrently when you use a central server.
Well, at least this is not the fault of the data model: if you have all the snapshots, you can deduce all the patches. It’s the operations themselves that need fixing.
My point is that this is a common misconception: no datastructure is ever relevant without considering the common operations we want to run on it.
For Git repos, you can deduce all the patches indeed, but merge and rebase can’t be fixed while keeping a reasonable performance, since the merge problem Git tries to solve is the wrong one (“merge the HEADs, knowing their youngest common ancestor”). That problem cannot have enough information to satisfy basic intuitive properties.
The only way to fix it is to fetch the entire sequence of commits from the common ancestor. This is certainly doable in Git, but merges become O(n) in time complexity, where n is the size of history.
The good news is, this is possible. The price to pay is a slightly more complex datastructure, slightly harder to implement (but manageable). Obviously, the downside is that it can’t be consistent with Git, since we need more information. On the bright side, it’s been implemented: https://pijul.org
no datastructure is ever relevant without considering the common operations we want to run on it.
Agreed. Now, how often do we actually merge stuff, and how far is the common ancestor in practice?
My understanding of the usage of version control is that merging two big branches (with an old common ancestor) is rare. Far more often we merge (or rebase) work units with just a couple commits. Even more often than that we have one commit that’s late, so we just pull in the latest change then merge or rebase that one commit. And there are the checkout operations, which in some cases can occur most frequently. While a patch model would no doubt facilitate merges, it may not be worth the cost of making other, arguably more frequent operations, slower.
(Of course, my argument is moot until we actually measure. But remember that Git won in no small part because of its performance.)
I agree with all that, except that:
the only proper modelling of conflicts, merges and rebases/cherry-picking I know of (Pijul) can’t rely on common ancestors only, because rebases can make some future merges more complex than a simple 3-way merge problem.
I know many engineers are fascinated by Git’s speed, but the algorithm running on the CPU is almost never the bottleneck: the operator’s brain is usually much slower than the CPU in any modern version control system (even Darcs has fixed its exponential merge). Conflicts do happen, so do cherry-picks and rebases. They aren’t rare in large projects, and can be extremely confusing without proper tools. Making these algorithms fast is IMHO much more important from a cost perspective than gaining 10% on an operation already taking less than 0.1 second. I won’t deny the facts though: if Pijul isn’t used more in industry, it could be partly because that opinion isn’t widely shared.
some common algorithmic operations in Git are slower than in Pijul (pijul credit is much faster than git blame on large instances), and most operations are comparable in speed. One thing where Git is faster is browsing old history: the datastructures are ready in Pijul, but I haven’t implemented the operations yet (I promised I would do that as soon as this is needed by a real project).
I’m convinced that this kind of phone screen is used as a way for FAANG companies to weed out people who don’t want to spend tens of hours studying Cracking the Coding Interview. Maybe they’ve found some sort of correlation with those people dedicated to getting into a “prestige” company and how long they will stay at a job, which helps reduce costs associated with hiring and retraining or morale hits from people leaving more often. And because big companies do it, other companies cargo cult the idea even though the reason behind doing them doesn’t fit that.
That’s just speculation, of course, but I’m convinced it’s something like that. There is a time and a place for whiteboard interviews, but trying to move that step to a phone screen makes it worse than worthless.
I’ve recently taken over most of designing the hiring process at my current job, and my top priorities are:
Using Kevin as an example, if he applied for a position it would be clear from his Github and blog that he was competent and motivated. There would be no reason to ask algorithm or whiteboard questions, and an interview would be about seeing if we have aligned goals, and if we do, trying to convince him that he should choose us over any other offers he has.
Maybe some high salary or prestige companies can afford a high rate of false negatives, but algorithm puzzle phone screens and 5-hour take-home exercises for every candidate no matter what is a good way to fall behind on hiring goals and let great candidates slip through your fingers.
It’s only an anecdote, but I did not study for my Google interviews and got offered a job anyway. The technical questions which involved writing code were given in front of a whiteboard; I did not even have the luxury of Coderpad. And I had not taken the opportunity seriously, other than participating; I did not prepare much because I did not think I would get an offer.
I mostly recall your comments on lobste.rs revolving around topics of higher mathematics and FP which, in my opinion, go well beyond a knowledge of an average “coder”. Even though you were not prepared for an interview itself, your above average cleverness might have been picked up by an interviewer? For people closer to the central part of Gauss curve it might require some additional effort, I don’t know :/
Perhaps. But I think that this is hero worship, or at least belief in meritocracy. Also, it implies that Google’s interviews are singularly difficult, which is definitely not the case. In terms of FP, they offered to interview me in Haskell but I chose Python instead since I felt stronger with it.
It is more likely that, due to an adverse childhood, I have internalized certain kinds of critical/lateral/upside-down thinking as normal and typical ways to examine a problem. I was hired as an SRE at Google; I was expected to be merely decent at writing code, but even better at debugging strange problems and patiently fixing services. There were several interview segments where I discussed high-level design problems and didn’t write any code.
As another anecdote, I once failed an interview at Valve so horribly that they stopped the interview after half a day and sent me home.
Maybe some high salary or prestige companies can afford a high rate of false negatives
The cost of a false positive outweighs the cost of ten false negatives, arguably moreso for the smaller place. Not filling a slot is… not filling a slot. It sucks, you keep trying until you find the right match. Filling a slot with the wrong person is a mess. You lose money, you throw the team off its stride, you disrupt the hiring pipeline… really, everyone suffers.
We don’t do algorithm puzzles, but we do give out a take-home coding task as an early screening stage. The tasks are fairly easy (but not ones you’ll find on the internet), well-defined, grounded in reality (usually they’re a toy version of something the team in question actually works with), there’s no time limit, and candidates are allowed to use whatever tools and languages they like, although we do state our preferences. The criteria we judge on are basically:
It’s rare that we even have to make much of a judgment call, because most people fail one of the first two points.
Is a lightweight wasm miner on a website really any worse than surveillance capitalism advertisements and supercookies? If websites had an opt-in, turn off all trackers for the same amount of CPU resources used on a wasm miner, I’d use it every time.
Yes, but I feel this is missing the woods for a particular tree, as you can’t really block arbitrary wasm components.
You can block JS, and block loading certain scripts. You can block and remove elements that make tracking easier like pixels and canvases and iframes and more. Just look at browser extensions like uMatrix or NoScript.
One of the reasons I turn JS off is to reduce wear and tear on my machines. I had my last laptop for 8 years. It ran quickly on about everything except some web applications. I’m not sure if lots of crypto mining will burn out my current CPU more quickly. I’d rather it not happen in the background just in case.
Also, some principle about how, if they’re making money with my hardware, then I should get a cut of it or I block it. Something like that.
Cynical answer: Venture-capital funded Stasi.
More descriptive: Selling and hoarding of people’s private lives for the purposes of making money off of it, for example, advertising.
filter_map is a method I’ve wanted for quite a while. Ruby is a language that knows what it is, and keeps moving in that direction, rather than trying to be all things to all people, and I appreciate that about it.
Can we ban this blog completely? All of the posts are just lazy blogspam summaries of Ruby and Rails features that do little more than restate what the feature is, and the author and invite tree have been banned from lobsters already.
Hm, that’s an interesting idea. Right now the consequence of spamming is “you get banned”, there’s still incentive to spam, because you get hits for the time you’re not banned. But if the consequence is “you get banned and nobody can post links to your site ever again”, people might think twice. …Might.
There tends to be a half-life to these things - the community remembers previous infractions, and would probably flag subsequent submissions as spam.
So, are you saying that technical blog posts about framework features are not welcome on lobsters? Is there something in the submission guidelines being violated here? Or are you calling this spam simply because you are not interested in the content? I don’t think “spam” is a defensible categorization.
It is spam because its authors organised a voting/submission ring in order to drive traffic to their company blog; that is, unsolicited content to drive a commercial outcome.
Whether the content is suitable for the site is actually orthogonal to spamminess, but it’s easy to lose sight of that as so little spam is suitable.
Okay, I can understand why over-submission is considered spammy even if the content is fine. I was not aware that there was a coordinated submission scheme happening.
The most discussed articles have always been “non-tech”, because “tech” as you define it is not conducive to discussion. You can have Q&A and more links, but those produce fewer comments than discussion does.
That’s a good point; it reminds me of survivorship bias. There very well may be lots of technical content we just don’t see because it’s not as conducive to discussion (and consequently not featured on the front page).
You can be on the front page without discussion. For example, my post here got >20 upvotes without any comments.
Maybe it would be better to give a small negative weight to the number of comments, rather than a positive one. It could be weighted in a way that wouldn’t affect high-vote posts with a moderate number of comments. Instead it would elevate good technical posts that get lots of upvotes but don’t have many points of discussion, and diminish the staying power of argumentative topics.
The negative weight to number of comments could be disabled for certain tags like ask and meta, since the value of those comes from the comment section.
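To make that concrete, here’s a purely hypothetical sketch; the weights, constant names, and tag handling below are made up for illustration and are not lobste.rs’ actual ranking code:

```ruby
# Hypothetical hotness score: upvotes dominate, comments apply a small
# penalty unless the story carries a discussion-oriented tag.
DISCUSSION_TAGS = %w[ask meta].freeze
COMMENT_PENALTY = 0.05 # made-up weight, tiny relative to a single upvote

def score(upvotes:, comments:, tags:)
  exempt = !(tags & DISCUSSION_TAGS).empty?
  penalty = exempt ? 0 : COMMENT_PENALTY * comments
  upvotes - penalty
end

score(upvotes: 25, comments: 3,  tags: ["programming"]) # ≈ 24.85
score(upvotes: 25, comments: 40, tags: ["programming"]) # ≈ 23.0
score(upvotes: 25, comments: 40, tags: ["meta"])        # => 25
```

With a weight that small, a high-vote post with a moderate number of comments barely moves, while an argumentative thread slowly loses its staying power.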
I’ll be studying the industry (logistics) and the programming language (Go) for the job I started this month, neither of which I have any experience in yet. If anyone has recommendations for books or other resources for either of these, please let me know! I have some material, but am looking for more.
Aside from that, my first Judo tournament as a black belt tomorrow, and practicing guitar, which I’m at the extreme-beginner end of.
Compared to most languages I use, Go has excellent documentation. Almost everything under https://golang.org/doc/ is worth skimming if you’ll be using it fulltime for years.
“Aside from that, my first Judo tournament as a black belt tomorrow”
Congratulations on getting your black belt. I know that took a lot of time and effort. :)
Thanks Nick! It did, indeed, but it’s been very rewarding. It’s physically and mentally stimulating in a way I haven’t experienced otherwise, and it’s the best offline community I’ve ever been a part of as an adult.
The most impactful part of this for me was:
I took a punt and searched Google for “privacy respecting commenting systems” as I wasn’t having much luck with DuckDuckGo.
That hits home, ouch. I’ve had so many searches that were just not coming up relevant on DDG where I’ve had to sigh and throw in that “!g”
If you’re using Firefox, consider adding a “keyword search” for Google, so that you can just type “g something something” in the url bar, instead of routing your request through ddg.
Unfortunately, you’re right. I’ve tried Startpage a few times (which proxies search results through other search engines, including Google), but I just don’t like the UI/UX all that much.
I started using Bing, and I’ve had much better luck than DDG and have forgotten that I’m not using Google. In fact, the results are sometimes better as they don’t have as much direct manipulation.
There’s also Startpage if you want Google search results without the tracking.
On Firefox, bypass DuckDuckGo to hit Startpage directly from address bar with search keywords. I’ve been using that ever since a Lobster (@freddyb maybe?) told us about it. My default search is DuckDuckGo. If results are crap, I put sp in address bar followed by the search terms for Startpage results. If they are crap, then I go with !G for Google results.
Yeah, that actually happens sometimes, despite Startpage supposedly using the same Google tech in the background. They give different results on some of my queries, with Google instantly taking me to what I need when Startpage was nowhere near it. (shrugs) DuckDuckGo works most of the time, though.
I think you will find it was more a transaction than a give-away, they traded some data for some data ;)
Microsoft has less of an overall tracking footprint, and again their results are better. Just throwing it out there as a suggestion for people like myself who aren’t satisfied with DDG and don’t want to move back to Google.
Microsoft has less of an overall tracking footprint,
Do you just mean JS file size? I’m curious about this claim and how it’s quantified.
A lot more webpages are running Google Analytics than are running any Microsoft code. Microsoft just has less data to triangulate on me.
Unions would help remove this chicanery from the already-grossly-imbalanced employer/employee relationship.
plugging https://techworkerscoalition.org
Probably, but unions aren’t a panacea. Unions can lead to some bad things too. From my experience I’m not sure which I prefer.
Unions are not a panacea, as in they don’t solve all problems and even come with some of their own, but they exist to deal with exactly this situation. This feels a little like discussing an article about nails and pointing out that hammers aren’t a panacea.
“Unions” as a concept is mainly just “allowing workers to form a structure to gain negotiation power”. There are thousands of implementations of that concept, which makes it as hard to have a discussion about “unions” at that level as it is for “political parties” or “enterprises” globally.
Even as a business owner, I got a lot of support from my union, which makes me quite happy with my particular implementation, but I appreciate there’s tons of problems even in mine, specifically around my business.
Wouldn’t the logical conclusion be to prefer power being more dispersed between employers/unions, rather than concentrated? We should fight concentration of power, as it is always weaponized against us.
The only argument I can think of is that the union can interfere at times when they aren’t strictly necessary.
No, the logical conclusion is to prefer power being more dispersed between employers and employees. Unions are one way of achieving that, but they present their own problems which ought to be considered. Unions can pervert incentives. Where I work (UAW) there is no reason to work harder because you will never get ahead from it (unless you’re trying to be a supervisor, but there aren’t many of those). Sometimes people will work really slowly to make sure they don’t do better than the standard. Our supervisors won’t even tell us “good job today” unless it’s in private, because they’re worried about the union. It’s also basically impossible to fire someone, no matter how bad they are at their job. I’m not sure we make more money either; the last place I worked started a bit lower, but pay increased the longer you worked there.
I pointed out that they aren’t a panacea because I get the impression that the people advocating for them in tech have no experience with them and aren’t fully considering the implications. But maybe UAW is the only one with these problems.
Individual approaches can pervert incentives just as much. They can, for example, lead to situations where motivations are entirely self-driven. I’ve seen many projects put on hold because people’s personal goals for their next raise didn’t align with the goals of their team. That included cases where the personal goal had become meaningless over the year!
I’m not saying unionisation cannot lead to weird situations such as yours, but that’s also usually a sign of a business where things are still going well. For example, when talking to union people in the insurance sector in Germany, they are keenly aware that the whole sector is being automated, so part of their current strategy is bargaining with the employers to move people into other parts of the company or into half-time positions instead of being fired outright. Those are very knowledgeable people with a high interest in finding a good solution for all sides.
I do agree with your reading of union advocacy in tech: most of the time, employees figure out they want to unionise when they already have an open conflict with their upper management. Even if they manage, they will be in a situation where management gives them no quarter (why should they, they didn’t before), and that will also lead to management never giving a win away again.
This threat, along with wage suppression and employee control in general, is why big tech is pushing for more H1Bs for India.
On the fiction side, I was recently recommended Nick Cole’s speculative fiction work and I’ve been tearing through it. Soda Pop Soldier was the first one of his I read, and it’s like if Ready Player One was better written, more believable, and removed 97% of the nostalgia wankery. The second book in the series was equally entertaining, and I’m already a book into the Galaxy’s Edge series that he is writing with Jason Anspach; it’s a page-turner of an intense military science fiction book. Looking forward to getting into the rest of it.
I have a couple of non-fiction books going at the moment, as well:
The Ecotechnic Future by John Michael Greer - extremely interesting perspective on current human ecology and the likely direction of it in the future as we transition from an industrial society to a post-industrial society. Definitely a book to take in slowly to appreciate.
Estrogeneration by Anthony G. Jay - A well-researched overview of estrogenic sources in our food and environment that throw off our hormonal balance, summaries of the studies that back it up, and practical recommendations for avoiding them.
I have a cheap VPS from Contabo that hosts all of my hobby and personal sites. They’re all Elixir applications, deployed using edeliver.
Wrapping up some code I’ve been working on for fantasy baseball valuations. For years I’ve been writing little scrapers and scripts that have made me thousands of dollars just in my own games, but over 2019 I want to take some of my more innovative tools and productize them for the 2020 season. There are some really great sites and tools out there already, but they all have a few blind spots, and that’s where I’ve gotten my edge over the years.
I currently live outside the US, so I can’t play daily fantasy any more (where I first made more than $1k in a season). I’m restricted to just playing season-long leagues that require a fair bit of hands-on management for each league individually. That makes it more scalable to sell a product rather than try to play 100 leagues myself.
I’m not familiar with fantasy baseball, or indeed fantasy just about anything. Is the money also fantasy? Or are you making real money scripting a game?
Does IO count as a Little Language? It has some use outside of being a toy language, but it’s extremely simple and transparent.
I’m not sure. I don’t have a solid idea myself of what is and isn’t a little language in the sense of that post. What I was thinking is that the smallness of the language should be intended to optimize the language as a tool for learning. So not all small languages fall into this category just by virtue of their smallness.
People who’re participating, what languages are you using? I’m starting to learn Typescript, so I’m thinking of using that for practical reasons, though I’d probably rather do OCaml or something a bit more esoteric.
Last year I used 6 languages, and this year my goal is to use 12! Carp, Fennel, and Reason are some new-to-me languages I hope to touch on. I will likely do most of the problems in clojure because I enjoy manipulating data-structures in it so much, but rarely feel like it’s the practical choice for my projects.
If it is fine with you not to be able to complete a task in 24h, then AoC is a great way to learn a new language.
I’m learning Clojure and liking it so far, so my AoC timing is perfect this year to be full of parentheses.
I did 2015 and 2017 in Racket, and had a pretty good time. I tried 2016 in Erlang, and really struggled.
This year, I’m going to try it with Pony.
I use Perl 5. Life + day job doesn’t give much time for actual programming so I’m really happy to just try to solve problems in a language I know well.
Plus Perl makes parsing the problem input a breeze.
As much as I am tempted to try ReasonML, I wrote a modest amount of Elixir back in 1.2 (three years ago, and stable is now 1.7), and have an upcoming need for it again. Will probably do that to brush up on any changes.
In an attempt to make it challenging, I am going to try to pick up Emacs while doing so. I have heard that the lineage inherited from Erlang provides a decent ecosystem in Emacs for other BEAM languages. Does anyone know if that perception is correct?
Cannot edit my post anymore, but it looks like not only is that the case, there are multiple Elixir-specific packages as well (elixir-mode and alchemist).
I think this year I’ll forego going for leaderboard, and use the opportunity to brush up on my Elixir skills. I haven’t had a good chance to touch it for quite a while, so it should be a good refresher. I really enjoy thinking in and solving problems with the toolset available in Elixir.
Jose Valim is going to stream his own solutions to each challenge one day afterward (in Elixir of course):
http://blog.plataformatec.com.br/2018/11/lets-learn-elixir-together-with-advent-of-code/
Crystal. Figured I should brush up on making actual programs instead of one-off scripts and thus style and “architecture” will be my focus.
Pretty ballsy that a project will try to reclaim that name from the horror that is Crystal Reports. Kudos!
Disclaimer: I’m one of the newest users here, so if my ideas about what’s on-topic don’t line up with the community’s, I understand that that indicates that I should move on, not that the community should change to suit me :-)
How do you think the person who submitted this neat project feels when only a tiny fraction of the replies to their submission even talk about it?
That person probably feels pretty bummed out about the lack of technical discussion. I understand and empathize with their disappointment. But I’m surprised that a discussion of the broader context around the submission would be considered off-topic. Nothing happens in a vacuum.
We’ve seen throughout history that people who are willing to act unethically have an advantage over those who aren’t. Pragmatically, the main things preventing companies from behaving unethically or immorally are the threat of legal repercussions, the consciences of their employees, and the criticism of the general public. You could summarize the latter two as “shame,” and if our venue prohibits that mechanism then we’re effectively siding with the companies that are willing to get ahead at any cost.
Allowing moral policing in comments creates an environment where technical posts and discussion (the lifeblood and differentiator of lobste.rs) will suffer. Two ways this happens include:
It discourages people from submitting interesting technical projects due to potential backlash. I wouldn’t want to miss out on good technical discussion.
It discourages comments and discussion about the project. If the top comment has 95 upvotes and takes up 10 pages of scrolling, it will make it harder to justify commenting on the project technically. If I post, will anyone see it anyway?
A deterioration in the quality of technical discussion will lead to users who care only about the technical content leaving, and thus further deterioration of content.
Whatever your personal views are, I encourage everyone to take a morally nihilistic stance when it comes to making comments here. There are plenty of places to discuss morals, ethics, and politics on the internet, and turning Lobsters into yet another one of these, rather than the best place to find and discuss technical articles on the internet, would be a loss.
Moreover, articles on HN/reddit are usually a superset of what’s here. If an article appears here that you would like to make a political comment on, finding the same article on one of the other sites and joining in the discussion there should not be too onerous.
Of course, that robs those with a strong desire to proselytize of a potential audience so is unlikely to be welcomed.
But I’m surprised that a discussion of the broader context around the submission would be considered off-topic. Nothing happens in a vacuum.
The “broader context” discussion starts with tangents and gets only worse from there. That’s why the SNR on HN is so low, and that’s why I barely read HN.
When I joined lobsters, the unwritten rule was that the focus is (almost) exclusively on technical content. Maybe I imagined that rule? The way it was enforced was with relevant technical tags (and a bit of activism, not unlike what sock is doing here), but once you get broad enough tags (culture, practices, …) it’s bound to get out of hand. Worse yet, comments aren’t tagged like submissions, so there was never a mechanism for enforcing on-topic technical discourse. So that’s getting out of hand too, as more people engage in tangents. And now I’m seeing more and more people who think that anything they upvote or anything they find interesting belongs on the site. IDK what to think.
When I joined lobsters, the unwritten rule was that the focus is (almost) exclusively on technical content.
Even if that’s no longer the case now - I’d certainly like that to become a rule (written or not).
I’d prefer not. Pure technical content is sterile and boring. Read a textbook or subscribe to a journal if that’s your bag.
Technology is only interesting and valuable to humanity where it impacts and has interactions with the humanities.
Although I understand that it’s difficult to judge something without context, I also wish I knew where to draw the line on how broadly or narrowly the context can be discussed. I don’t think that’s even really possible when it comes to convictions and beliefs that are mostly subjective.
C-level executives and board members antagonize employees and threaten unemployment, knowing full well that they’ll never miss a meal or a mortgage payment and that their children will still have health insurance and good schools: freedom.
Workers thinking they should organize to present common concerns to management: not freedom.
Remember, they only call it class warfare when we fight back.
Workers organizing is freedom.
Workers being coerced to join or pay an organization is not freedom.
This is true even when the organization itself exists to protect freedom.
Workers being coerced to join or pay an organization is not freedom.
Yes and no. My being forced to pay taxes isn’t freedom. My living in a society with roads and clean water and educated children (and my own education, which given my home life at the time wouldn’t have happened without compulsory and free education) dramatically increases my overall freedom, far more than was lost by paying taxes.
The power imbalance between most employers and most employees is such that the vast majority of people are almost-serfs in all but name. The tech sector can sometimes forget that because of the high salaries and relatively competitive employment market…but for most people, their health and home are literally tied to the whims of someone who views them as nothing more than expendable labor. Sure they’re “free” to change jobs, but saying “you’re free to risk your children’s health!” isn’t really freedom at all.
Correcting that power imbalance might take away some freedom, but it would add a lot more freedom on the other side of the balance sheet, IMHO.
Universal health care and a strong social safety net is the other way to fix this, if labor unions are determined to be too problematic. That allows you the freedom to change jobs without worrying that you couldn’t pay for your child’s healthcare.
To provide a real example: a friend of mine has a chronically ill daughter. Without health insurance he literally cannot afford to keep his daughter alive. Thanks to the repeated attempts at removal of the preexisting condition clause by the GOP recently, he runs the very real risk that he could end up with his daughter uninsured and potentially in dire straits if he were to lose his job. His employer knows this and, as the provider of his health insurance, could demand literally anything of him. If he were unemployed long enough that he could no longer pay for COBRA between employers, he’d literally be unable to keep his daughter alive. That is not freedom; that it’s not the government who holds the power is immaterial.
(Note that his employer is awesome and doesn’t do anything bad, but that’s not true of everyone and it shouldn’t have to be…)
I think you made a great case for universal healthcare – which can be argued from either side of the political fence. If you lean left, universal healthcare is a right and a true good. If you lean right, universal healthcare drives competition and flexibility and allows people to create new companies and move around more quickly.
That said, I am not sure you made a great case for unions. Unions don’t fix the fundamental problem around healthcare in any form. You still can’t leave to a non-union shop, can’t leave to start your own company, etc without giving it up. If anything it makes it more entrenched.
You seem to be fixating on a single example, not the thrust of his argument. You do realize other first world countries have universal healthcare and wayyy higher union participation than the US? There must be other things unions are useful for.
The point of labour organizing is not ‘freedom’, especially not in the anglo sense of formal freedom on the marketplace, that everyone on the English-speaking internet seems to assume to be the only true and natural kind of freedom there is. It’s merely improving the conditions of labour, nothing more, nothing less.
That said, unions can be terrible because they’re often loci of concessions, nationalism, taming, and other reactionary politics, rather than struggle.
They can have issues. Most of the problems I see are caused by apathy and/or incentives at the top, with the problems they cause being externalities. Unions also seem to stop more problems than they create. They also counter the trend, paid for with political bribes, of making people easy to fire without cause in as many states as possible. That’s on top of executive compensation always going up in companies that “can’t afford” good wages or benefits for production workers. These all lead me to be pro-union in general.
Idk how it is in the US honestly, but that part of my comment wasn’t anti-union in general, just noting that they definitely have limits in a political sense.
However for workers they’re obviously a huge net-positive.
You make a good point. The immediate goal of a union is not freedom of its workers. I think workers unionize, though, because they desire more freedom. Limiting work hours means freedom to choose what to do with the rest of the day, for example.
I agree with you here. I’m all for workers being able to collectively bargain for their own interests, but not at the expense of imposing on the liberty of others.
I don’t mind if my co-workers unionize, but I want to be able to choose my own terms of employment with my employer without having a third party interfere without my consent.
I don’t mind if my co-workers unionize, but I want to be able to choose my own terms of employment with my employer without having a third party interfere without my consent.
You can’t have your cake and eat it too: if the strength of your coworkers’ union results in your employer entering into a favorable health insurance contract with an insurer, are you really going to reject that insurance and try to negotiate your own? Even if the insurance you purchase will invariably be more expensive and will cover you less?
I don’t really care what a third party does with regards to my contractual agreement with my employer. The agreement I enter in is between myself and the company employing me.
In your hypothetical, I may indeed choose to cover myself. It’s hard to guess without actually having the numbers and going through a negotiation. I likely value different things at different levels than a potential union does, and would be better served negotiating based on my preferences rather than letting a group decide the terms of my contract.
Except you would end up with a significantly less favourable contract, as you lack the negotiating leverage of the union.
I don’t understand why you care so much about my contract. It’s up to me to decide what is favorable for me and what isn’t. I have the leverage of my own skills and experience, and the fact that I can take a better offer from a different employer at any time.
I don’t care about you, per se, but if everybody privileges abstract notions of freedom over concrete gains from their employment, you have a collective action problem and everybody ends up strictly worse off.
Strictly worse off by whose definition? I’m under no moral obligation to sacrifice my own values to appease yours. If you’re worried about me not joining your union, then make your union attractive enough that I want to join it over negotiating my contract myself. Don’t force me into a contractual agreement that I never consented to.
If you’re worried about me not joining your union, then make your union attractive enough that I want to join it over negotiating my contract myself.
In all likelihood, it will be attractive - but the benefits it confers will end up available to all employees, not just those in the union.
Now, if the choice was a strict “join the union and receive benefits which it negotiated” or “do not join the union and you are solely responsible for negotiating every part of your employment” I’d be happy, and it sounds like you would be as well.
Would you decline to accept any benefit - work conditions, time off, retirement, etc. - which was negotiated by your workplaces union if you planned to not join the union? If your answer is yes, I’d applaud your consistency.
The problem that @jfb identifies is that most people would say “no” - they’d choose to benefit from things negotiated by a union they’re not a member of.
Now, if the choice was a strict “join the union and receive benefits which have been established” or “do not join the union and you are solely responsible for negotiating every part of your employment” I’d be happy, and it sounds like you would be as well.
Yep, perfectly fine with me.
Would you decline to accept any benefit - work conditions, time off, retirement, etc. - which was negotiated by your workplaces union if you planned to not join the union? If your answer is yes, I’d applaud your consistency.
In a contract negotiation between myself and my employer, it’s impossible for me to know what parts of their offering are influenced by the presence of the union, or to what extent they are. For example, imagine an employer that would negotiate for some sort of health insurance regardless of the existence of a union. If the existence of a union changes that relationship via a change of insurer, I can’t just ignore it and keep whatever insurance plan I chose before the union came in.
I don’t care to take advantage of a union. I won’t take drinks from your “union members” fridge or take breaks on your union schedule and hope nobody notices. I will, however, negotiate the best deal for myself with my employer, and not handicap myself by trying to figure out what I would or would not have access to if the union didn’t exist. The union is an outside agent that I don’t have control over, and the extent to which its benefits extend beyond its members is for the union to figure out.
Would you decline to accept any benefit - work conditions, time off, retirement, etc. - which was negotiated by your workplaces union if you planned to not join the union?
I certainly wouldn’t refuse all time off because the union gets some, but I wouldn’t automatically assume to have the same. If I get three weeks, and the new union contract gives four, then I guess I’m stuck with three.
But observing that the company gives four weeks off is a data point I might consider when asking for more time off. That’s not strictly a union thing, though. If I saw a non union worker getting more time off, I might want that too.
Is that how it works in the non union case? If you hear a coworker got a raise, do you refuse to ask for your own?
My wife has a saying: Good and Evil don’t exist, it’s just selflessness and selfishness.
Eric is talking right around the crux of the matter, but he missed something.
I’m under no moral obligation to sacrifice my own values to appease yours.
Sure you are, buddy. You aren’t under any legal obligation, nor any ethical obligation. The obligation is in fact, a moral obligation.
When you throw your lot in with a group, you are sacrificing some of your autonomy in exchange for the group’s strength. Due to network effects, many groups are stronger than their strongest member, but yes, sometimes a member will become weaker by joining. (I’m ignoring here the second order effects like community respect gained due to being described as selfless, etc.)
EDIT: reworked the bottom, sorry.
Sure you are, buddy. You aren’t under any legal obligation, nor any ethical obligation. The obligation is in fact, a moral obligation.
You and I have very different moral preferences if you think it’s ok to impose your values on someone else without their consent.
When you throw your lot in with a group, you are sacrificing some of your autonomy in exchange for the group’s strength.
When I join a company I am entering an agreement with an employer in which I exchange my labor for (primarily monetary) compensation.
Your assumption that joining a company means joining a subset of coworkers for an unspecified goal of “group strength” seems entirely arbitrary to me.
Look, the simple fact is that unions allow for more favorable price fixing by Labor.
The benefit should be obvious.
You join the work force as a worker, and that makes you a worker. There are social expectations that come with that, and you can be aware of them well before deciding to join the workforce. There’s an unwritten social contract: in the same way that by living in a nation-state you’re implicitly a citizen, by joining the workforce you’re implicitly a worker, and then subject to all the moral obligations that come with it. Most of them are not protected by law, because in non-socialist states one of the goals of the legal system is to repress the worker, but nonetheless you’re held responsible by other workers. Most of the time this just boils down to “he’s such an asshole”, but at other times it has meant more than that, because your action was directly and undeniably hurting your peers.
You and I have very different moral preferences if you think it’s ok to impose your values on someone else without their consent.
Don’t worry, I’m not in a position to compel you! That would be wrong. I may only ask.
But, that isn’t the case we are discussing is it? We are talking about compulsory unionization. Join the union or no job seems to be what they are referencing.
Right, the closed shop. It’s a way to limit individual liberty to allow for stronger collective liberty. I’m perfectly ok with this, but there are those who have a different conception of liberty who might not be. I think it’s totally wrong, but it’s not a nonsensical way to conceptualize the relationships between people.
But the situation where a union is part of the negotiation is not much different from the situation where just you and the employer negotiate. Typically, in a non-unionised company, your boss is heavily restricted by company policy and HR in what they can offer you. Unionization is the same kind of rules, just optimized for other goals.
The notion that you are somehow more free negotiating in non-unionised jobs is - I don’t know - self-deception?
I started at a unionized company which gave me a 20% pay increase that my previous employer was unwilling to match (their competitive offer was 10%, after telling me before I applied that they could not pay me more). Now the union negotiates the annual pay increases for me. I can also negotiate directly with my employer in the sense that my employer can put me into a more senior position, which means I would get more 💰.
If the union contract was bad, I could still negotiate with my employer that they pay me above the union contract. But since the union contract is quite generous and above the typical competition, I would have a hard time negotiating that, in the same way as I would have a hard time negotiating for that kind of a salary at non-unionised competitors.
I could still negotiate with my employer that they pay me above the union contract
It was my understanding this was explicitly not allowed by union agreements. This is because having collective bargaining power and a “union contract” requires that contract to be adhered to by all union members. Can you link me to a union whose rules say you can negotiate individually? How does that even work – so you get a floor but can then ignore the ceiling and push for whatever more you want? Doesn’t that take away a lot of the positive upside of a contract from the employer side?
My union has around 16 pay levels plus some kind of individual component. If you want to earn more than the highest level you can definitely get such a non-union contract (it’s what management gets in any case).
You can also negotiate for being grouped into a different category.
You could also try to be hired as a contractor (this would mean that you have the biggest negotiation freedom).
Anyways the question is pretty theoretical in the sense that the union contract is fairly good and on average better than what individuals with the same competency get on the free market here.
“I don’t mind if my co-workers unionize, but I want to be able to choose my own terms of employment with my employer without having a third party interfere without my consent.”
In our unionized company, everyone gets the benefits and small restrictions that come with the work of the union and its members. Some people think only the union members should get the benefits the union negotiates. We know how badly that might end up, though, especially with fights internal to the company. We don’t push that. We do encourage people to highlight benefits the union brought: ending the firing-without-cause of hard workers; reducing perjury on your references; great health/dental for $25 a month; the right to sleep between shifts (a bit…); paid holidays, sick leave, and vacations; fair-ish, standard pay based on position, experience, and time in the company. I’m not saying they’re the best terms, but they’re better than most competitors’.
That said, I see your position. That people choose competitors to union companies for their different terms supports it a bit. :) I’ve considered letting union people get their negotiated terms while others get theirs. The first thing I ask those people is: “Do you want to work for the least they can pay over minimum wage, overtime without overtime pay, unsafe working conditions (maybe even no bathroom), little to no benefits, and potentially be fired without cause after years of hard work, with bosses giving you no reference or a falsified one? And all while we get the opposite?” Outside high-pay areas like highly-skilled tech, most companies are giving employees as little as they can. Those employees get more commoditized without even being sure they’ll get a job reference for a better job. They might have to endure a lot to get it in some companies. A lot of people don’t have that opportunity.
Now, if you do, there’s another thing to consider. These companies offering you a good deal at some five-to-six-digit wage might be pocketing multiples of that, with folks in suits who do less than you getting a bigger cut, or a higher cut relative to their beneficial work. They will also be paying lobbyists in Washington and at the state level similarly large sums to reduce what you can gain at an individual level. The unions are one of few groups lobbying for people like you. If more technical workers unionized, then there’d be more lobbying effort toward getting such individuals better deals. That sector also has the kind of money where donations and campaigns might bring some serious results in terms of expected compensation, work environment, a better share of I.P. ownership or equity, paid leave (maybe maternity leave), or even better housing in high-rent areas. Again, this may not interest you. I just wanted to mention that the people dealing with you might have been paying politicians to reduce the size of those deals, your perks, or your rights as a worker.
Thanks for the thoughtful response. Your company’s union sounds like it’s doing good work, and you’ve done a good job making a case for it. I would not rule out joining a union without looking at the terms of membership, but I would also be extra wary of joining a company that had compulsory union membership.
I don’t have a problem with people making more money than me at the same company, regardless of their beneficial work to pay ratio (which I can’t assess anyway), or what kind of clothes they wear ;)
As for the lobbying question, there is a high chance I would make the ethical judgement not to join a company (or union, for that matter) based on their lobbying efforts.
As an aside, I appreciate your posts and comments on Lobsters in general; anything from nickpsecurity is must-read for me.
I appreciate the kind words! I was hoping some of us could chill the thread a bit. It seems like you just prefer to have more insight into and control over your job and other commitments, while letting other people do their thing. A union shop may or may not be right for you depending on how flexible the terms are for non-members. Glad you would consider turning down an offer if it supports corruption. Most wouldn’t.
reducing perjury on your references
Are you referring to bad references as a form of punishment? I didn’t realize that was a common enough thing to warrant protection from.
Many poverty- or working-class people I know have either experienced it or had to mitigate it with careful exits, knowing it could happen. The middle-class-and-up folks, with more to lose or carry with them, usually play exits safe because they know it can happen. I don’t know how often it actually does happen to them, though. I know there are laws in some countries where employers have to give you references without any badmouthing. Apparently, it happened enough to make laws against it over there.
Not here, though. Still can get hit with the shit.
this is slightly tangential to the direction this went in, but I’m curious. Why? Bargaining as a group is always more advantageous than doing so individually.
I’ve worked in a unionized industry; it’s not the utopia you make it out to be. While the average income may be higher under collective bargaining, this is done by making some people worse off than they would be under individual bargaining.
There’s also a huge issue with people who really should be fired, but who aren’t because of the overhead imposed by the union. Honestly, I’d prefer to work for less remuneration than to work with under performers. Particularly when you know those under performers are getting paid the same amount as you. It’s completely demoralizing.
“There’s also a huge issue with people who really should be fired, but who aren’t because of the overhead imposed by the union.”
I don’t have any hard data, only 8 years personal experience working in a (partially) unionised white collar job.
It might vary union by union or company by company, but there are patterns I noticed at the management level. My union won’t protect people who do nothing: only people who work as instructed by management and are written up, suspended, or terminated due to the poor results of management’s plans. There are people at my company who we can’t seem to get rid of. Management uses the union as an excuse, but I’ve seen no use of the established procedures against those workers. It seems management in those areas lets them talk their way out of it, ignores those who argue or intimidate the most, and gets hard on the more compliant workers (aka easy targets or outlets) who probably don’t deserve it.
The performance metrics also suck so badly at this company and a lot of others (including non-union ones) that many workers artificially look like they’re not good workers. Some of these companies fail workers if they don’t achieve arbitrary expectations with no proof the expectations matter (see Office Space), or expectations set by managers without real-world experience. If they do this to everyone or many, then the bad workers just fade into the background of what looks like a problem with everyone. A made-up problem. If the requirements were sensible, then most people would meet them, visibly working at a steady or fast pace (context dependent), with some barely working and some getting way ahead. The bad workers become much easier to identify, discipline, and/or eliminate with a fair baseline.
I’ve talked with people in a few other industries that are unionized. They usually have examples of the above two points happening, mostly coming from top-down, ignore-the-workers management and office politics. I still can’t be sure how much “the union” was responsible for workers being hard to get rid of if management was that inept. It’s all the more believable given how much non-union workers and books on management talk about the same failures. My theory is that most managers and corporate offices suck in a lot of ways, with unions countering them usually in pretty generic ways focused on what members value most. Outside the focus areas, the rest of the dynamic becomes back-and-forth battles with plenty of potential inefficiencies. Companies with competent, take-care-of-workers management usually have fewer of these problems, and their workers don’t ask for unions. Hmm… ;)
I agree. Unions are not a panacea for every issue workers may have with a company, and in fact can cause many of their own.
However, the issues you mention here are also universal:
While the average income may be higher under collective bargaining, this is done by making some people worse off than they would be under individual bargaining.
True! But considering the current state of tech salaries, I think that’s acceptable from a macro level view. I say that as one of those that would likely see a pay decrease under a union contract – I tend to negotiate quite a bit with potential employers.
There’s also a huge issue with people who really should be fired, but who aren’t because of the overhead imposed by the union.
There are two parts of this argument:
and
I think both are false personally, and I don’t think there’s any data to prove either, I’d love to be proven wrong! For the first, I’ve personally found the opposite – the bar to entry for IBEW-NECA was much higher than that for non-unionized electricians, and the bar for firing was extremely clear. For the second, process can add more time, but it can also reduce it by clarifying for all the bar for firing. I find in most tech companies, the standard months of bad perf -> PIP -> eventual firing process can take a long time due to trepidation on the part of all parties.
I think both are false personally, and I don’t think there’s any data to prove either,
I don’t have any hard data, only 8 years personal experience working in a (partially) unionised white collar job. I’d have thought it was rather logical though that unions would, in their capacity of protecting their members, make firing more difficult. Which can be a good thing, but can also be horrible for org culture and performance.
The idea of a union as a quality filter is interesting, and not something I’ve come across. IME, unions will take anyone in their industry who’s willing to pay the fee.
Yeah, I agree largely. If only there was a set standard for unions across the board — unfortunately their independence produces wildly disparate results at the tail. For that reason I can never begrudge too much someone who is against a union in good faith; I can only try to make my case for unionization more persuasive. Thank you!
Unions are typically very strict on safety, and few things are more dangerous in the workplace than an incompetent electrician.
But in an office job, incompetence isn’t dangerous, it’s just useless. Perhaps that accounts for our differing points of view.
I’d have thought it was rather logical though that unions would, in their capacity of protecting their members, make firing more difficult
There are also reasons not to make firing harder, notably the reputational damage that would occur (and which, as evidenced by you, has already occurred :) ).
As others have said, unions tend to make the bar for firing very clear, which also tends to mean bureaucratic. This isn’t a bad thing; bureaucracy is what we use in place of trust when trust is hard to establish or otherwise damaged. It’s also not necessarily a slowdown, as others have pointed out.
It does mean that it’s harder for a manager to fire someone at a whim, or based on a longstanding issue that’s not been written down or communicated. But that’s a good thing. At the very least, documentation helps someone who is fired know why (and therefore what to work on in the next job). At the best, starting the documentation process is enough to turn a bad employee into a productive one.
It also means that it’s harder to fire someone for something that’s inconvenient to the employer, but not the fault of the employee. In some places, for example, it’s very common for union construction sites to have a position called “lift operator”. It’s been used as an example of union waste in the past – it’s just someone who sits in the elevator and presses the buttons for everyone. But that position was originally created for (and is usually still used for) union members who have had injuries or other physical problems which make it hazardous or impossible for them to do mainline construction work.
In a union-free situation, that person would be fired, through no fault of their own.
Bargaining as a group is always more advantageous than doing so individually.
No, it isn’t. I can’t be more clear than that. There are lots of cases where negotiating as an individual is a far more advantageous position: if your values differ from the group’s, if your skills differ from the group’s, if your needs are in direct conflict with the group’s (for example, you want a 20% raise and don’t care if it is taken from $personX because they are bad at their job). This idea that what the group thinks is magically always what is best for you is fundamentally untrue.
Since neither of us has given data yet, I guess I left myself open to being rebutted in this way. There is data showing that on average union workers make more and have better insurance and benefits in general than non-union workers, but since we haven’t applied that to the tech fields yet, I won’t bring it up as proof. Do keep in mind that for non-tech fields, all of the above is already established as true. In addition:
for example, you want a 20% raise and don’t care if it is taken from $personX because they are bad at their job
is not really how raises are ever allocated, and if they were, I think that company needs a union.
Instead I’ll provide three opinions:
There is data showing that on average union workers make more and have better insurance and benefits in general than non-union workers
“average” and “always” are very different – but since this wasn’t the thrust of your argument, we can move on past it.
is not really how raises are ever allocated
This is also simply not true – I have sat in exactly such hard decision-making meetings. People fired, positions collapsed to give raises to other people, whole teams let go to give their budgets to higher-performing teams. You put forth the idea that “this isn’t how raises are ever allocated” when it simply isn’t true, which makes it very hard to have a fair and rational discussion with you. Budgets are, well, budgets, and in bad times hard decisions have to be made.
Letting yourself being lulled … your mind.
Absolutely agree. Tech workers commonly think they are worth more than they are. I suspect the Worth despair poster is commonly applicable: https://i.imgur.com/G7yMiXu.jpg (“Just because you’re necessary doesn’t mean you’re important.”)
The only metric you care about in this instance is salary
No, what I care about is individual interests. Some individuals value salary very highly, others a company car, others vacation, others healthcare, others childcare, and others still more disparate and interesting things. I don’t find fathers, women/mothers, or non-binary folks to be any less individual than “single dudes”.
Anything else widens disparities in worker pay.
The silent implication here is the disparity in worker pay is a bad thing, which I don’t agree with.
I know that a lot of tech folks will rebut this by saying that their work deserves 300k more than their coworkers, but I think that’s probably not true in 99.99% of cases.
Sure, you say 300k to make your strawman seem obviously true – knock an order of magnitude off that number and ask if a reasonable person at the same tier believes they are worth 30k more… hell, even define how you make these “bands” – arbitrary years of experience?
To be more explicit, I only care about the average.
Worth clarifying which average you mean while you’re at it (mean vs median yield quite different answers)
Assuming your interests are the same as the group’s. Even when they are, priorities differ. Everybody wants more pay and more vacation, but which do you care more about? If I want to work 30 hours for 75% pay, will the union negotiated contract offer that flexibility?
It’s more likely to if you are a voting member.
But your employer will be more than happy to reward you for defecting, until the union is gone and they again have leverage.
If you are a part of the union, you get to help decide that. :)
A democratic union would take its workers wants and needs into mind when crafting the contract with the employer. Right now, you can probably only get those benefits by either being very lucky to find a company that supports it, by altering your lifestyle by working on contract, or by earning it after some time proving yourself. Hypothetically, a tech industry with a standardized contract for workers could extend those benefits to all companies, saving you the time of doing one of the above or opening your own business.
I don’t mind if my co-workers unionize, but I want to be able to choose my own terms of employment with my employer without having a third party interfere without my consent.
I’m assuming a lot by your avatar, but my guess is that you stand to gain a lot less from unionization than, for example, a woman of color. In other words, you still want to benefit from a system that rewards white males, even if that means weakening an institution that would bargain for people less well off than you.
You’re assuming a lot more than you think you are.
you stand to gain a lot less from unionization than, for example, a woman of color
A woman of color? Which one? All of them? What color? In what way?
you still want to benefit from a system that rewards white males
What system? Where does it reward white males?
I’m assuming a lot by your avatar
I’m a minority. The company I work for is less than 5% white.
I have no desire to engage with semantic games with you, especially if it’s just going to be screenshotted to Twitter with ad hominem attacks.
Have a good day.
Since I don’t expect you to respond to this message, I’m just posting this to clear my record.
I have not played any semantic games. All I asked you to do was concretely define your statements and back them with something other than conjecture. I can’t argue with someone who doesn’t clarify their own argument.
Ad hominem is an argumentative strategy, which I have not engaged in. I think what you want to say is that I insulted you, which is also false, unless you count “white, male Bay Area resident” as an insult.
In most of human history, the only people who rented themselves for wages were slaves. Up until recently, wage labor was called wage slavery. It takes a certain mental gymnastics to equate ‘consensual contract with employers’ as liberty. Think about how absurd it is to rent your time, especially for creative work like programming for example.
But at the same time, people like getting wages. They don’t know how to make society value their time, so they get an employer to do that instead.
When a few people in society hold all the money (inequality is huge) and the only way they value the rest of society is through wages, then doesn’t it follow that wages are the only realistic way for most people to get money? It’s really the only choice they have. Since wages (a) don’t change any power relations and (b) don’t change any ownership relations, they are an attractive vehicle for the people who hold all the cards. The alternative is people having percent ownership in where they work! That would be lovely.
I suppose I’m less cynical. Money is just a recognition that someone else appreciated what you did, and most people have no idea how to help society, or have no will to risk their own lives to help society. Thus, we get salaried positions with benefits to make sure we are safe and able to live. Wages are just the employer saying they appreciate your contribution to whatever the employer wants to do.
It’s a nice sentiment, but Rome wasn’t built in a day. The system we live in was built piece by piece over a period of time. There are historical reasons why things are the way they are. If you could magically wave a wand and create it anew, would you have wages? Wages are a modern concept anyway. Why not something better?
Or are you saying the system we have is ideology-free and it’s people’s human nature that governs it?
What’s wrong with the way we have constructed work? It has created the modern world, without which we both may never have had this discussion.
So? People in general love to be miserable anyway, so I reject the notion that you can fix misery with something other than wages.
You believe people love to be miserable? That is simply an absurd view, and it allows you to justify vast harm. Did you know, for example, that wage theft (wages stolen from workers) is the largest category of theft? It dwarfs theft by thugs and criminals by a mile.
And if the existence/non-existence of a union depends on whether the company can hire non-union workers, you have to decide between one kind of freedom and another.
I have too many, but most of them are not that time consuming so I can stay up with all of them. I’m in Tokyo, by the way, and love showing friends/acquaintances/strangers some things off the beaten path. Feel free to hit me up if you’re ever in town.
Judo - Twice per week, fantastic exercise, fun and challenging, not a huge time commitment, and my dojo community is great.
Weight training - Time-efficient exercise and time to catch up on audio books. It’s fun to get consistent, measurable strength gains as well. Performance in the gym is also a good proxy for how well I’ve been eating and sleeping recently.
Reading - Audio books, ebooks, and sometimes even physical books. Mostly non-fiction lately, but I go on fiction binges as well sometimes. I’m also starting an online book club so I can share thoughts with people about books that my friend groups don’t read.
Fantasy baseball - I’m very competitive, and fantasy baseball is a great mental challenge. I made a few grand on daily fantasy when it wasn’t so sharp, but now focus on season-long. I play in 4-5 small money leagues yearly and win a good bit more than I lose.
Baseball - Not a time-efficient exercise, but a good excuse to get outside and under the open sky for a few hours at a time.
Video games - I’m not currently playing any games competitively, but I’ve been in the top percentile of players in a few different games. I play more narrative experiences recently (Yakuza 0 being my favorite of the year so far).
Attending live music shows - Experiencing a small (20-300 person) show is something I’ve enjoyed since I was 15. Anything from pop-rock to hip-hop to ska to metal. The next show I’ll go to is a death metal festival at the end of this month (Asakusa Death Fest).
Wrapping up Kindly Inquisitors: The New Attacks on Free Thought, by Jonathan Rauch. It was written 25 years ago, but is vitally relevant today. It’s a fiercely argued treatise that defends the principles of free speech and liberal science. This is certainly one I’ll return to again.
Send me a friend request if you use Goodreads. The friend activity feed is a good way to discover new books, so it’s a good platform for friend collecting.
Interesting article with some insights I wouldn’t have expected. For example, that of the “hardcore” gamer group, women tended to score higher than men on the “completion” aspect and just as high on “power.”
This may just be that, as a whole, women tend to score higher across the board, whereas men are more likely to focus on a few areas and be less interested in others (my personal profile is quite extreme in this way).
Yep. I think it’s useful information for anybody who wants to work on game designs that encourage a healthy mix of players.
So I could track who clicks on the link, of course! /s
The website gives a shortened link to share by default, so that’s what I copied.
I don’t know what it is, but I always really enjoy these problems and am more motivated to do them than any other programming challenge sites. Going to try to do them all in Elixir this year, and challenge myself a bit with Prolog if there are some that seem to lend themselves to it.
As shameful as it is to admit, for me the Christmas stuff really helps out. Even though the story is relatively barebones, imagining the silliness of the circumstances, and getting to write variable names like elfCookieLoad, makes it so much more fun for me than dry algorithm exercises, even when day 5-6 onwards starts to ramp up towards basically the same kinds of problems.
You may have meant to respond to this comment.