I have been working remotely for almost 6 years now. I have ridden the emotional roller coaster as well. Before making a change, I would just advise sitting and thinking about what your transition to remote looked like and what you viewed as positive then and see if those are still true. It is easy to allow familiarity to dull and take those things for granted.
I am sorry for the loneliness. I can’t speak for you, but I have felt it both in an office and remotely, and it led me to find meaning outside of work.
I run a profitable SaaS business on Haskell and Elm that I developed on evenings and weekends. The technology is my not-well-guarded secret to serious productivity. I also run another startup (recently released, not yet making money), again on Haskell and Elm, with a business partner (though I did all the programming).
I’m not a great programmer, but it’s some hilarious joke that I can build two businesses from scratch on my own on evenings and weekends faster than a team of 10 developers can implement one feature in an existing codebase with Clojure and ClojureScript at my day job.
Would love to see a detailed write up of how you did it - the stack, the tradeoffs, the tooling to take it to prod.
I’m not doubting that you might be getting some benefit from your technology choices, but a single reasonably skilled and sufficiently motivated person will always outpace a team of 10. The team might beat you on longevity and stability, though.
Do you have any more data points on that? I don’t assume you’re wrong, but this idea has interesting implications for me as I start hiring people for a different VC-funded project I’m running. As I understand it, conventional wisdom is that more workers means more output, and I’ll need some solid data to convince other people that this is a myth. Is it just The Mythical Man-Month that I should read?
As I was reading your comment, all I could think was “read The Mythical Man-Month.” Read it all, but specifically the essay “The Tar Pit.” The short answer is that it depends on what you are trying to build. To get a program to market, a small team will generally outperform a large one. As the product grows, there are divergent thoughts about how to build effectively. Many have bright shining stars they will hold out as proof that their way is the best way, and plenty of counter-examples about how others failed. To my knowledge, there is no silver bullet (“No Silver Bullet” is also in The Mythical Man-Month), only a process of finding what works for your organization.
I think you bring up a good point, but I would prefer the default behavior to be an error, requiring either --dry-run or --force. Otherwise I can be burned in both directions: I think it did the thing when it only did a dry run, or, at the opposite end, I don’t understand the full effects of my actions and do something very bad.
Agreed, and the error message should be displayed prominently; otherwise there’s little difference between what you suggest and a no-op, apart from the process exit code.
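As a sketch of that default (shell, with a hypothetical `cleanup` command standing in for whatever the destructive operation is):

```shell
# Hypothetical destructive operation: with no flag it refuses loudly,
# so neither a silent no-op nor a silent deletion can happen by accident.
cleanup() {
  case "$1" in
    --dry-run) echo "dry run: would delete 42 records" ;;
    --force)   echo "deleting 42 records" ;;
    *)
      echo "error: refusing to act; pass --dry-run to preview or --force to proceed" >&2
      return 2 ;;
  esac
}

cleanup --dry-run                         # previews, touches nothing
cleanup || echo "refused, exit status $?" # the bare invocation errors out
```

A prominent error plus a distinct exit status keeps it scriptable while making it hard to mistake a refusal for success.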
I have greatly enjoyed The Morning Paper. A CS paper in my inbox (almost) every day.
While I greatly appreciate this perspective, the overall tone is challenging. I do agree strongly that we have a tendency to optimize early and we should resist that urge as much as possible. I also agree that we should have reasons to add complexity, those are great truths that are helpful to be reminded of continually.
I have run applications on bare metal, run my own virtualization on bare metal, configured my own switches and routers, and now run applications in the cloud with and without chef/ansible. The universal experience is that, despite working with some of the most wonderful and talented people I could ask for, we have always run into problems due to infrastructure. There is no perfect solution. I am sure I can find blog posts about why you shouldn’t use x in production because “it didn’t work for me”.
While I do not have personal production experience with Docker, that is likely to change in the next 1 to 2 months. I like the overall idea of abstracting away certain configuration aspects, but I expect there will be tradeoffs - not unlike every time I have changed any piece of infrastructure.
That being said, if it doesn’t work… we can change again. I don’t think I would “regret” it unless there was nothing to learn or take away from the experience.
The biggest issues I have experienced with working remotely are primarily around communication. Distraction is roughly the same at home as in an office for me (I have two young children). There is a difference between working on a remote team and being the remote member of a localized team. I would caution strongly against the latter, as it amplifies communication issues.
One lesson I had to learn was to watch the medium I am using. If I am using Slack, email, or GitHub comments and things get out of hand - I get mad/defensive, or sense someone else doing so - I do my best to hop on a video conference as soon as possible. The written word can only convey so much meaning; video conferences work well to iron out issues and establish safety.
This one is probably me, but if you are remote and using Slack, it might apply to you as well. I have to close Slack, and be OK with doing that, so I can actually think instead of trying to keep up with the conversation. The key, at least for me, is being OK with closing it and not worrying about what other people are thinking.
I also like to be explicit about expectations, especially around communication. If you Slack me, you can expect a response within 2 working hours; if you email me, within 24 hours. If you try to reach me on Facebook, you probably won’t ever hear from me. I have a README document that I link to in my profile so people can see it, and when I meet people for the first time, I share it. It has other things about me as well; it is a recent addition, but one that has so far helped.
Thank you for the experiment and the thoughtful retrospective. An interesting side effect of reading this is increased empathy toward those I view as trolls. I am looking back at times people have commented on my work and trying not to dismiss them solely due to their tone. I hope my troll detector becomes more finely tuned and that I am not throwing the baby out with the bath water.
The book The Haskell School of Expression is similar to SICP in that it has exercises and it is very easy to approach. It works toward building a music notation system in Haskell.
I just found what appears to be a rewrite of the book available online http://haskell.cs.yale.edu/wp-content/uploads/2015/03/HSoM.pdf
This book is my worlds converging, it looks quite interesting.
While you’re in a Scheming frame of mind, I really like The Little Schemer and The Seasoned Schemer. They have an informal presentation style but before you know it you’ve internalized quite a lot.
In a totally different vein, I’ve been reading Type Theory and Formal Proof and I think this is a remarkable text because it makes the subject so accessible and sprinkles in interesting asides about the development of the theory and implementation of type theory along the way.
Thank you, I was scheming to get into those!
A much, much better list is here:
Perhaps for free programming books, but this list includes much more than free and much more than programming. Then again, I am a sucker for a good book list.
That one is sadly only focused on Programming. :(
I can’t say enough good things about this talk, it was one of the highlights of the conference for me.
Hopefully it encourages more people to give ATS a try.
Most of the developers at $WORK are remote, and we’re scattered all across the globe. It’s great! We keep in touch via various chat channels, and keep track of work via Jira issues. Every now and then we get the team together somewhere for both socialising and same-room working. The only real downside I can see is that I’m going to be miserable if I have to go back to a commute to another cubicle farm some day.
I can’t recommend it enough.
I really hope that 100 years from now we’ll look back on the idea of spending an hour driving through heavy traffic each way, every day, with as much disgust and horror as we view drinking from lead pipes in ancient Rome. It’s just so toxic to your psyche.
How do you manage your working hours when you work remotely?
Not the OP, but I do work on an all-remote team.
We just work 9-5 local time. Sometimes timezones and DST conspire to offset a few of us by an hour or two, but it’s not a big deal. We always have a solid ~5 hours of overlap each day anyway.
I don’t, really. I tend to be “at work” from about 7:30am to 5-7pm ish, but I have a few breaks to work out, run errands, or to just go outside and walk around the block or lie in the hammock for 15 minutes to clear my head when I need to. When I’m blocked on a CI build or getting feedback / direction on an issue, I can go sit outside away from the screens. When you don’t have the pressure to look busy that you get in a cube farm, what might normally seem like a long day doesn’t wreck you like it otherwise might. Ditto being able to wear clothes to work that aren’t a constant physical discomfort :)
Also not the OP, but on a fully remote team.
We don’t really manage hours and trust that people are doing the work. Sometimes people will work non-traditional hours but we try to schedule our meetings during overlap hours. I personally am an 8-5er, but others aren’t.
This may seem like a trivial gripe, but it’s been a real issue for me when writing Perl 6: I can’t find an editor whose syntax highlighting can keep up with it. I’m a Vim guy, and Vim’s Perl 6 syntax highlighting is s l o w, causing the editor to lag and making the language a chore to use in that editor.
So um, any suggestions? Perl6 is crazy cool, but I just can’t find a means to write it without getting frustrated at laggy/incorrect syntax highlighting.
Howdy, also a Vim user. I moved to Neovim a while back, and it allows async syntax highlighting through plugins like chromatica. Since leveraging this along with ALE, things have become significantly snappier, especially on larger files.
Currently Atom seems to have the best Perl 6 plugins.
I struggle to see how this one person’s anecdotal experience can suddenly be applied in the macro. The same goes for the author’s opinions on code reviews.
You have your own issues to work on, and my interruption already did some damage by de-focusing you.
What is the rush if you have been working on the code for a week? Is there not some means of putting reviews into a queue and letting people pick them up when it would not interrupt them? We use a Slack channel to post our code reviews to.
Reading code is hard. Nobody really does it.
Yes, it is; that is why it takes time and generally some back and forth to review code well. Just because it is hard does not mean it is not worth doing.
But wait, speaking of mandatory code reviews, did you think about this? So you don’t trust your own developers, and require attention of at least 3 people for a commit to get in.
This isn’t a matter of trust. I want my code to be reviewed because a good review can help me see the problem in a new way and sometimes find a better solution than I initially came up with.
I absolutely love code reviews and don’t think I could work in a place that didn’t care enough to help its developers be the best they can be and the product be the best it can be. This post completely ignores the value of collaboration on solving problems.
As a consumer of this information, I was sad to see it removed. However, I have also worked with and for the deaf at various times over the last 20 years. I will pass along something that happened early on. My deaf coworker was sick and trying to schedule a doctor’s appointment. The soonest they would see her for a sick visit was 3 weeks out. She was frustrated and asked me to call and try to book an appointment with the same symptoms. I did and they were willing to see me the same day.
The lawsuit might seem petty, but these are people who see 20,000 “world-class” university lectures made available for anyone but not for them, when the ADA says they should be accessible. And rightly or not, the experience I shared was not uncommon. I can sympathize somewhat with the school, as it is not a small expense, but at the same time I feel for my friends who are not able to share in any of the wealth of information that I too often take for granted.
I don’t have the full story of this particular case, so it is possible I am missing something. However, I am quite frankly more surprised that Berkeley did not try more creative solutions, like crowd-sourcing captions or transcripts, instead of letting it go to court, paying the lawyers, and then removing everything.
This fucks bisect, defeating one of the biggest reasons version control provides value.
Furthermore, there are tools to easily take both approaches simultaneously. Just `git merge --squash` before you push, and all your work-in-progress diffs get smushed together into one final diff. And, for example, Phabricator even pulls down the revision (pull request equivalent) description, list of reviewers, tasks, etc., and uses that to create a squash commit of your current branch when you run arc land.
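The squash flow can be seen end to end in a throwaway repo. This is a minimal sketch; the branch name, file name, and commit messages are all made up:

```shell
#!/bin/sh
# Three messy WIP commits on a feature branch land on the trunk as one commit.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
trunk=$(git symbolic-ref --short HEAD)   # 'master' or 'main', depending on git version

echo base > file.txt
git add file.txt
git commit -qm 'initial'

git checkout -q -b feature
echo one   >> file.txt; git commit -qam 'wip'
echo two   >> file.txt; git commit -qam 'wip, still broken'
echo three >> file.txt; git commit -qam 'works now'

git checkout -q "$trunk"
git merge --squash feature >/dev/null    # stages the combined diff, commits nothing
git commit -qm 'Add feature (squashed from 3 WIP commits)'
git log --oneline                        # trunk history: two tidy commits
```

Note that `git merge --squash` only stages the changes; the follow-up `git commit` is where you write the one message that will represent the whole branch.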
I’m surprised to hear so many people mention bisect. I’ve tried on a number of occasions to use git bisect, and svn bisect before that, and I don’t think it actually helped me even once. Usually I run into the following problems:
I love the idea of git bisect but in practice it’s never been worth it for me.
Your second bullet point suggests to me bisect isn’t useful to you in part because you’re not taking good enough care of your history and have broken points in it.
I bisect things several times a month, and it routinely saves me hours when I do. By not keeping history clean as others have talked about, you ensure bisect is useless even for those developers who do find it useful. :(
Right: meaningful commit messages are important but a passing build for each commit is essential. A VCS has pretty limited value without that practice.
It does help for your commits to be at clean points, but it isn’t strictly necessary - you don’t need to run your entire test suite. I usually either bisect with a single spec or isolate the issue into a script that I can run under bisect. And as mentioned elsewhere, you can just bisect manually.
You can run bisect in an entirely manual mode where git checks out the revision for you to tinker with and before marking the commit as good or bad.
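To make the automated mode concrete, here is a self-contained toy run of `git bisect run`. Everything here - the repo, the file, the planted bug - is fabricated for illustration:

```shell
#!/bin/sh
# Plant a regression in a throwaway repo, then let bisect binary-search for it.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo

printf 'echo ok\n' > app.sh
git add app.sh; git commit -qm 'v1: works'
git tag known-good
printf '# tidy\necho ok\n' > app.sh;            git commit -qam 'v2: still fine'
printf '# tidy\necho oops\n' > app.sh;          git commit -qam 'v3: introduces the bug'
printf '# tidy\n# more\necho oops\n' > app.sh;  git commit -qam 'v4: unrelated change'

# The check exits non-zero when the bug is present; bisect uses that signal
# to binary-search between the known-good tag and HEAD.
git bisect start HEAD known-good >/dev/null 2>&1
git bisect run sh -c 'sh app.sh | grep -q ok' > bisect.log 2>&1
git bisect reset >/dev/null 2>&1
grep 'is the first bad commit' bisect.log
```

The same setup works manually: replace `git bisect run …` with repeated `git bisect good` / `git bisect bad` calls after inspecting each checked-out revision yourself.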
There are places where it’s not so great, and there are places where it’s a life-saving tool. I work (okay, peripherally… mostly I watch people work) on the Perl 5 core. Language runtime, right? And compatibility is taken pretty seriously. We try not to break anyone’s running code unless we have a compelling reason for it and preferably they’ve been given two years' warning. Even if that code was written in 1994. And broken stuff is supposed to stay on branches, not go into master (which is actually named “blead”, but that’s another story. I think we might have been the ones who convinced github to allow a different default branch because having it fail to find “master” was kind of embarrassing).
So we have a pretty ideal situation, and it’s not surprising that there’s a good amount of tooling built up around it. If you see that some third-party module has started failing its test suite with the latest release, there’s a script that will build perl, install a given module and all of its dependencies, run all of their tests along the way, find a stable release where all of that did work, then bisect between there and HEAD to determine exactly which merge made it start failing. If you have a snippet of code and you want to see where it changed behavior, use bisect.pl -e. If you have a testcase that causes weird memory corruption, use bisect.pl --valgrind and it will tell you the first commit where perl, run with your sample code, causes valgrind to complain bitterly. I won’t say it works every time, but… maybe ¾ of the time? Enough to be very worth it.
No it doesn’t. Bisect doesn’t care what the commit message is. It does care that your commit works, but I don’t think the article is actually advocating checking in broken code (despite the title) - rather it’s advocating committing without regard to commit messages.
Just `git merge --squash` before you push, and all your work in progress diffs get smushed together into one final diff.
This, on the other hand, fucks bisect.
Do you know how bisect works? You are binary searching through your commit history, usually to find the exact commit that introduced a bug. The article advocates using a bunch of work in progress commits—very few of which will actually work because they’re work in progress—and then landing them all on the master branch. How exactly are you supposed to binary search through a ton of broken WIP commits to find a bug? 90% of your commits “have bugs” because they never worked to begin with, otherwise they wouldn’t be work in progress!
Squashing WIP commits when you land makes sure every commit on master is an atomic operation changing the code from one working state to another. Then when you bisect, you can actually find a test failure or other issue. Without squashing you’ll end up with a compilation failure or something from some jack off’s WIP commit. At least if you follow the author’s advice, that commit will say “fuck” or something equally useless, and whoever is bisecting can know to fire you and hire someone who knows what version control does.
Do you know how bisect works?
Does condescension help you feel better about yourself?
The article advocates using a bunch of work in progress commits—very few of which will actually work because they’re work in progress—and then landing them all on the master branch. How exactly are you supposed to binary search through a ton of broken WIP commits to find a bug? 90% of your commits “have bugs” because they never worked to begin with, otherwise they wouldn’t be work in progress!
I don’t read it that way. The article mainly advocates not worrying about commit messages, and also being willing to commit “experiments” that don’t pan out, particularly in the context of frontend design changes. That’s not the same as “not working” in the sense of e.g. not compiling.
It’s important that most commits be “working enough” that they won’t interfere with tracking down an orthogonal issue (which is what bisect is mostly for). In a compiled language that probably means they need to compile to a certain extent (perhaps with some workflow adjustments e.g. building with -fdefer-type-errors in your bisect script), but it doesn’t mean every test has to pass (you’ll presumably have a specific test in your bisect script, there’s no value in running all the tests every time).
Squashing WIP commits when you land makes sure every commit on master is an atomic operation changing the code from one working state to another.
Sure, but it also makes those changes much bigger. If your bisect ends up pointing to a 100-line diff then that’s not very helpful because you’ve still got to manually hunt through those changes to find the one that made the actual difference - at that point you’re not getting much benefit from having version control at all.
I don’t want to learn a DSL for my commit messages.
I think commit message formatting becomes more relevant when you have hundreds of people working on a small set of files in the same repository. We did this with the administrative portal for Cisco Spark and found that having a standard format for people to follow means less confusion and a better summary of each change that comes through.
I agree with @zg, although I think it becomes relevant much earlier than hundreds of people. Being able to parse and read the history of a repository is more than a nice-to-have when working with other developers. When you have even 10 developers all doing their own thing - some putting long messages, some putting issue numbers in the subject, some where the issue number is the subject with no body - it makes figuring out what happened, and when, very hard. While I might not like this particular format, standards like this aren’t about preference; they are about consistency and consideration for the other developers on the team.
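For reference, the Karma/Angular-style convention under discussion is `type(scope): subject`, then a blank line, a body, and an optional footer. The scope and body below are invented examples:

```
fix(login): reject expired tokens on refresh

Refreshing with an expired token previously returned a new session
instead of an error, letting stale clients stay signed in.

Closes #<issue-number>
```

Because the `type` is machine-readable (fix, feat, docs, chore, …), tooling can group commits by type when generating a changelog.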
My knee-jerk reaction to this article was, “Oh, I hate this.” Thank you for putting to words what my intuition was trying to tell me.
I’ve contributed to Karma several times, and it wasn’t really a lot of work. The project setup installs git hooks that check the commit message. It makes a lot of sense for maintainers, as it makes it possible to generate a changelog.
The recommendations are pretty much what I would do anyway, except for the type/scope prefix.
The post is now published and available at http://www.defstartup.org/2017/01/18/why-rethinkdb-failed.html
More often than not when exploring a new area/problem I realize that my current direction won’t produce the results I hoped for. The author suggests that I continue to hack away at it to produce a tangible result, so that I have something to show people when they ask how I’ve spent that duration of time. This is a great way to pad your resume, but you aren’t doing useful work, and you’re probably wasting your time.
It’s like committing to every hand in poker. Sometimes you gotta know when to fold your cards.
Indeed. However, I don’t believe that is what the author was saying. Although I could just be reading my own biases into the post.
To continue your poker metaphor: you cannot play every hand, but you also cannot fold every hand. In both cases you walk away a loser. It also matters when and why you fold. Did you fold early or late? Were you chasing the straight? Again? Where I feel the post is lacking is that the point is not finishing, the point is choosing wisely. As almost every productivity book I have read puts it, the point is not “getting things done” but “getting the right things done.” I agree that you must abandon some projects for very valid reasons, but when I am honest with myself, that is not why I abandon most of my side projects. So I tend to sympathize with the OP here. What I feel is insinuated but missing from this article is choosing what you work on, so that you can and do finish the right things and get the other things off your plate at the right time.
I totally agree with your poker analogy, but there’s another way to look at this too:
Instead of either A) stopping or B) hacking away, you can simply change your specification of the result. That is, choose to produce something that captures at least some of the value you were attempting to gather.
For example, let’s say you’re writing a new program and have decided that you’ve gone about it all wrong. Stop “writing the program” and start “hacking away” at a README that explains the prototype, detailing what you learned. You’ll find that this process is far more valuable to you on both internal (learning) and external (perception) fronts. Writing things down in prose helps crystallize your thoughts, and publishing those writings makes people appreciate how you spend your time.
By the way, this applies to poker too: When you fold a hand, you should be making a bunch of mental notes.
Exactly! At work, I tell people I manage that I’m happy with either working code (with tests) or a document explaining why something can’t (or shouldn’t) be done.