Thank you for the experiment and the thoughtful retrospective. An interesting side effect of reading this is increased empathy toward those I view as trolls. I am looking back at times people have commented on my work without necessarily dismissing them due to their tone. I hope my troll detector becomes more finely tuned and that I am not throwing the baby out with the bathwater.
The book The Haskell School of Expression is similar to SICP in that it has exercises and it is very easy to approach. It works toward building a music notation system in Haskell.
I just found what appears to be a rewrite of the book, available online: http://haskell.cs.yale.edu/wp-content/uploads/2015/03/HSoM.pdf
This book is my worlds converging; it looks quite interesting.
While you’re in a Scheming frame of mind, I really like The Little Schemer and The Seasoned Schemer. They have an informal presentation style but before you know it you’ve internalized quite a lot.
In a totally different vein, I’ve been reading Type Theory and Formal Proof and I think this is a remarkable text because it makes the subject so accessible and sprinkles in interesting asides about the development of the theory and implementation of type theory along the way.
Thank you, I was scheming to get into those!
A much, much better list is here:
Perhaps for free programming books, but this list includes much more than free books and much more than programming. But then again, I am a sucker for a good book list.
That one is sadly focused only on programming. :(
I can’t say enough good things about this talk, it was one of the highlights of the conference for me.
Hopefully it encourages more people to give ATS a try.
Most of the developers at $WORK are remote, and we’re scattered all across the globe. It’s great! We keep in touch via various chat channels, and keep track of work via Jira issues. Every now and then we get the team together somewhere for both socialising and same-room working. The only real downside I can see is that I’m going to be miserable if I have to go back to a commute to another cubicle farm some day.
I can’t recommend it enough.
The only real downside I can see is that I’m going to be miserable if I have to go back to a commute to another a cubicle farm some day.
I really hope that 100 years from now we’ll look back on the idea of spending an hour driving through heavy traffic each way every day with as much disgust and horror as we view drinking from lead pipes in ancient Rome. It’s just so toxic to your psyche.
How do you manage hours of work when you do remote?
Not the OP, but I do work on an all-remote team.
We just work 9-5 local time. Sometimes timezones and DST conspire to offset a few of us by an hour or two, but it’s not a big deal. We always have a solid ~5 hours of overlap each day anyway.
I don’t, really. I tend to be “at work” from about 7:30am to 5-7pm ish, but I have a few breaks to work out, run errands, or to just go outside and walk around the block or lie in the hammock for 15 minutes to clear my head when I need to. When I’m blocked on a CI build or getting feedback / direction on an issue, I can go sit outside away from the screens. When you don’t have the pressure to look busy that you get in a cube farm, what might normally seem like a long day doesn’t wreck you like it otherwise might. Ditto being able to wear clothes to work that aren’t a constant physical discomfort :)
Also not the OP, but on a fully remote team.
We don’t really manage hours and trust that people are doing the work. Sometimes people will work non-traditional hours but we try to schedule our meetings during overlap hours. I personally am an 8-5er, but others aren’t.
This may seem like a trivial gripe, but it’s been a real issue for me when writing Perl 6: I can’t find an editor whose syntax highlighting can keep up with Perl6. I’m a Vim guy, and Vim’s Perl6 syntax highlighting is s l o w, causing the editor to lag, making using the language a chore in that editor.
So um, any suggestions? Perl6 is crazy cool, but I just can’t find a means to write it without getting frustrated at laggy/incorrect syntax highlighting.
Howdy, also a vim user. I moved to neovim a while back, and it allows async syntax highlighting through plugins like chromatica. Since leveraging this along with ale, things have become significantly snappier, especially on larger files.
Currently Atom seems to have the best Perl 6 plugins.
I struggle to see how this one person’s anecdotal experience can suddenly be applied in the macro. And with regard to the author’s opinions on code reviews:
You have your own issues to work on, and my interruption already did some damage by de-focusing you.
What is the rush if you have been working on the code for a week? Is there not some means of putting reviews into a queue and letting people get to them when it would not interrupt their work? We use a Slack channel to post our code reviews to.
Reading code is hard. Nobody really does it.
Yes, it is; that is why it takes time, and generally some back and forth, to review code well. Just because it is hard does not mean that it is not worth doing.
But wait, speaking of mandatory code reviews, did you think about this? So you don’t trust your own developers, and require attention of at least 3 people for a commit to get in.
This isn’t a matter of trust. I want my code to be reviewed because a good review can help me see the problem in a new way and sometimes find a better solution than I initially came up with.
I absolutely love code reviews and don’t think I could work in a place that didn’t care enough to help its developers be the best they can be and the product be the best it can be. This post completely ignores the value of collaboration on solving problems.
As a consumer of this information, I was sad to see it removed. However, I have also worked with and for the deaf at various times over the last 20 years. I will pass along something that happened early on. My deaf coworker was sick and trying to schedule a doctor’s appointment. The soonest they would see her for a sick visit was 3 weeks out. She was frustrated and asked me to call and try to book an appointment with the same symptoms. I did and they were willing to see me the same day.
The lawsuit might seem petty, but these are people who see 20,000 “world-class” university lectures made available for everyone but them, when the ADA says those lectures should be accessible. And rightly or not, the experience I shared was not uncommon. I can sympathize somewhat with the school, as it is not a small expense, but at the same time I feel for my friends who are not able to share in the wealth of information that I too often take for granted.
I don’t have the full story of this particular case, so it is possible I am missing something. However, I am quite frankly surprised that Berkeley did not try more creative solutions, like crowd-sourcing captions or transcripts, as opposed to letting it go to court, paying the lawyers, and then removing everything.
This fucks bisect, defeating one of the biggest reasons version control provides value.
Furthermore, there are tools to easily take both approaches simultaneously. Just git merge --squash before you push, and all your work-in-progress diffs get smushed together into one final diff. And, for example, Phabricator even pulls down the revision (pull request equivalent) description, list of reviewers, tasks, etc., and uses that to create a squash commit of your current branch when you run arc land.
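To make the squash-before-push step concrete, here is a sketch in a throwaway repo (the branch name, file name, and commit messages are all made up for illustration):

```shell
#!/bin/sh
# Sketch of the squash-before-push workflow in a disposable repo.
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo
base=$(git symbolic-ref --short HEAD)   # "master" or "main", per git config

echo base > app.txt
git add app.txt && git commit -qm "initial"

# Three messy WIP commits on a feature branch.
git checkout -qb wip
for i in 1 2 3; do
  echo "step $i" >> app.txt
  git add app.txt && git commit -qm "wip $i"
done

# Squash them into one staged diff, then make a single tidy commit.
git checkout -q "$base"
git merge --squash wip >/dev/null
git commit -qm "feature: steps 1-3 as one commit"

git rev-list --count HEAD   # 2: "initial" plus the squashed commit
```

Note that `git merge --squash` only stages the combined changes; the final commit (and its message) is still yours to write.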
I’m surprised to hear so many people mention bisect. I’ve tried on a number of occasions to use git bisect and svn bisect before that, and I don’t think it actually helped me even once. Usually I run into the following problems:
I love the idea of git bisect but in practice it’s never been worth it for me.
Your second bullet point suggests to me bisect isn’t useful to you in part because you’re not taking good enough care of your history and have broken points in it.
I bisect things several times a month, and it routinely saves me hours when I do. By not keeping history clean as others have talked about, you ensure bisect is useless even for those developers who do find it useful. :(
Right: meaningful commit messages are important but a passing build for each commit is essential. A VCS has pretty limited value without that practice.
It does help for your commits to be at clean points, but it isn’t really necessary - you don’t need to run your entire test suite. I usually will either bisect with a single spec or isolate the issue into a script that I can run with bisect. And as mentioned in other places, you can just bisect manually.
You can run bisect in an entirely manual mode, where git checks out each revision for you to tinker with before you mark the commit as good or bad.
There are places where it’s not so great, and there are places where it’s a life-saving tool. I work (okay, peripherally… mostly I watch people work) on the Perl 5 core. Language runtime, right? And compatibility is taken pretty seriously. We try not to break anyone’s running code unless we have a compelling reason for it and preferably they’ve been given two years' warning. Even if that code was written in 1994. And broken stuff is supposed to stay on branches, not go into master (which is actually named “blead”, but that’s another story. I think we might have been the ones who convinced github to allow a different default branch because having it fail to find “master” was kind of embarrassing).
So we have a pretty ideal situation, and it’s not surprising that there’s a good amount of tooling built up around it. If you see that some third-party module has started failing its test suite with the latest release, there’s a script that will build perl, install a given module and all of its dependencies, run all of their tests along the way, find a stable release where all of that did work, then bisect between there and HEAD to determine exactly what merge made it start failing. If you have a snippet of code and you want to see where it changed behavior, use bisect.pl -e. If you have a testcase that causes weird memory corruption, use bisect.pl --valgrind and it will tell you the first commit where perl, run with your sample code, causes valgrind to complain bitterly. I won’t say it works every time, but… maybe ¾ of the time? Enough to be very worth it.
No it doesn’t. Bisect doesn’t care what the commit message is. It does care that your commit works, but I don’t think the article is actually advocating checking in broken code (despite the title) - rather it’s advocating committing without regard to commit messages.
Just git merge --squash before you push, and all your work in progress diffs get smushed together into one final diff.
This, on the other hand, fucks bisect.
Do you know how bisect works? You are binary searching through your commit history, usually to find the exact commit that introduced a bug. The article advocates using a bunch of work in progress commits—very few of which will actually work because they’re work in progress—and then landing them all on the master branch. How exactly are you supposed to binary search through a ton of broken WIP commits to find a bug? 90% of your commits “have bugs” because they never worked to begin with, otherwise they wouldn’t be work in progress!
Squashing WIP commits when you land makes sure every commit on master is an atomic operation changing the code from one working state to another. Then when you bisect, you can actually find a test failure or other issue. Without squashing you’ll end up with a compilation failure or something from some jack off’s WIP commit. At least if you follow the author’s advice, that commit will say “fuck” or something equally useless, and whoever is bisecting can know to fire you and hire someone who knows what version control does.
Do you know how bisect works?
Does condescension help you feel better about yourself?
The article advocates using a bunch of work in progress commits—very few of which will actually work because they’re work in progress—and then landing them all on the master branch. How exactly are you supposed to binary search through a ton of broken WIP commits to find a bug? 90% of your commits “have bugs” because they never worked to begin with, otherwise they wouldn’t be work in progress!
I don’t read it that way. The article mainly advocates not worrying about commit messages, and also being willing to commit “experiments” that don’t pan out, particularly in the context of frontend design changes. That’s not the same as “not working” in the sense of e.g. not compiling.
It’s important that most commits be “working enough” that they won’t interfere with tracking down an orthogonal issue (which is what bisect is mostly for). In a compiled language that probably means they need to compile to a certain extent (perhaps with some workflow adjustments e.g. building with -fdefer-type-errors in your bisect script), but it doesn’t mean every test has to pass (you’ll presumably have a specific test in your bisect script, there’s no value in running all the tests every time).
Squashing WIP commits when you land makes sure every commit on master is an atomic operation changing the code from one working state to another.
Sure, but it also makes those changes much bigger. If your bisect ends up pointing to a 100-line diff then that’s not very helpful because you’ve still got to manually hunt through those changes to find the one that made the actual difference - at that point you’re not getting much benefit from having version control at all.
I don’t want to learn a DSL for my commit messages.
I think the commit message formatting becomes more relevant when you have hundreds of people working on a small set of files in the same repository. We did this with the administrative portal for Cisco Spark and found that having a standard format for people to follow means that there is less confusion and more summarization for each change that comes through.
I agree with @zg, although I think it becomes relevant much earlier than having hundreds of people. Being able to parse and read the history of a repository is more than a nice-to-have when working with other developers. When you have even 10 developers all doing their own thing and writing long messages - some putting issues in the subject, some where the issue number is the subject with no body - figuring out what happened and when it happened becomes very, very hard. While I might not like this particular format, standards like this aren’t about preference; they are about consistency and consideration for the other developers on the team.
My knee-jerk reaction to this article was, “Oh, I hate this.” Thank you for putting to words what my intuition was trying to tell me.
I’ve contributed to Karma several times, and it wasn’t really a lot of work. The project setup installs git hooks to check the commit message. It makes a lot of sense for maintainers as it is possible to generate a changelog.
The recommendations are pretty much what I would do anyway, except for the type/scope prefix.
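For the curious, the kind of check such a hook runs is simple. Here is a hypothetical sketch (the type list and regex are my guess at the Karma-style convention, not Karma’s actual hook):

```shell
#!/bin/sh
# Hypothetical sketch of a commit-message header check, Karma-convention style:
# the first line should look like "type(scope): subject".
check_header() {
  echo "$1" | grep -Eq '^(feat|fix|docs|style|refactor|perf|test|chore)(\([a-z0-9-]+\))?: .+'
}

# In a real commit-msg hook, $1 is the path to the message file:
#   check_header "$(head -n 1 "$1")" || { echo "bad commit header" >&2; exit 1; }

check_header "fix(router): handle empty hash fragment" && echo ok
check_header "did some stuff" || echo rejected
```

A machine-checkable header like this is exactly what makes changelog generation possible: tools can group commits by type and scope instead of parsing freeform prose.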
The post is now published and available at http://www.defstartup.org/2017/01/18/why-rethinkdb-failed.html
More often than not when exploring a new area/problem I realize that my current direction won’t produce the results I hoped for. The author suggests that I continue to hack away at it to produce a tangible result, so that I have something to show people when they ask how I’ve spent that duration of time. This is a great way to pad your resume, but you aren’t doing useful work, and you’re probably wasting your time.
It’s like committing to every hand in poker. Sometimes you gotta know when to fold your cards.
Indeed. However, I don’t believe that is what the author was saying. Although I could just be reading my own biases into the post.
To continue your poker metaphor: you cannot play every hand, but you also cannot fold every hand. In both cases you walk away a loser. It also matters when and why you fold. Did you fold early or did you fold late? Were you chasing the straight? Again? Where I feel the post is lacking is that the point is not finishing; the point is choosing wisely. As is said in almost every productivity book I have read, the point is not “getting things done” but “getting the right things done”. I agree that you must abandon some projects for very valid reasons, but when I am honest with myself, that is not why I abandon most of my side projects. So I tend to sympathize with the OP here. What I feel is insinuated but missing from this article is choosing what you work on, so that you can and do finish the right things and get the other things off your plate at the right time.
I totally agree with your poker analogy, but there’s another way to look at this too:
Instead of either A) stopping or B) hacking away, you can simply change your specification of the result. That is, choose to produce something that captures at least some of the value you were attempting to gather.
For example, let’s say you’re writing a new program and decide that you’ve gone about it all wrong. Stop “writing the program” and start “hacking away” at a README that explains the prototype, detailing what you learned. You’ll find that this process is far more valuable to you on both internal (learning) and external (perception) fronts. Writing things down in prose helps crystallize your thoughts, and publishing those writings makes people appreciate how you spend your time.
By the way, this applies to poker too: When you fold a hand, you should be making a bunch of mental notes.
Exactly! At work, I tell people I manage that I’m happy with either working code (with tests) or a document explaining why something can’t (or shouldn’t) be done.
Half a work day MTTR?!
I have a really strong reaction to that, and it’s not positive. Do not put yourself on my calendar without asking me, period. If you have a deadline for when you need something done by, then communicate with me and ensure that we can meet that goal.
Expecting other people to drop what they need to get done in a ~4 hour window and do a task for you is just asking for trouble, especially if you also expect your coworkers to focus and accomplish complicated tasks.
I don’t think that is completely unrealistic given small reviews and the right team. At my current company, we have a pull request slack channel that we post to. Most PRs are reviewed in less than an hour. We are a smallish team with < 20 developers on our stack. I can see how for smaller teams that expectation could be more challenging. But we tend to review other people’s code quickly because we like it when our code is reviewed quickly.
That’s a very reasonable reaction in an environment where individuals own their work/outcomes.
It’s a quite reasonable request, however, in an environment where a close-knit team choose to share ownership of work & outcomes. I’ve only worked in one such environment in my career (so I know they can exist but they don’t seem common).
“Peak” has been a book on my to-read list since I listened to the author speak on “The Art of Manliness” podcast. I am also intrigued by the concept of deliberate practice. While I agree that implementing something is a wonderful way to really learn something, it is easy for me to get lost in the weeds and it feels somewhat inefficient. One thing I have attempted to do since listening to the podcast is more focused practice. So I have started to apply the kata methodology to my practice. The flow works something like this:
This is a bit repetitious, but I think that is the idea proposed in “Peak”. Since doing this for the last 4 months, I have noticed some improvement: I more readily recognize places to apply these patterns and variations of them. But when I set out on this, I realized it would likely be an endeavor of several years before it bore real fruit.
As some background - I have been developing software for about 15 years now professionally and am a bit of a productivity nerd.
We have run into this with one of our payment providers when setting up ACH payments. It is an effort to combat fraud; they also allow use of “microdeposits” to verify your account. While as a user it is annoying and frustrating, I also feel their pain. People committing fraud continue to find ways around countermeasures, so this and measures like it are almost inevitable.
I’ve set up ACH transactions, and microdeposits seem to be pretty common, but most sane institutions just ask you to log on to the other account yourself and type in the deposit amounts into their form.
For tech-related books I usually monitor my “trusted” publishers for interesting titles:
If I want to dig deeper into a topic, I use a trusted book’s bibliography as a guide.
I am on GoodReads and just by entering the books I read, I will sometimes get good recommendations. But this is usually a better source for history or biographies, not for tech-related books. Additionally, I like to look for reading lists - so usually from December through January, I will start googling for reading lists and many people will post their best reads of the year and I can generally find some interesting titles there. Bill Gates usually posts his book list and I probably will read half of his recommendations every year.
I am a new lobster, but I suspect there are some you could find here by searching. Or by posting a “What are your favorite books about x” question.
I do something similar wrt. “trusted” publishers. Interesting that you didn’t include O'Reilly on your list - for me, they’ve always been a favourite (and their DRM-free e-books and e-book upgrade options are excellent). Over the years they’ve widened their focus, but I still think they publish a lot of excellent content.
That was an oversight; O'Reilly is still a good source, though it has been a while since I have purchased anything from them.
For me, the most important innovation on mac laptops has been MagSafe, so I am glad it got a nod here.
No other company, to my knowledge, has replicated this (maybe because of patents?) which is sad to me, as it’s such an amazing feature. My guess, though, is that Apple believes the battery will last a full day, so you simply don’t need to plug it in until it is sitting safely on your desk, which is optimistic at best.
I’m quite surprised that they didn’t include an equivalent feature with these new models. It’s something that third parties can address relatively easily (perhaps even with a breakaway Type C to Type C adapter rather than a full charger), but I’d be very wary of any non-major-brand chargers. If Benson Leung’s Amazon reviews are anything to go by, Type C products are a minefield.
Griffin has BreakSafe, which is a breakaway Type C to Type C cable.
I’d expect Apple to make this, and then charge another $40 for it. That’s what Steve would have done.
Very optimistic, especially from Apple.
The other day my iPhone (in airplane mode, running only strava and the music app) with a full charge lasted ~40 minutes while I was out for a run.
Then it’s faulty and you should file a warranty claim.
I agree, magsafe was one of the killer features that brought me to Mac 9 years ago. The other was just the ability to close my lid and trust that the computer would turn off and would come back on when I opened the lid again. I actually broke two previous computers due to the power cord port breaking off the main board. I am still trying to figure out how I feel about this, because this is a tragic loss to me.
My only thought as a potential tradeoff is the fact that you can plug your charger into one of the 4 ports, so you aren’t tied to a certain spot for your power cord. As I am typing this, I could see how this could be a handy feature.
Apple believes the battery will last a full day
Such a ridiculous and patronizing assumption for them to make. I like my screen at max brightness and run among other things 2 JVMs while developing. If the battery lasted 6 hours of my development day I’d be thrilled.
What is OTP and why is it, and Elixir, great? I see lots of posts on here tagged for Elixir and it makes me think I should look into it.
OTP is a bunch of middleware that drives Erlang/Elixir/LFE/BEAM langs: bundles of libraries for common tasks that are considered integral to [erlang|elixir|et al]. Elixir is a fancy new language built atop the BEAM VM (like Erlang) with Erlang interop, macros, and other neat things. If Lisp Flavored Erlang didn’t exist, I’d say Elixir is to Erlang what Clojure is to Java. (Not that I have any real experience with these things. Someone who actually makes use of Elixir and/or OTP care to correct/clarify/elaborate?)