1. 37

    At my former employer, for a time I was in charge of upgrading our self-managed Kubernetes cluster in-place to new versions and found this to eventually be an insurmountable task for a single person to handle without causing significant downtime.

    We can argue about whether upgrading in-place was a good idea (spoiler: it’s not), but it’s what we did at the time for financial reasons (read: we were cheap) and because the nodes we ran on (r4.2xl, if I remember correctly) were often not available in sufficient quantity to stand up a whole new cluster and migrate over to it.

    My memory of the steps to maybe successfully upgrade your cluster in-place, all sussed out through repeated dramatic failure:

    1. Never upgrade more than a single point release at a time; otherwise there are too many moving pieces to handle
    2. Read the changelog comprehensively, and have someone else read it as well to make sure you didn’t miss anything important. Also read the issue tracker, and do some searching to see if anyone has had significant problems.
    3. Determine how much, if any, of the change log applies to your cluster
    4. If there are breaking changes, have a plan for how to handle the transition
    5. Replace a single master node and let it “bake” as part of the cluster for a sufficient amount of time, not less than a day. This gives you time to watch the logs and determine whether there is an undocumented bug in the release that would break the cluster.
    6. Upgrade the rest of the master nodes and monitor, similar to above
    7. Make sure the above process(es) didn’t cause etcd to break
    8. Add a single new node to the cluster, monitoring to make sure it takes load correctly and doesn’t encounter an undocumented breaking change or bug. Bake for some day(s).
    9. Drain and replace the remaining nodes, one at a time, over a period of days, allowing the cluster to handle the changes in load over this time. Hope that all the services you have running (DNS, deployments, etc.) can gracefully handle these node changes. Also hope that you don’t end up in a situation where the services on 9 of 10 nodes are broken, but the one remaining original node is silently picking up the slack, so nothing fails until the last node gets replaced, at which point everything fails at once, catastrophically.
    10. Watch all your monitoring like a hawk and hope that you don’t encounter any more undocumented breaking changes, deprecations, removals, and/or service disruptions, and/or intermittent failures caused by the interaction of the enormous number of moving parts in any cluster.
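    Step 9 above can be sketched as a dry-run script. This is only an illustration: the node names are placeholders, the flags are the 1.x-era ones, and the `kubectl` stub just prints what would run (delete it to execute against a real cluster):

    ```shell
    # Dry-run stub: print each command instead of executing it.
    # Remove this function to use the real kubectl.
    kubectl() { echo kubectl "$@"; }

    for node in node-1 node-2 node-3; do
      kubectl cordon "$node"            # stop new pods from scheduling here
      kubectl drain "$node" --ignore-daemonsets --delete-local-data
      kubectl delete node "$node"       # replacement joins via your provisioner/ASG
      # sleep 86400                     # let the replacement bake before the next node
    done
    ```

    The long bake between nodes is the point: it is what surfaces the “9/10 broken, 1 silently compensating” failure mode before the last original node is gone.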

    There were times that a single point release upgrade would take weeks, if not months, interspersed by us finding Kubernetes bugs that maybe one other person on the internet had encountered and that had no documented solution.

    After being chastised for “breaking production” so many times despite meticulous effort, I decided that being the “Kubernetes upgrader” wasn’t worth the trouble. After I left, it seems that nobody else was able to upgrade successfully either, and they gave up doing so entirely.

    This was in the 1.2-1.9 days, for reference, so things may be much better now, though I’d be very surprised.

    1. 33

      tl;dr: If you can’t afford 6+ full-time people to babysit k8s, you shouldn’t be using it.

      1. 13

        Or, at least, not running it on-prem.

        1. 6

          True, if you outsource the management of k8s, you can avoid the full-time team of babysitters, but that’s true of anything. But then you have the outsourcing headache(s), not to mention the cost (e.g. you still need someone responsible for the contract, and for interacting with the outsourced team).

          Outsourcing just gives you different (and, if you selected wisely, fewer) problems.

          1. 5

            True dat. But every solution to a given problem has trade-offs. Not using Kubernetes in favour of a different orchestration system will also have different problems. Not using orchestration for your containers at all will give you different problems (unless you’re still too small to need orchestration, in which case yes you should not be using k8s). Not using containers at all will give you different problems. ad infinitum :)

            1. 6

              Most companies are too small to really need orchestration.

              1. 2


        2. 2

          I keep having flashbacks to when virtualization was new and everyone was freaking out over Xen vs. KVM vs. VMware and how to run their own hypervisors. Now we just push the Amazon or Google button and let them deal with it. I’ll bet in 5 years we’ll laugh about trying to run our own k8s clusters in the same way.

          1. 8

            Yeah, this is the kind of non-value-added activity that just begs to be outsourced to specialists.

            I have a friend who works in a bakery. I learned the other day that they outsourced a crucial activity to a contractor: handling their cleaning cloths. Every day, a guy comes to pick up a couple of garbage bags full of dirty cleaning cloths, then drops off the same number of bags full of clean ones. This is crucial: one day the guy was late, and the bakery staff had trouble keeping the bakery clean. The owner lived upstairs and used his own washing machine as a backup, but it could not handle the load.

            But the thing is: while the bakery needs this service, it does not need it to differentiate itself. As long as the cloths are there, it can keep on running. If the guy stops delivering cloths, he can be trivially replaced with another provider, with minimal impact on the bakery. After all, people don’t buy bread because of how the dirty cloths are handled. They buy bread because the bread is good. The bakery should never outsource its bread making. But the cleaning of dirty cloths? Yes, absolutely.

            To get back to Kubernetes, and virtualization: what does anyone hope to gain by doing it themselves? Maybe regulation requires it. Maybe there is some special need. I am not saying it is never useful. But for many people, the answer is often: not much. Most customers will not care. They are here for their tasty bread, a.k.a. getting their problem solved.

            I would be tempted to go as far as saying that maybe you should outsource one level higher, and not even worry about Kubernetes at all: services like Heroku or Amazon Elastic Beanstalk handle the scaling and a lot of other concerns for you with a much simpler model. But at this point, you are tying yourself to a provider, and that comes with its own set of problems… I guess it depends.

            1. 2

              This is a really great analogy, thank you!

              1. 2

                It really depends on what the business is about: tangible objects or information. The baker’s cloths, given away to a third party, do not include all the personal information of those buying bread. Business-critical information, such as who bought bread, what type, and when, is not included in the cloths either. Leaking that would be bad in general, and potentially a disaster if the laundry company were also in the bread business.

                1. -7

                  gosh. so many words to say “outsource, but not your core competency”

                  1. 1

                    Nope. :) Despite my verbosity, we haven’t managed to communicate. The article says: do not use things you don’t need (k8s). If you don’t need it, there’s no outsourcing to do. Outsourcing has strategic disadvantages when it comes to your users’ data, entirely unrelated to whether running an infra is your core business or not. I would now add: avoid metaphors comparing tech and the tangible world, because you end up trivializing the discussion and missing the point.

            2. 3

              As a counterpoint to the DIY k8s pain: We’ve been using GKE with auto-upgrading nodes for a while now without seeing issues. Admittedly, we aren’t k8s “power users”, mainly just running a bunch of compute-with-ingress services. The main disruption is when API versions get deprecated and we have to upgrade our app configs.

              1. 2

                I had the same problems with OpenStack :P If it works, it’s kinda nice. If your actual job is not “keeping the infra for your infra running”, don’t do it.

              1. 13

                I have https://www.brother-usa.com/products/hl3170cdw, which seems to work quite nicely with generic printer drivers on my NixOS system, and has specific drivers for the other Linux systems I’ve used in the past (e.g. Arch).

                When researching this very question a couple of years ago, I found that Brother seems to make reasonably-priced and Linux-compatible printers, and so far I haven’t been disappointed on either front!

                1. 9

                  I’ll second the Brother recommendation. I purchased an HL-L2370DW a little over a year ago ($80 at Best Buy on Black Friday). After configuring it onto my wi-fi network, all the machines in my house detected it and offered it in their print dialog boxes right away without any further configuration. This includes one running Xubuntu 18.04. The CUPS drivers included with the distro were all I needed. “Laser”, automatic duplex, and wireless. It’s been working nicely ever since.

                  1. 3

                    I have an HL4150-CDN. It’s not very recent (2012) but still works without any issue. When it was new, there were some issues with Brother’s dialect of Postscript (BR-Script), leaving you with either a slow open source driver or a fast proprietary one. But it now works without any issue with the driver shipped by CUPS. Newer models are now driver-less and understand PDF, so this shouldn’t be an issue anymore.

                    1. 2

                      Adding my vote for Brother. I’ve had one for years and it’s been solid. Plus they have drivers for the major OSes including Linux.

                    1. 2

                      This is an interesting approach, but I feel like the implementation complexity and management of the system might outweigh the benefits in the long term.

                      In a monorepo, I can run the equivalent of make in the root and see what happens. With this system, I have to have something that integrates and knows about all my repos, knows how to issue pull requests, and knows how to bump versions and recompile/test to ensure that it works. On top of this, it’s a separate system that itself needs to be maintained, and maintain compatibility with all its consumers (i.e. other service repos).

                      Sure, someone has to set up the build system, but its maintenance becomes a necessary part of getting your software out the door; it’s not going to fall behind or become out-of-date easily.

                      Mind you, this system sounds great! I’m just not sure how feasible such an approach is without investing a very high amount of engineering resources into it.

                      1. 0

                        I don’t understand why all this complexity is necessary… If you want a ratio of 5/19, you use two gears, one with (a multiple of) 5 teeth and another with (the same multiple of) 19 teeth. It already says “Apple Engineer”, though.

                        1. 4

                            Lego gears aren’t available in arbitrary sizes; the largest gear I can find has 40 teeth, and there’s no 38-tooth gear.

                          The complexity is inherent to the constraints of the construction method.

                        1. 2

                          This is wonderful!

                          However, somehow the act of cutting into the bricks is unsettling to me: it just doesn’t feel right.

                          1. 5

                            So what happens if you have two projects, A and B, that both depend on a third project, C? If C is separate and versioned, you can peg A at 1.0.3 while B continues to 1.1.0, and so on, managing the dependency just as you would any other dependency. If you are working in a monorepo, it seems to me that there’d be no easy way to manage this case, meaning that one of a few things is going to happen:

                            1. Code will not be written to be shared. C never exists, so you end up with Ac and Bc.
                            2. C will ossify. You’ll end up stuck on 1.0.3, effectively, and the additions that A needs will become Ac. B will have no access to them, and so you’ll end up with “forked” extensions to C.
                            3. C will be changed as needed for A. This will mean that both A and B must be fully tested before the new versions of A and C can be deployed, or the risk for breaking B is much higher.
                            1. 7

                              At work, we (years ago) started out with this philosophy. Sadly, it didn’t pan out. In practice, the chore of upgrading many separate repositories when you made a change far outweighed the theoretical benefits of being able to run different versions. I say “theoretical” because I don’t know that we ever needed that in any serious way. Perhaps if you absolutely need the ability to run different versions of your packages, you would want to pay the human cost of churning through repos to update dependencies. In practice for us, it wasn’t worth it.

                              We’re in the process of migrating all our Scala code to a monorepo for this reason.

                              1. 4

                                I think the idea is that if there are any changes to C that imply changes to A and B, then those changes need to happen all at once. Preferably in a single commit. CI for all packages runs on every commit, to catch and prevent cross-package breakage.

                                1. 4

                                  That’s case 3, and it works fine in theory; however, what if project B has constraints that make deployment difficult, or if no resources are currently allocated to updating it with any changes that are necessary? “Just do it all at once” isn’t necessarily a viable strategy.

                                  1. 1

                                      It becomes the responsibility of the maintainer of A to make the fix in C and B if they need changes to C, it’s unmaintained, and their changes break B. That might mean working directly with the teams involved in the breakage, or just going in on their own and fixing it. Hopefully you have tests that will help mitigate regressions from this refactor.

                                    1. 4

                                        Ever worked in a web agency? It is quite common to have tens of old projects around with unknown future prospects. From experience you know some of them will see future work, but you don’t know which ones, and margins are not nearly big enough to invest time in possibly dead projects.

                                2. 1

                                  Depending on different project versions across the company creates a lot more trouble long term than time saved in the short term. I guarantee if some other team at my company complained they depended on an old version of my library I would tell them to quit their tomfoolery and develop like adults. New commits are a small hassle, but they are also bug fixes, security fixes, or something else important enough that I bothered to commit it!

                                  1. 4

                                      It doesn’t necessarily have to be long term; it just has to fit into the cadence of both primary projects. If B can file a ticket to “Upgrade C to 1.1.0”, then it can be prioritized and fit into their cadence. In situations where the company is working with real clients (i.e. not a social network or advertising giant), this means that client features can be finished, then the upgrade can be taken care of at its proper time. In a monorepo, if B doesn’t have time to upgrade C, but A does, then it simply will not happen if B has higher priorities. A will end up writing their changes into their own repo, and the shared “libraries” will atrophy and die.

                                    1. 4

                                        It depends on the company. I think this article makes an unstated assumption about what kind of work the company does. For example, in web agencies most projects are fairly short, and once they end, work on them may be limited to security updates. It almost never happens that a client wants to pay for an upgrade to a newer version of a framework until they want to make a more substantial change to a website.

                                        In an ideal world they would keep all the projects up to date with newer versions. In the real world that would be prohibitively expensive, as you never know which client will actually return with more work, and the number of projects runs into the tens, if not over a hundred.

                                        I also have doubts about whether adding a new feature to a library at the same time as to client code is a good idea, but that is just a minor quibble with the article, as there are doubtless occasions where you would like to make changes to multiple projects at the same time.

                                      1. 4

                                        As an adult myself, I often don’t want to take any random update to a dependency just because. Software doesn’t actually rot just because it’s old, and changes made to software often result in new bugs. Everything is a trade-off, and what makes sense to you doesn’t have to be the best course of action for everybody.

                                    1. 2

                                      I’m working in a large-ish Ruby on Rails 3 codebase at the moment, and finding where something is used (or comes from, for that matter) can sometimes take on the order of minutes, but is usually on the order of a dozen seconds, depending on how heavily-used that something is and whether it’s uniquely named or not.

                                      The only tools that can help me are ag (i.e. the_silver_searcher) and a working knowledge of the codebase. This gives me the things that at least contain the name of the variable/method, and hopefully they named it something that doesn’t collide with anything else too badly. From there it’s mostly an exercise in praying that nobody did anything weird (meta-programming, etc.) that will come back to bite you later.

                                      Ruby does absolutely nothing to help you figure out what’s going on. Once you write a method, it’s essentially forever “public”, and nothing can help you figure out where, or what for, it’s being used.

                                      As an aside, working in this codebase has forever soured me on using a dynamic language for anything that has even a faint chance of growing beyond a couple thousand lines. It’s a constant fight against what the language allows you to do, wielded by people who often lack the perspective to know whether they should do it. Refactoring is an enormous task because you never know what you’ll need to change. There’s just too much rope to hang yourself with.

                                      1. 5

                                        At work, we tried in several different places to use Hamster, all to little effect.

                                        The main problem is that basically everything in the Ruby ecosystem expects everything to be mutable, and you end up constantly swimming upstream against the mutable currents.

                                        We gave up and said “no more”, and have fallen back to freezing things judiciously where appropriate (mostly class-level constants and the like), and defaulting to not mutating input arguments to functions. Anything that mutates gets a ! at the end of its name, so at least you know where your bugs come from ;)
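                                        A minimal sketch of those conventions (the class, method, and constant names here are made up for illustration): a frozen class-level constant, a non-mutating default, and a ! suffix reserved for the mutating variant:

                                        ```ruby
                                        class TextCleaner
                                          # Frozen constant: any attempt to mutate it raises at the call site.
                                          STOP_WORDS = %w[a an the].freeze

                                          # Default: non-mutating. Returns a new array; the input is untouched.
                                          def clean(words)
                                            words.reject { |w| STOP_WORDS.include?(w) }
                                          end

                                          # Mutating variant, flagged with "!" so call sites are easy to audit.
                                          def clean!(words)
                                            words.reject! { |w| STOP_WORDS.include?(w) }
                                            words
                                          end
                                        end
                                        ```

                                        The payoff is exactly what the comment describes: when something changes out from under you, grepping for ! narrows down where the mutation could have happened.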

                                        1. 4

                                          I find it very difficult to care about filesystems. It’s about as exciting to me as printer drivers. I currently use ext4 because it was a default and I had no reason to try anything else. Can someone explain what appreciable difference a filesystem would make on my everyday usage of computers?

                                          1. 16

                                             I’m only really excited because Apple (might) get rid of the .DS_Store files Finder creates when viewing directories.

                                            1. 2

                                              I was hoping the new FS would be case sensitive by default, just because it’s what I’m used to from Linux. But it won’t be.

                                              1. 4


                                                APFS defaults to case sensitive currently, I’m not sure if that’s going to stay the default going forward but it seems like progress on that front.

                                                1. 5

                                                  It likely won’t stay the default.

                                                  Consider that there’s a lot of legacy software in the Apple ecosystem that isn’t very careful about case normalization because it doesn’t have to be. You can format a volume case-sensitive HFS+ today (and you could also do case-sensitive UFS in the past), but a ton of stuff is broken, including big-name apps like Steam. Apple has always stuck with case-insensitivity in the default install because doing anything else breaks too much.

                                                  1. 1

                                                    Is a case-sensitive filesystem considered a good thing?

                                                    In other words, I’m curious what benefits there are to be had in the ability to have Foo.txt and foo.txt side-by-side.

                                                    1. 6

                                                      For “normal” people, having a case-insensitive file system is nice, so if they fat-finger the caps-lock key they still get the file they want.

                                                      For programmers, having a case-sensitive file system is nice, so the file you create with a given name is always distinct from other files with logically different names.

                                                      Imagine writing some sort of “cache” files to disk that are named using some hash that produces a combination of upper and lower-case letters. On a case-insensitive file system, you’re eventually going to end up with collisions. That’s a bummer (and hours of debugging time lost) to have to worry about, especially when your program needs to work cross-platform.
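                                                      The collision is easy to see without touching a filesystem at all: case-fold two distinct hash-style names the way a case-insensitive filesystem does when comparing them (the names here are made-up hex strings):

                                                      ```shell
                                                      # Case-fold a name the way a case-insensitive filesystem compares it.
                                                      fold() { printf '%s' "$1" | tr '[:upper:]' '[:lower:]'; }

                                                      a="3fA9c1"; b="3Fa9C1"        # two distinct hash outputs
                                                      [ "$a" != "$b" ] && [ "$(fold "$a")" = "$(fold "$b")" ] \
                                                        && echo "distinct names, one file on a case-insensitive filesystem"
                                                      ```

                                                      Two cache entries with those names would silently overwrite each other on such a filesystem.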

                                                      1. 5

                                                        Normal people never type the names of existing files.

                                                        1. 4

                                                          What Windows does (as a compromise) is NTFS and the kernel are case sensitive, but the Win32 subsystem is not by default. Users get what they expect, and other subsystems can get the semantics they want. (Case sensitivity is toggleable for Win32 as well, but I wouldn’t recommend this.)

                                                2. 10


                                                  • Snapshots
                                                  • Self-healing files

                                                    Snapshots are particularly useful. Sort of like a git commit, you can always go back to that point, even if you delete files, etc. With HAMMER (and HAMMER2; DragonFly BSD only), ZFS, Btrfs, and FreeBSD’s FFS (maybe more), you get snapshotting.

                                                  1. 12

                                                    Hey @qbit, been a while!

                                                    I figure I’ll weigh in here too.

                                                    As already mentioned: snapshots. If you ever try SmartOS/SmartDataCenter (recently renamed to Triton) by Joyent, you might end up playing around with container snapshots (zones). It’s crazy that I can go into a zone, or a bunch of zones, completely destroy the filesystem, then roll back to a safe snapshot in a matter of seconds (data type/size would make “seconds” vary here, but it’s always fast).

                                                    I remember being absolutely blown away by the notion of VM “flavours” being available in a repo, just like packages; this was before Docker became widely known and adopted. The fact that I could go onto my SmartOS headnode, run imgadm avail | fgrep redis, grab that “image” in no time from Joyent’s remote repo, then deploy straight to a zone was just baffling. Why am I harping on about this? Because this framework revolves around ZFS snapshots bundled up with some metadata, compressed into a tarball. Pretty damn cool.

                                                    Which then leads me to some of the utilities ZFS has available, like zfs send and zfs receive: https://duckduckgo.com/?q=zfs+send+receive&ia=web

                                                    I won’t babble on about that, check out those search results.
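                                                    For the curious, a dry-run sketch of the send/receive flow (the dataset, snapshot, and host names are placeholders, and the stubs just print the commands; drop them to run for real):

                                                    ```shell
                                                    # Dry-run stubs: print each command instead of executing it.
                                                    zfs() { echo zfs "$@"; }
                                                    ssh() { cat >/dev/null; echo ssh "$@"; }  # swallow the piped stream, show the command

                                                    zfs snapshot tank/home@backup-1
                                                    # The first replication sends a full stream; later runs can use
                                                    # "zfs send -i <previous> <current>" to send only the delta.
                                                    zfs send tank/home@backup-1 | ssh backuphost zfs receive -u backup/home
                                                    ```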

                                                    There’s also the ability to enable file-sharing protocols, like NFS/SMB, at the filesystem level with some simple flags when making/modifying volumes. I use a SmartOS server at home with a few NFS shares set up directly when I made the volumes.

                                                    Another thing: on-the-fly expansion/shrinking of volumes. Really cool when you want to chuck some extra space at some zones, or take it back.

                                                    All of this is largely from my own experience of administering servers with ZFS; I’ve never used it on a desktop/laptop. However, were I to go down that path (if it was presented to me in a simple, solid manner), I’d be using snapshots and send/receive to make backups all the damn time.

                                                    ZFS is fucking great - I’ll end on that.

                                                    1. 1

                                                      From the perspective of an OSX user, would there be much of an improvement over the kind of snapshotting Time Machine does? I realize it’s not at the filesystem level, and not nearly as flexible if you’re managing big storage arrays and such, but for a desktop/laptop user it seems like a “good enough” solution.

                                                      1. 3

                                                        It might be that TM does a good enough job, but other things like the self-healing stuff can take it a step further. Say you have a “raid” volume (used quotes because zfs has its own naming) and a file gets corrupted on one of the raid mirrors, zfs will check the file’s checksum against other mirrors and replace the broken file with a known good copy. TM in this example would just put the corrupt file into your TM backup.

                                                        All that said, my main FS is OpenBSD’s FFS, which has none of these features, and I have never had issues :P

                                                    2. 4

                                                      I use ZFS everywhere I can, am really pleased with it, and don’t think I’d feel comfortable going back to something else. The two main things I get from ZFS: first, it ensures data is valid via checksums. I have been stung by hardware or software corrupting my data, and ZFS has protections against that. The second is snapshots. Snapshots are cheap in ZFS, so on my workstations I take snapshots at intervals of 5 minutes, 15 minutes, 1 hour, 1 day, 1 week, and 1 month, and retain them for various periods of time. Transferring snapshots around is easy, so I can back them up to other machines really painlessly.

                                                      With snapshots, you can do other really powerful things that you might not realize you want until you have them. The biggest one is boot environments. These let you snapshot your installation and switch between snapshots on boot. The use case: if you’re going to do a big upgrade, you can roll it back if it breaks. The power that something like ZFS gives you is that you can ensure the packages and kernel are always in sync. While existing OSes like Ubuntu might keep multiple kernel versions lying around, you don’t have any guarantees that the rest of the system still makes sense if you roll back. You do have those guarantees with boot environments.
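                                                      Boot environments are a thin veneer over those snapshots. A dry-run sketch using beadm (the boot-environment tool on FreeBSD and illumos; the environment name is a placeholder, and the stub just prints the commands):

                                                      ```shell
                                                      # Dry-run stub; remove to run the real beadm.
                                                      beadm() { echo beadm "$@"; }

                                                      beadm create pre-upgrade     # capture the whole OS state before upgrading
                                                      beadm list                   # see which environments exist
                                                      # If the upgrade goes badly, point the bootloader back at the old state:
                                                      beadm activate pre-upgrade   # takes effect on the next reboot
                                                      ```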

                                                      Then there are other nice things you can do; for example, if you have a lot of data and you want to experiment with it, you can clone it (cheaply), play with it, and destroy it, without harming the original data. If you are using FreeBSD (jails) or Solaris (zones), whole-system containers become much easier and more powerful with ZFS: creating new ones becomes fast and cheap, so you can make heavy use of them.

                                                      Then, if you’re admining any system, ZFS lets you do a lot of useful things, even delegate control of portions of the filesystem to users so they can do the work they want themselves. Running any serious storage box benefits from ZFS on basically every axis (performance, durability, operationally).

                                                      So, of course, it depends. For myself, ZFS has given me the ability to do things I didn’t realize I wanted to do, as well as increased safety for my data. On top of that, it benefits me as a person who admins some machines and as a regular-joe user. I used to rsync data to multiple USB drives as backup; now I can just transfer incremental snapshots around, which is much safer and significantly faster.

                                                    1. 1

                                                      The “fixup” option does the same thing as the “squash” option the author uses, but it does the second step of removing the useless commit message for you.
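                                                      A runnable sketch of that flow in a throwaway repo (the commit messages are made up; GIT_SEQUENCE_EDITOR=: accepts the generated todo list unedited, so the rebase runs non-interactively):

                                                      ```shell
                                                      repo=$(mktemp -d) && cd "$repo" && git init -q .
                                                      git config user.email demo@example.com && git config user.name demo

                                                      git commit -q --allow-empty -m "add feature"
                                                      git commit -q --allow-empty --fixup=HEAD   # message becomes "fixup! add feature"

                                                      # Autosquash reorders the todo list and folds the fixup commit in,
                                                      # keeping only the original message.
                                                      GIT_SEQUENCE_EDITOR=: git rebase -q -i --autosquash --root
                                                      git log --format=%s
                                                      ```

                                                      With the squash option you would be dropped into an editor to delete the throwaway message yourself; fixup skips that step.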

                                                      1. 27

                                                        Personally, I strive to write code in a way that makes the “what” obvious, then explain the “why” that can’t easily be encoded in the code itself.

                                                        In my experience this approach is quite maintainable since the “why” is usually relatively static (the comments don’t need to change often and so don’t fall out of date), and the “what” should be obvious from the code itself, which changes frequently but is kept up to date by being clear in the first place.

                                                        1. 10

                                                          In my experience this approach is quite maintainable

                                                          Which speaks to the real issue – comment rot. After my first 5 years as a contractor, I stopped reading “what” comments entirely – it was pointless, they were bundles of lies making code comprehension exponentially more complicated. After a few more years, I had editor toggles to set comments to background color to make them disappear.

                                                          // Adds 18 to the specialValue
                                                          specialValue += 5; // Adds 9 to the specialValue
                                                          1. 2

                                                            I kind of like that feature of heavily commented code, because it makes a nice canary to notice when people are writing patches without paying even bare-minimum attention to context. If someone’s written a patch, but not looked at enough context to even bother to update the comment immediately on the line above the code they changed, then that’s a big red flag to me, suggesting a lot more is likely to be wrong.

                                                          2. 6

                                                            In my experience, this is the best way to do it. I had to heavily document the project I worked on for my internship this past summer (because…internship, and I wouldn’t be there to explain what/why afterward). I basically had 3 levels of documentation:

                                                            • High-level why stored in some markdown files (pulled in by jsdoc) and also available on the team’s wiki
                                                            • Source code comments that explained why a specific piece of code did what it did. This was especially important in some hairy layout code that needed both what and why to make any sense of it. (For example: it can be pretty trivial to see that a piece of code aligns the x-coordinates of two elements. Why? Because this prevents a certain kind of nastiness in the layout)
                                                            • Low-level what as the actual source code. Your usual grab-bag of sensible naming, abstraction where helpful, and componentization where possible.
                                                          1. 5

                                                            Be sure to take note of a thorough comment: http://disq.us/8menfc

                                                            As interesting as the post itself, at least!

                                                            1. 5

Armstrong’s theory makes a lot of sense to me, and is consistent with other phenomena. For instance, your perception of what’s happening is often “delayed” a second or two behind real time. You can experimentally verify this by watching somebody chopping wood at a distance. The sound of the ax hitting the wood will be synchronized with the visual, even though that’s physically impossible. But the brain knows the two events belong together, and bends time to make it so. Up to a certain distance, that is. If you get far enough away, the gap is too much, and you’ll see a silent chop, then hear the chop.

                                                              Some other experiments resulting in apparent reversal of cause and effect are on the wikipedia page. http://en.wikipedia.org/wiki/Time_perception

                                                            1. 2
                                                              1. 5

A property I strive for in comments is that they should describe the why, not the what. If your code is self-documenting, the “what” should be relatively straightforward. The “why”, however, is much harder to divine weeks, months, and years after the initial code was written.

                                                                This is especially important as it relates to business logic, since many times there’s not a clear “why” to it. Sometimes, “Because FooCorp didn’t like the default backoff increment” is an arbitrary, but necessary, thing to remember when refactoring code at a later date.
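A tiny illustrative sketch of that idea (FooCorp is the thread’s hypothetical, and the numbers and ticket reference here are made up): the code states the “what”, while the comment preserves the arbitrary-but-necessary “why”:

```python
# Illustrative only: FooCorp, the 0.75 base, and the ticket number
# are invented for this example.

def backoff_seconds(attempt: int) -> float:
    # Why 0.75 and not the library default: FooCorp didn't like the
    # default backoff increment (see hypothetical ticket FOO-1234).
    # The "what" (exponential backoff) is clear from the code itself.
    return 0.75 * (2 ** attempt)
```

A later refactorer can safely change *how* the delay is computed, but the comment tells them which constant is load-bearing and why.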

                                                                1. 5

Are any of these actually particularly common? I am graduating soon – are these behaviours I’m likely to run into in the general Silicon Valley/software/tech company arena? (I’ve been lucky enough to work at great places for internships so far, and don’t think I’ve seen anything like these.)

                                                                  1. 9

                                                                    It depends entirely on the company. The only real way to try and determine whether these things occur at a given company is to ask the people that work there questions to that effect. Really, the best way is to work there, but this is a solid second-place choice ;)

                                                                    Questions to ask during the interview process to suss out details (by applicable question number):

                                                                    • 2) What does the process for committing code look like? How effective is it/how often is it followed?
                                                                    • 4) What’s your least-favorite project you’ve been on since you’ve started?
                                                                    • 10) What kind of hardware do you develop on?
                                                                    • 17) How well do you feel like you understand the company’s direction?

                                                                    When spread out over several different interviewers (who likely aren’t communicating their specific answers between each other), you can start to get a good idea of what the people that work there really think about their company.

                                                                    The interviewers probably won’t lie to you outright (would you?), but they may not tell the whole truth either. Asking a bunch of times gets you answers you can synthesize into a picture of the environment. If things sound too shady, or you can’t get a straight answer, passing on the job might be a good idea!

                                                                    Edit: Asking these kinds of questions is always a good idea, as well. Not only does it get you a good idea about the company as a whole, but it marks you as somebody who’s actually thought about the kinds of things they’d like to see in a company. From first-hand experience as an interviewer, this makes a great impression!

                                                                    1. 4

                                                                      There’s a little truth in all of them, but the ones I see most commonly are #1, #4, and #14.

                                                                      1. 3

As someone who is working at a startup, has worked at 2 previous startups, spent several months at an enterprise, had one tremendous personal failure, and has friends at startups of ~10-20 people who are vocal about their problems: I will say that you will encounter at least a few of these. A company that has all of these is a company you should leave immediately, without question.

                                                                        If you are planning to work in this wonderful profession (no sarcasm, this is a beautiful profession with the right people) for many years, be prepared to:

                                                                        • Work for someone who won’t treat you the same way a recruiting document may suggest.
                                                                        • Work for someone who has little to no understanding of software development.
                                                                        • Work for someone who has little to no understanding of how to build a team.
                                                                        • Be in a situation where the business is struggling and stress becomes a daily thing.

                                                                        Also remember as you work, you change and develop new perspectives. You will also inevitably make mistakes and see a new side to people.

                                                                        Ideally you never have to deal with any of these, and you can focus on building awesome things that explore your creativity as well as the needs of people you care about.

                                                                        1. 2

#3 gave me flashbacks! Both the ETA crap, and interrupting me to explain how I was solving the problem (for no very good reason other than the warm fuzzy illusion of managing the effort).

                                                                          1. 2

                                                                            At my last place, 1,5,6,9,10,19 & 21 are pretty spot on unfortunately.

                                                                            1. 2

                                                                              That is brutal. 5 is incredibly tough to deal with. 10 is hilarious if they tell you during the interview that you can pick the hardware you want.

                                                                              1. 1

5 is pretty common when an exec has been sold technology before a project starts, thinking it will make everything easier. Then 5 becomes a requirement for the project, because otherwise it’s a) money wasted and b) makes said exec look bad. The piece should be renamed “How Bureaucracies Function.”

                                                                            2. 2

Bad management is very widespread within SV/tech companies, because fast-moving companies inevitably end up with managers who have little experience or training.

                                                                              You can dodge some of these issues by working for larger / more-established companies, but even the most well-respected companies have bad managers thriving on some teams. Sometimes top-performing teams can be managed by complete assholes of some form or another (e.g. many of the stories about Steve Jobs).

                                                                              Learning to “manage up” with inexperienced managers, or change positions when the situation is untenable is a reality of business, and not unique to tech.

                                                                              Just to be clear, though – this is a managerial problem and not something you should suffer through! There are plenty of fantastic people out there to work for. As a tech worker you hold sway and can try to seek out places that don’t have these antipatterns, and work to reverse them when you see them.

                                                                            1. 4

                                                                              One of the best parts of the article is the “Operations Catalog” near the bottom, with concise pictorial representations of the discussed operations. http://martinfowler.com/articles/collection-pipeline/#op-catalog

                                                                              1. 2

                                                                                Wow, those are great!

                                                                                1. 2

I like how you can translate the symbols to types pretty straightforwardly (sort of):

-- Col representing a collection

filter :: (a -> Bool) -> Col a -> Col a

flatten :: Col (Col a) -> Col a

map :: (a -> b) -> Col a -> Col b

reduce :: (a -> a -> a) -> Col a -> a

groupBy is a little more problematic: either the result of the function has an ordering and you return the same collection with that ordering, or you lose the nested-collection structure and return a map as the article indicates, but I guess the symbol is somewhat understandable:

groupBy :: Ord b => (a -> b) -> Col a -> Col (Col a)

groupBy :: (a -> b) -> Col a -> Map b (Col a)

flatMap is the one where I don’t think the pictorial representation gives much grasp of what’s happening, as the “shape” of the function is what differentiates it from map

(I guess the pic should be something like f: o => [ x ... ] )

flatMap :: (a -> Col b) -> Col a -> Col b

The interesting part is when you replace Col with any arbitrary “thing that can have other things ‘inside’” (formally, any type constructor of kind * -> *, if I got it right :) )
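For comparison, here’s a rough sketch of the same pipeline operations over plain Python lists (reading Col a as list, with a dict standing in for Map; note that itertools.groupby needs its input pre-sorted by key):

```python
from functools import reduce
from itertools import groupby

xs = [3, 1, 4, 1, 5, 9, 2, 6]

# filter :: (a -> Bool) -> Col a -> Col a
evens = [x for x in xs if x % 2 == 0]

# map :: (a -> b) -> Col a -> Col b
doubled = [x * 2 for x in xs]

# reduce :: (a -> a -> a) -> Col a -> a
total = reduce(lambda a, b: a + b, xs)

# flatten :: Col (Col a) -> Col a
flat = [x for sub in [[1, 2], [3]] for x in sub]

# groupBy :: (a -> b) -> Col a -> Map b (Col a)
# (sort by key first; groupby only merges *adjacent* equal keys)
key = lambda x: x % 2
grouped = {k: list(g) for k, g in groupby(sorted(xs, key=key), key=key)}

# flatMap :: (a -> Col b) -> Col a -> Col b
flat_mapped = [y for x in xs for y in [x, -x]]
```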

                                                                                1. 5

                                                                                  Learning that Vim shortcuts weren’t just randomly-chosen keys but mnemonic devices that stood for nouns and verbs was what got me over the hump for learning Vim years ago.

                                                                                  Now, I’m constantly saying those little sentences to myself in my head: “delete inside word”, “change inside parenthesis”, “yank twice-forward to the next period”, etc.

                                                                                  1. 2

                                                                                    My university department is largely a vim house (with the exception of our Emacs-loving department head), but despite the institutional affection I’ve never gotten too deeply into vim because of the opaque syntax. This has finally made it click for me. It’s amazing how such a simple change of conceptual context makes it so much easier to understand. This may be what I need to finally start using vim like something more than Notepad. (although I’m not sure if it can pull me away from Sublime Text now).

                                                                                  1. 2

                                                                                    Apparently, you can use the FSA alone to obtain a unique hash code for each item:

                                                                                    This is not necessary at all, since an acyclic finite state automaton can be used as a perfect hash function. If you sum the right language cardinalities of the transitions followed, you get a unique hash code (1..n, where n is the number of strings in the automaton). For efficiency, you can precompute the cardinalities in the transitions or states. You can then use a normal array for whatever data you want to associate with a word.

                                                                                    Source: danieldk
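A hedged sketch of that scheme (my own toy implementation, not from the linked source): I use a plain trie as the acyclic automaton (a minimized DAFSA works identically), precompute the cardinality of each node’s right language, and sum the cardinalities of the transitions that sort before the path taken:

```python
# Toy perfect hash over an acyclic automaton: each node stores how many
# words its right language accepts; the hash of a word is 1 + the number
# of accepted words that sort before it.

class Node:
    def __init__(self):
        self.children = {}   # label -> Node
        self.final = False
        self.count = 0       # words in this node's right language

def precount(n):
    n.count = (1 if n.final else 0) + sum(precount(c) for c in n.children.values())
    return n.count

def build(words):
    root = Node()
    for w in words:
        n = root
        for ch in w:
            n = n.children.setdefault(ch, Node())
        n.final = True
    precount(root)
    return root

def perfect_hash(root, word):
    """1-based index of `word` among the accepted words, in sorted order."""
    idx, n = 1, root
    for ch in word:
        if n.final:            # a shorter accepted word ends here; it sorts first
            idx += 1
        for label, child in n.children.items():
            if label < ch:     # whole subtrees of words that sort before ours
                idx += child.count
        n = n.children[ch]     # KeyError means the word isn't in the automaton
    assert n.final, "word not in automaton"
    return idx

root = build(["cat", "cats", "do", "dog"])
```

The returned codes are dense (1..n), so they can index a plain array of associated values, exactly as described above.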

                                                                                    1. 12

I use KeePass2/KeePassX (keepass.info).

                                                                                      It’s not the prettiest thing out there, but it’s free, works on Windows/Mac/Linux/Android/iOS, and encrypts everything locally so you can use whatever sync solution you like to keep the database up-to-date.

                                                                                      1. 5

I use KeePassX too on my Mac, with some satisfaction. Note that KeePass tries hard not to leave unencrypted passwords in memory or in swap. I expect the vim solution doesn’t offer the same guarantee.

                                                                                        1. 1

                                                                                          Yes, I was surprised he hadn’t seen Keepass as an option.