1. 48

    This is advocating that you always be a disposable commodity within a labor market. It’s a repackaging of the “free labour” idea from liberalism - that wage labour frees the worker to enter into any contract they please. But the reality of being an exchangeable commodity is rather different.

    1. 30

      You can still be indispensable through your unique contribution and areas of focus that others would not have pioneered. By making it easy for people to follow in your footsteps and take over from you, you are influential, you change the way things work, and people notice that. When it’s for the organization’s betterment they appreciate it too. :)

      I don’t want to be indispensable in the sense of a bus factor. I do want to be indispensable in the sense of “Wow, it’s a good thing /u/kevinc works here.”

      1. 16

        That’s perfectly reasonable, but for it to work, there has to be a company at the other end that needs, values, and can recognize innovation and unique contribution. All companies claim they do, because nobody wants to project a boring software sweatshop image, but realistically, many of them don’t. Only a pretty small fraction of today’s computer industry checks the “needs” box, and that still leaves two boxes to go. For many, if not most people in our field, making yourself indispensable in the sense of a bus factor is an unfortunate but perfectly valid – in fact, often the only – career choice that their geography and/or employment allows for.

        1. 9

          Well, technically we’re all bus-replaceable. Some of us have enough experience and/or goodwill built up in the company that if you actually do what the article proposes, you won’t be easily replaceable even after making yourself “replaceable”. It’ll either be too expensive for the company to find and train your replacement, or they’ll lose out on the value you’re bringing.

          What the article doesn’t mention, though, is that you can’t do any of that stuff if you’re a green junior dev. It’s easy to find a job when you’re good at it and you can prove it, but getting kicked out on the street back when I was still new to the industry would have scared me shitless.

          1. 1

            I agree you want to find a workplace that does value you, and even if you do find that, you have to watch for the organization changing out from under you. Just, on your way there, you can earn some great referrals by giving what you know instead of hoarding it.

            As an engineer, is it valid to make yourself a wrench in the works entrusted to you? I think no. But to your point, you’re a person first and an engineer second. If survival is on the line, it’s another story.

            1. 3

              Just, on your way there, you can earn some great referrals by giving what you know instead of hoarding it.

              I absolutely agree that it is invalid to make yourself a wrench in the works entrusted to you, but computer stuff is absolutely secondary to many companies out there.

              Note: I edited my comment because, in spite of my clever efforts at anonymising things, I’m preeeetty sure they can still be traced to the companies in question. I’ll just leave the gist of it: so far, my thing (documentation) has not earned me any referrals. It has, however, earned me several Very Serious Talks with managers, and HR got involved in one of them, too.

              I know, and continue to firmly believe (just like you, I think) that good work trumps everything else, but I did learn a valuable lesson (after several tries, of course): never underestimate good work’s potential to embarrass people, or to make things worse for a company that’s in the business of selling something other than good work.

        2. 8

          I think this is a bit unfair. I’ve worked with people who hid information and jealously guarded their position in a company, and it makes it harder to do your job. You have to dance around all sorts of politics, and all changes are viewed with suspicion. You have to learn what any given person is protecting in order to get what you need to do your job. You hear stories about people getting bribed to do their jobs. People won’t tell you how to do things, but will do them themselves so that they stay irreplaceable. People build systems with an eye toward capturing other parts of the organization.

          Most of that would go away if people did what was described in the article.

          1. 9

            Maybe if IT workers had a better way of protecting their job security – such as a union – there wouldn’t be the motivation to do this kind of thing.

            (Note: I don’t do this kind of thing, but I totally understand why someone would, and worker solidarity prevents me from criticizing them for it.)

            1. 2

              I don’t know if I agree with you in this specific case. It was at a place that never fired anyone. People who were not capable of doing their jobs were kept for years. It seemed to be predicated more on face-saving, inter-team rivalry, and competition for budget.

          2. 6

            Yes, I had the same thought as you. It’s true that “if you can’t be replaced, you can’t be promoted”, but since when are people promoted anymore? The outlook of this article is that job security is not something you can take for granted, but that upward (or at least lateral) mobility is. Maybe that’s true for highly-marketable (white, cis-male, young, able-bodied) developers in certain urban areas, but at my age, I wouldn’t want to count on it.

            1. 4

              Being a disposable commodity doesn’t necessarily imply low value. You can do something that is highly uniform and fungible, and also be well compensated, I think.

              1. 17

                you think wrong. Historically, “deskilling” (the term for when a worker’s role becomes standardized and easily replaceable) corresponds to salaries going down. This happens for a variety of reasons: you cannot complain, you cannot unionize easily, you cannot negotiate your salary. You get the money you get only because your employer has no means of finding somebody who can do exactly the same work for less pay. Once that becomes possible, and you don’t have rights that protect you (minimum wage, collective agreements, industry-wide agreements) or collective organizations that can protect you, salaries go down. Fighting deskilling is not necessarily the most efficient strategy and doesn’t have to be the only one, but giving up on it entirely is certainly no good.

                On top of that, deskilling is coupled with more alienation, less commitment and in general a much worse working experience, because you know you don’t make a difference. You become less human and more machine.

                Programming, I believe, naturally fights against deskilling, because what can be standardized and therefore automated will eventually be automated. But the industry is capable of generating new (often pointless) jobs on top of these new layers of automation of tasks that were previously done manually. Actively pursuing deskilling is also unreasonable from an efficiency point of view, because the same problem of “scale” is already solved by our own discipline. The same is not true for most other professions: a skilled factory worker cannot build the machine he’s using or improve it (with rare exceptions). A programmer can, and will if necessary. Deskilling means employing people who will only execute and not be able to control the process or the organization, leaving that privilege and responsibility to managers.

                1. 7

                  the article is not about deskilling, it’s about communicating your work to your peers. Those are very different things.

                  1. 8

                    it says explicitly to try to be disposable. Disposability and deskilling are equivalent. The term, in the labor context, is not just used to mean “this job should require less skill to do”. It’s used for any factor that makes you disposable or not, regardless of the level of skill involved. Clearly skill plays a relevant role in the vast majority of cases. What he’s advocating is to surrender any knowledge of the company, the platform and so on, so that you can be easily replaced by somebody who doesn’t have that knowledge. You’re supposed to put in extra effort deliberately (not at your boss’s request, and maybe even going against the company’s practices) to make this process more frictionless for your employer. That’s what the article is saying.

                    1. 3

                      it says explicitly to try to be disposable.

                      While it does say that, I think the actual meaning of the article is “make the work you do disposable”, not “make yourself disposable”. That way you can go through life making changes that make things easier for everyone around you and highly profitable for the company, so that while the work you’re currently doing can be done by anyone, the potential value you bring to each new thing you do is incalculable. So they’d keep you, of course.

                      1. 1

                        What he’s advocating is to surrender any knowledge of the company, the platform and so on, so that you can be easily replaced by somebody who doesn’t have that knowledge.

                        Are you suggesting that the replacement will not have that knowledge, or will at the moment of replacement have gained that knowledge?

                        Disposability and deskilling are equivalent.

                        This is not the case in my mental vocabulary, and I don’t think it is the case in the article linked. Disposability is about upskilling as a team, becoming more engaged in craft, and having a community of practice, so that the community doesn’t rely on a single member to continue to upskill/self-improve.

                    2. 1

                      While I agree that deskilling is a thing, it might be something that affects blue-collar workers on an assembly line more than IT professionals (to an extent). Replacing someone isn’t just firing the more expensive person and hiring a cheaper one. It involves onboarding and training, which may take several months, and that directly translates to lost earnings.

                      1. 1

                        It has happened to plenty of cognitive workers throughout the world. Deskilling is also replacing accountants, fraud analysts, and many other professions with ML models that live on the work of data labelers somewhere in Pakistan.

                1. 11

                  Can’t we just do all those good things, without getting into the mindset that we might quit or be fired at any time?

                  (I don’t think they’re all good things actually, but, in general most points are sensible.)

                  “this will make you a better engineer” - maybe, but I don’t think it’s very healthy.

                  1. 4

                    I don’t think they’re all good things actually

                    Which ones are not sitting right with you?

                    1. 11

                      Identify and train your replacement. In the same vein as training others, to switch roles you’ll need to replace yourself. Identify who that replacement might be and actively and continuously coach them.

                      I think it’s great to provide training to people around you, and (if they’re interested) then enable people to gain the skills that could be used to replace you, but actively targeting someone as a replacement and treating them that way all the time seems unhealthy.

                      Do not make yourself the point of contact. Establish mailing lists or other forms of communication that can accommodate other people, and then grow those groups. (The exception is when management needs names for accountability.)

                      In general I agree that it’s unwise to volunteer as the contact for too many things, and it’s preferable to use open channels of communication. But as a blanket rule I don’t think it’s good advice.

                      1. 5

                        I think it’s great to provide training to people around you, and (if they’re interested) then enable people to gain the skills that could be used to replace you, but actively targeting someone as a replacement and treating them that way all the time seems unhealthy.

                        I agree.

                        In general I agree that it’s unwise to volunteer as the contact for too many things, and it’s preferable to use open channels of communication. But as a blanket rule I don’t think it’s good advice.

                        When I read that, my first thought was: instead of having people come to me about the internal lib I’ve made, make a Slack channel for the lib’s support.

                        1. 7

                          When I read that, my first thought was: instead of having people come to me about the internal lib I’ve made, make a Slack channel for the lib’s support.

                          Yeah! This seems like a good idea. Keeping communication open and transparent has tons of other benefits: you’re not the single point of failure, other people can find out the context around a situation, it fosters a sense of community, people are less likely to become the de facto decision-maker, etc., etc.

                          It’s just that having a rule of “I won’t be the point of contact unless management specifically asks me to be” feels like a bad attitude to have.

                          1. 2

                            IMHO, if your management asks you to be the point of contact, they may not be good management. Volunteer to set up the group channels of communication, and have a good attitude about it! The problem with being a point of contact is that you can’t go on vacation, etc., without handing off the baton. Handing off the baton creates overhead for the team. Overhead creates drag, which makes the team less effective. I am in the middle of leaving my current position, and I try to follow most of the rules in the linked article. The things that are painful are the places where those rules were not followed. Also, it’s my understanding that if you ever have to do ISO 27001, it will help to have responsibilities delegated to roles instead of individual points-of-contact.

                  1. 1

                    Thanks for taking the time to write this post. I was surprised by some parts about you flying under the radar. I am more of a maintain-rails-apps person than a write-rails-apps person, but your name/handle is pretty familiar to me. The post makes it feel like Rails is a pretty divided community. I’ve kind of felt that way for a while, as it seems to me like Ruby OSS author-type folks often go on to write in other languages (Go, Elixir, etc.), and the users are more likely to file bugs than PRs, or just use some kind of local mixin. I feel like RoR is moving into the unpleasant part of the maturity spectrum, unfortunately. This is just my perspective. I really liked the NVC bits, and hope to use them going forward. Keep it up, you are doing the important work.

                    1. 14

                      This is pretty interesting, but I think the real test for “society-changing” applications of cryptography like this is whether normal people can use it without making mistakes. In particular, the interface for restaurant owners probably needs to be basically seamless – no restaurant owner wants to have to learn the details of public key cryptography; they want to make and serve food.

                      Also, it doesn’t look like there are any details on how deliveries are going to work? All I can see is a somewhat overengineered protocol for placing an order with a restaurant, for which a great decentralized system already exists: a landline phone. How would you prevent a delivery driver from jacking your food, or someone who claims to be a delivery driver from taking food?

                      1. 4

                        There are countless stories of people losing money because they’ve accidentally deleted their Bitcoin wallet key, or they’ve sent money to someone else and it turned out that they were a fraudulent entity. Much as I like decentralisation, centralised solutions have one big advantage: accountability. In the case of fraud, my bank can reverse transactions. If they do something illegal, they have a registered address that the police can visit and a load of recorded assets that can be seized. As a result of this, ‘know your customer’ legislation can be enforced, which makes it much easier to find the perpetrators of fraud and helps shift the liability towards banks that enable it. These things are really hard to replicate in a decentralised system.

                        These aggregators do provide a few bits of value to restaurants, only some of which are captured by this:

                        • They provide a single place to browse for a load of different things, which helps discovery. I’ve tried a bunch of takeaways via Deliveroo and Just Eat that I’d never have heard of otherwise.
                        • They allow restaurants to outsource delivery. Unless you’re doing a lot of delivery business, paying people to deliver for you can be expensive. If you have slack times, you’re paying them anyway, whereas outsourcing it means that other restaurants can take up some of that slack.
                        • They provide a reputation system. Deliveroo’s differentiating feature at launch was that they were selective in the restaurants that they’d sign up. If a restaurant gets too many bad reviews, they’re kicked off. Even Just Eat, which accepts pretty much anyone onto the platform, tracks reviews and knows that the person leaving the review actually ordered (and paid for) the food (and it was picked up by a delivery person who wasn’t affiliated with the restaurant), which makes it much harder to scam.
                        • They handle refunds. I had a pizza delivery person accelerate too hard on his scooter so that my pizza was smushed into one end of the box. Just Eat handled the refund immediately. Again, knowing that I won’t have problems with refunds increases my confidence and makes it much lower risk for a customer to try a new take-away.
                        • They handle all of the payments. Most restaurants can handle credit card payments in person, but doing so online requires more infrastructure. Outsourcing this reduces costs.

                        The big problems with these companies are that they’re abusive to their delivery workers (Deliveroo recently had an IPO and their share price tanked immediately, in large part because they’re expected to be taken to court soon and end up having to pay their riders more) and that they take a disproportionate cut of the price.

                        In a decentralised system, there are a bunch of other questions:

                        • How is the data handled privately?
                        • Who is liable in case of a GDPR violation (is it the individual restaurants who opt in?)
                        • How do I know a restaurant is legitimate / of decent quality?
                        • If it handles matching riders with restaurants, how does it comply with employment law?
                        1. 4

                          Accountability is just the flip side of power abuse. If you can prevent people from signing up, reverse transactions, delete their accounts without a trace, etc., you might as well do a good thing when someone asks you kindly.

                          Distributed and decentralized solutions are usually created to avoid giving any single party that kind of power, and this is one example of the downside of that position, but it is to a certain degree unavoidable – unless you compromise on the central principle of being distributed and/or decentralized.

                          That doesn’t mean it has to be all bad. Different approaches can be taken. Maybe you could have “trustworthiness indexes”, where some food critic you trust (or pay) publishes how good a restaurant is, comparable to block-lists for ad-blockers. Maybe you could have a gossip system where friends can recommend or advise against visiting a restaurant? It is difficult, but without a central authority there is no “definitive” knowledge. Then again, “real life” is also a distributed network of humans and their relations that suffers from the same problem.
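                          To make the trust-list idea concrete, here’s a toy Python sketch (every name and number below is made up, not from the project) of how a client could aggregate reputation from reviewers the user explicitly subscribes to, much like ad-blocker block-lists:

                          ```python
                          # Toy "trust list" reputation check: each user subscribes to reviewers
                          # they trust (like ad-blocker block-lists); a restaurant's standing is
                          # just the weighted aggregate of those subscribed opinions.
                          from typing import Dict

                          # Hypothetical published lists: reviewer -> {restaurant_id: score in [-1, 1]}
                          published_lists: Dict[str, Dict[str, float]] = {
                              "food_critic_a": {"pizza_palace": 0.9, "burger_barn": -0.5},
                              "friend_bob": {"pizza_palace": 0.7},
                          }

                          my_subscriptions = {"food_critic_a": 1.0, "friend_bob": 0.5}  # trust weights

                          def reputation(restaurant_id: str) -> float:
                              """Weighted average of scores from the reviewers I subscribe to."""
                              scores = [(weight, published_lists[reviewer][restaurant_id])
                                        for reviewer, weight in my_subscriptions.items()
                                        if restaurant_id in published_lists.get(reviewer, {})]
                              if not scores:
                                  return 0.0  # no trusted opinion -> unknown
                              return sum(w * s for w, s in scores) / sum(w for w, _ in scores)

                          print(reputation("pizza_palace"))  # ~0.83 with the toy data above
                          ```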

                        2. 1

                          “ …great decentralized system already exists: a landline phone…”

                          That does not offer automation or elasticity/scale. Not trying to trivialize your assertion, but I think automation is needed by any small business with razor-thin margins.

                          “…How would you prevent a delivery driver from jacking your food, or someone who claims to be a delivery driver from taking food?”

                          From the readme, they seem to rely on strong digital identity of each service provider:

                          “Identity creation for resource providers is made costly with a computational proof-of-work mechanism based on the partial preimage discovery first employed by Hashcash…. Free Food requires resource providers to supply a photographic proof of identity which includes a unique symbol which is mathematically bound to the proof-of-work associated with their public key.”

                          That does not prevent a verified provider from doing bad things, of course. But we can assume that the bad behavior, if it happens at all, would happen only once. So the outcomes would be no different than in centralized solutions.
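                          For illustration, a Hashcash-style partial preimage search boils down to something like the sketch below (the difficulty parameter and byte encoding are mine, not taken from the libfood README); the point is that identities are expensive to mint but cheap for anyone else to verify:

                          ```python
                          import hashlib
                          from itertools import count

                          def mine_identity(pubkey: bytes, difficulty_bits: int = 20) -> int:
                              """Find a nonce so that SHA-256(pubkey || nonce) has `difficulty_bits`
                              leading zero bits -- a partial preimage in the Hashcash sense."""
                              target = 1 << (256 - difficulty_bits)  # hashes below this value qualify
                              for nonce in count():
                                  digest = hashlib.sha256(pubkey + nonce.to_bytes(8, "big")).digest()
                                  if int.from_bytes(digest, "big") < target:
                                      return nonce  # expensive to find...

                          def verify_identity(pubkey: bytes, nonce: int, difficulty_bits: int = 20) -> bool:
                              """...but cheap for everyone else to check."""
                              digest = hashlib.sha256(pubkey + nonce.to_bytes(8, "big")).digest()
                              return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
                          ```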

                          “… cryptography like this is if normal people can use it without making mistakes…”

                          Evolution of technologies.

                          Personal hardware cryptowallets (that also incorporate password wallets) already exist. In the future they will perhaps incorporate more things (like health records, employment records, education records, asset records) – this will likely be a thing, assuming governments allow us to have digital personalization without centralization.

                          It should not be that difficult for this project to approach 2 or 3 hardware cryptowallet providers and ask them to allow their solutions to store the identity and authorization tokens for libfood-based systems. These, in a way, are tokens of trust, and have value across locations, countries, decentralized networks, etc. I hope more and more such things are done. As a person goes through their life, accumulating a good ‘verifiable resume’ is of enormous value (regardless of the industry).

                          Overall this service is looking for ways to decentralize (and therefore, if you accept the leap of faith, democratize?) the discovery and integrated delivery part of the restaurant business.

                          I can certainly see how this would work way beyond their modest goals, in many other businesses.

                          1. 1

                            Also, it doesn’t look like there are any details on how deliveries are going to work?

                            Lots of local restaurants have, or could hire and staff, drivers; pizza places have been doing it for decades. This gives the restaurant owner/manager more choice and accountability, in comparison to Grubhub-type services.

                            a landline phone

                            I personally love the landline phone, BUT, I think the benefit over a landline phone is pretty clear to anyone who has worked a rush hour (it’s been a long time, but I remember). In this case, you’re avoiding a lot of things:

                            If you have integrated payment processing (like he mentions), you’re avoiding transcription errors around card numbers. But most importantly, you’re saving time, and queueing happens in a different part of the system. When I call my local pizza place on a Friday night (on their landline), I sometimes can’t get through until the third or fourth try (it’s good pizza). Also, the system avoids mis-ordering or entry errors on the part of the order-taker. Finally, it gives the buyer an opportunity to double-check a placed order, or add impulse items ;). I think it’s interesting, and it’s just my perspective. Thoughts?

                          1. 2

                            Code should be signed with hardware security modules (HSMs). Period. An HSM capable of code signing can be had for under US$100. Even if the pipeline builds in the cloud, it is nearly trivial to have an HSM plugged into a rack somewhere and leverage it to do online code signing.

                            1. 3

                              That requires hiring rack space, whereas most organisations would prefer using VPS / cloud infra, as it is much more scalable and relocatable. I wonder if a TPM in a VPS would be an easier and better solution.

                              1. 2

                                Hiring a place in a rack can be done for under US$50/month. If you are signing code for substantial consumers/deployments, that cost will be vanishingly small. If you do not want to use any physical infrastructure, HSMs are also available in AWS, although the cost is substantial for an individual. For an organization like HashiCorp, the cost of an AWS HSM is also vanishingly small. A TPM would also definitely work, but this does require the key to be managed effectively in the clear on the internet, so some of the same exploit/supply-chain vector concerns apply if you’re using cloud vendors outside of the above offerings.
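                                For what it’s worth, driving a rack or cloud HSM from a build pipeline usually comes down to a single PKCS#11 call. Here’s a rough Python sketch using the python-pkcs11 package; the module path, token label, PIN, key label, and file names are all placeholders, not values from any particular product:

                                ```python
                                import pkcs11
                                from pkcs11 import KeyType, Mechanism, ObjectClass

                                # Placeholders: point these at your HSM's PKCS#11 module and token.
                                lib = pkcs11.lib("/usr/lib/your-hsm/libpkcs11.so")
                                token = lib.get_token(token_label="codesign")

                                with open("release.tar.gz", "rb") as f:
                                    artifact = f.read()

                                with token.open(user_pin="XXXX") as session:
                                    key = session.get_key(object_class=ObjectClass.PRIVATE_KEY,
                                                          key_type=KeyType.RSA,
                                                          label="release-signing-key")
                                    # The private key never leaves the HSM; only the signature comes back.
                                    signature = key.sign(artifact, mechanism=Mechanism.SHA256_RSA_PKCS)

                                with open("release.tar.gz.sig", "wb") as f:
                                    f.write(signature)
                                ```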

                                1. 1

                                  I wonder if a TPM in a VPS would be an easier and better solution.

                                  Indeed. A TPM is like a cheap, always-connected HSM. Additionally, with a little bit of effort one could seal the key to the system configuration to protect against booting from unsupported configurations (e.g. a live CD).

                                  Additionally, OpenPGP supports offline primary keys. If the signing subkey is compromised, it can be revoked and a new one created without affecting the primary key or the key fingerprint.

                              1. 2

                                Your posts don’t always get a lot of comments here on the lobster, but let me say your blog is my go-to for an overview of any kind of new database technology. You usually do a soup-to-nuts install with a consistent use case, and it’s all very clearly written. I wish I had the confidence/risk profile to go out on my own, as you mentioned in your comment a few months ago. Unfortunately, the US healthcare system is so broken that leaving the benefits-negotiating power of a larger business is an unpleasant and complicated hurdle in my view. Anyway, thanks for the quality content!

                                1. 1

                                  Thank you for the kind words.

                                  I can say there have been numerous times in my life where changing jobs and living conditions looked very risky, but looking back now, things turned out fine and did a lot to improve my quality of life.

                                  I’ve never lived in the US, only visited with travel insurance, so I can’t comment on healthcare being tied to an employer. But I do wish, for the sake of all my American friends, that those two things could be affordably decoupled.

                                1. 3

                                  I don’t understand what could possibly be a realistic solution to the hypothetical in the article. What could possibly be done, other than sharing off-peak servers with someone else, as is done in the cloud?

                                  1. 3

                                    I think the OP is talking about making the actual software faster. Basic performance things, but things that are primarily available via profiling and analysis. I think the retooling the article is talking about revolves around improving that efficiency, but it requires thinking things like “we want to know how efficient this is, so we can make it better”, rather than “the efficiency of this is X, so we need Y/X things, where Y is our objective performance level”.

                                  1. 25

                                    I have no doubt that the author of this article has done their research, but it’s really hard to follow because of what I perceive to be his personal animosity to the author of the original claim of Bitcoin’s environmental damage.

                                    However I must take issue with this statement, in the last paragraph:

                                    [Bitcoin’s] energy efficiency gets better every day.

                                    This is not how the proof of work idea functions. If a process is invented that calculates hashes more efficiently, the difficulty will adjust to ensure that the rate of block creation stays fixed (at one block every 10 minutes, in Bitcoin’s case). There is literally no way to mine bitcoin or any other PoW based cryptocurrency more efficiently, in the sense that the unit cost decreases with the application of a new process.
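                                    To make that concrete, here is a simplified sketch of the retargeting rule (the real implementation clamps the adjustment factor and works on the compact “bits” encoding, but the principle is the same):

                                    ```python
                                    # Simplified Bitcoin difficulty retarget: every 2016 blocks, difficulty is
                                    # rescaled so blocks keep arriving every ~10 minutes no matter how much
                                    # (or how efficient) the hash power is.
                                    TARGET_BLOCK_TIME = 10 * 60   # seconds
                                    RETARGET_INTERVAL = 2016      # blocks

                                    def retarget(old_difficulty: float, actual_timespan_s: float) -> float:
                                        expected_timespan_s = RETARGET_INTERVAL * TARGET_BLOCK_TIME
                                        # If miners got faster (actual < expected), difficulty rises by the
                                        # same factor, cancelling out any per-hash efficiency gain.
                                        return old_difficulty * expected_timespan_s / actual_timespan_s
                                    ```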

                                    1. 7

                                      This point is consistently left out of these discussions and it annoys me no end. Thanks for bringing it up.

                                      1. 2

                                        Doesn’t that mean the energy usage can stay more or less constant at some point?

                                        1. 6

                                          If the price is stable, yes. If there’s money to be made by mining, expect people to pour energy into it until it stops being profitable.

                                          1. 2

                                            Not necessarily; the energy usage is determined by transaction volume * compute cost, roughly speaking. The compute cost may stay roughly the same, but volume won’t.

                                        2. 0

                                          There is literally no way to mine bitcoin or any other PoW based cryptocurrency more efficiently, in the sense that the unit cost decreases with the application of a new process.

                                          My economics are rusty, and to be honest I couldn’t get through the angry tone of the linked article, but I think that may not be entirely true. Let me explain why with an analogy, which may be flawed: the cash for clunkers program. The idea is to make the market undesirable for inefficient systems to operate in. Suppose a new process is discovered, but not made widely available. With the Bitcoin network, this seems possible. Now, suppose the new actors are able to drive up the hashrate considerably using the new process. Other, less efficient systems become less profitable to operate, and are turned off. The PoW retargets, and the difficulty goes up. But the system as a whole is more efficient. He mentions falling exchange rates and competing cryptocurrencies. If you add all of these factors together, total system power consumption could legitimately go down. We think of this market as being relatively efficient, but I don’t think that’s accurate. Barriers to entry and complex political elements could come into play and result in a more efficient Bitcoin, irrespective of the retargeted PoW.

                                          1. 2

                                            There are different ways to view efficiency, and it’s possible the author of the linked post means the view you have outlined.

                                            I personally view efficiency in this way: Bitcoin’s job is to provide transactions between addresses. As it stands, Bitcoin is fundamentally constrained by the number of blocks generated per time unit, and the space allotted within each block for transactions [1].

                                            There is no way to get more transactions per second by using an improved method (faster hardware or smarter algorithms). This is by design.

                                            This is a design choice that makes sense considering Bitcoin’s goal of decentralization. Limiting efficiency is one of the ways of preventing an entity from amassing enough hash power to take control of the network.
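                                            As a back-of-the-envelope illustration of that ceiling (the ~1 MB block size and ~250-byte average transaction are rough figures of mine, not from the article):

                                            ```python
                                            # Rough upper bound on Bitcoin throughput from block size and interval.
                                            block_size_bytes = 1_000_000   # ~1 MB base block size
                                            avg_tx_size_bytes = 250        # rough average; varies by transaction type
                                            block_interval_s = 600         # one block every ~10 minutes

                                            tx_per_block = block_size_bytes / avg_tx_size_bytes   # ~4000
                                            tx_per_second = tx_per_block / block_interval_s       # ~6.7
                                            print(f"~{tx_per_second:.1f} transactions per second")
                                            ```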

                                            [1] An obvious way to alleviate this issue is to increase the size of the block - and this is the approach taken by the alternative chains Bitcoin Cash and Bitcoin SV. But each block must have a chance to propagate through enough of the network, so there is a practical size limit based on the combination of block size and average network speed.

                                            1. 1

                                              I think it is pretty clear that the article is about the impact of bitcoin’s power usage on the environment, so treating transactions per second as the efficiency the article is talking about seems misguided. I think the article is talking about transactions per watt, which can actually change significantly with different methods and as the bitcoin community grows or shrinks.