1. 24

  2. 9

    It’s a good read and I’m upvoting it, but I deeply disagree with the author’s standpoint.

    The complaint about Hamilton being snubbed seems grounded mainly in her later work on the Apollo program; at the time, she was just another student sharing the resources. The author seems to hint that this was some sort of specific slighting of her by the hackers involved, or at the very least that their actions were taken to snub or damage others, women in particular–a view firmed up by later complaints that not enough women are represented in the book. Frustration at the one shouldn’t be cause for dismissal or critique of the other.

    The author’s continual raising of points like…

    If the hacker ethic is good, how could it have produced this clearly unfair and horrible and angry-making situation?

    …is really annoying. One might as well say “If cookie recipes are good, why are so many people dying of diabetes in the United States?” Though the concepts are linked, there’s a lot more going on.

    The four applications of the Hacker’s Ethic that the author makes in light of this situation are all kind of specious. Perhaps the most galling to me is the “mistrust of authority” thing: it’s pretty clear that there is some difference between mistrusting centralized authority and the issues raised in stepping on each other’s toes…especially since history has borne out basically every concern about centralized authority in IT.

    The Unicode example is just kinda rushed through to the effect of “See, see! Suppressing culture!”, without regard to the technical reasoning that led to the Han Unification or the fact that the countries concerned are all involved in the process. A better example would’ve been, say, Unicode’s support for Linear A but not certain living languages today–but that’s not what the author chose to go with, instead making a shaky argument to paint those mean old hackers as biased jerks without concern for others.
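    For what it’s worth, the technical effect of Han unification is easy to demonstrate: a unified ideograph is one shared code point across Chinese, Japanese, and Korean text, with the language-appropriate glyph shape left to the font rather than the encoding. A minimal Python sketch (the specific character is just an illustrative pick):

    ```python
    # Han unification in practice: one code point serves Chinese, Japanese,
    # and Korean text; the per-language glyph shape comes from the font,
    # not from the encoding.
    import unicodedata

    char = "\u9aa8"  # 骨 ("bone"), drawn differently in SC vs. JP fonts
    print(f"U+{ord(char):04X}")    # U+9AA8, regardless of language
    print(unicodedata.name(char))  # CJK UNIFIED IDEOGRAPH-9AA8
    ```

    The disunification argument in the replies below amounts to giving each language its own copy of such characters instead of sharing one code point.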

    Finally, there’s the matter of the proposed new code of ethics. The author brings in some really interesting questions that are at once answered by what they replace and yet still fail to capture the intent of the originals.

    Instead of saying access to computers should be unlimited and total, we should ask “Who gets to use what I make? Who am I leaving out? How does what I make facilitate or hinder access?”

    Those questions are all answered by the original version: everyone should have complete access to machines. That position answers the questions with “everyone, nobody, and no purposeful hindering is allowed”. At the same time, those questions fail to make a normative statement, and fail to make any guarantees or offer guidelines beyond letting us feel like we’ve considered the matter fully as we go about doing the wrong things.

    Instead of saying all information should be free, we could ask “What data am I using? Whose labor produced it and what biases and assumptions are built into it? Why choose this particular phenomenon for digitization or transcription? And what do the data leave out?”

    Again, the author raises questions without providing guidance toward the correct answers. Here more than before, though, the author manages to pick issues that are orthogonal to the core tenet of “all information should be free and open for all to access”, instead opting for somewhat unrelated questions about audience and storage fidelity.

    Instead of saying mistrust authority, promote decentralization, we should ask “What systems of authority am I enacting through what I make? What systems of support do I rely on? How does what I make support other people?”

    Good questions, but a bit off the track of “hey, centralized systems are to be avoided”. Even answering those questions can still lead us to do terrible things.

    And instead of saying hackers should be judged by their hacking, not bogus criteria such as degrees, age, race, or position, we should ask “What kind of community am I assuming? What community do I invite through what I make? How are my own personal values reflected in what I make?”

    This is maybe the worst–the original oversight (such as it was) in the rule was not being sufficiently inclusive re: gender/sex, but beyond that the message is clear and useful: judge based on the accomplishment, not on the background of the person who accomplished it. These questions fail miserably to provide that basic guidance, instead opting for questions that, while interesting and useful, are no substitute for an ethos.

    1. 3

      Some issues are really easy targets unless you delve deeper, and even in the Han case one could argue that the unified CJK characters should be split back out into per-language code points now that we can store more data.

      I saw a video once, maybe a TED talk, where the speaker was commenting on white Western men calling the shots, because she couldn’t represent her name accurately on a computer.

      I brought this up as an obvious problem with a linguist, and she pretty much brushed it off, like she’d heard it before, and said that the shots were called based on the information available at the time, so white Western men were not to blame.

      And yeah, I was waiting for something more concrete; sample answers for example.

      The one advantage tech has to offer is answers and solutions, even if they aren’t perfect, because they can be improved upon.

    2. 4

      There’s a shift of perspective this talk assumes that I think deserves more attention. The questions the author poses at the end all assume the perspective of “you’re a craftsman and your primary concern should be the effect your craft has on the world.” This isn’t what the hacker ethos is. The hacker ethos is about curiosity, and it’s about shaping the computer into what you want it to be. Making something widely used, popular, or all-inclusive is important if you’re trying to sell or distribute your work, but it’s absolutely not intrinsic to what “hacker” means. When I write code off the company’s dime, my concern isn’t what the rest of the world will think about my design decisions; it’s what I want to accomplish. This is the same POV that led MIT hackers to use Lisp: other, less flexible languages would have seen wider adoption of their work, but that wasn’t the point. This is the author’s fundamental misunderstanding: hacking is an individual pursuit, so of course looking at it through the lens of community and social causes leaves something to be desired. That isn’t what it’s for.

      Relatedly, I cringed when she said hacker culture “sort of evolved into the tech industry.” I’m sure much of the tech industry would like to think so, but I see tech company attitudes as a perversion of hacker culture rather than an extension of it - the Silicon Valley ethos is just capitalism relieved of the burden of having to work to expand your business (as in: a company like Slack can handle millions of customers with essentially zero capital investment, which is what allows tech companies to expand and pop up so easily). This is why ’70s hacker culture flourished primarily at universities, not companies.

      Unrelatedly, the author blithely writing off human systems as “unknowable” rubbed me the wrong way. A lot of things have been popularly considered unknowable in the past (nature of the stars, genetics, etc), and the track record of these predictions isn’t great. That sort of defeatism just discourages people from actually trying to solve problems.

      1. 1

        There is indeed a shift of perspective in the talk that I think needs to be highlighted. However, I think it is not from the perspective of “you’re a craftsman and your primary concern should be the effect your craft has on the world” but more from the perspective of “you are a social being and your actions will have an effect on the world; it would be good if your ethics reflected that”. No person alive lives in a vacuum, and respecting that fact when choosing one’s actions, so as to have a net-positive impact on one’s surroundings, is valuable but very hard indeed. The questions the author posed seem to me to be intended as reformulations of the tenets of the hacker ethic, designed to clarify whether and how to apply the ethic to your actions. Unfortunately for us, the questions only point to there being a way of reconsidering the ethic, thereby possibly improving it, but do nothing to actually guide one to such a possibly better ethic, which is ostensibly what the talk is about. Furthermore, the talk does not make a truly convincing case that the ethic itself is flawed.

        Ultimately, the talk is valuable in my opinion because it invites one to reconsider one’s biases, and, to me at least, it shed new light on a familiar set of topics.

      2. 3

        This was fantastic, thank you.

        So, the hubris inherent in the Hands-On Imperative is what convinced Stewart Nelson that his modification of the PDP-1 would have no repercussions. He believed himself to have a perfect understanding of the PDP-1, and failed to consider that it had other uses and other affordances outside of his own expectations. The Hands-On Imperative in some sense encourages an attitude in which a system is seen as just the sum of its parts. The surrounding context, whether technological or social, is never taken into account.

        But that’s just a small example of this philosophy in action.

        1. 2

          I’d say it’s a very small example, because it’s one mess-up. Saying the context is never taken into account simply isn’t true.

          It may also sometimes be worth breaking an API on purpose, and I saw nothing here about how to communicate that. The PDP-1 example is a case for better communication, one that’s also a bit dated now that computer hardware isn’t as hackable, nor so precious that it has to be cordoned off.