1. 5

    This fun macro has been in Arc Lisp from the start under the name accum, and I find it extremely useful. For example, I recently added support for list comprehensions using it: http://akkartik.name/post/list-comprehensions-in-anarki

    One difference between the two: gathering uses a hardcoded function name gather, whereas accum takes a first arg that names the function you call to ‘accumulate’ each new item. Often we use acc, so calls begin (accum acc ...).
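
    To make the difference concrete, here’s a rough Python sketch (hypothetical names, not Arc) of the collecting behavior:

```python
# Hypothetical Python rendering of accum's behavior (Arc's accum is a
# macro; this sketch only shows the collecting semantics).
def accum(body):
    """Call body with an accumulator function; return the items collected."""
    items = []
    body(items.append)  # the 'acc' the body calls is just list.append
    return items

def collect_squares(acc):
    for x in (1, 2, 3):
        acc(x * x)

squares = accum(collect_squares)  # [1, 4, 9]
```

    Arc’s accum is a macro, so the body goes inline as (accum acc ...) rather than being passed as a function, but the collecting semantics are the same.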

    1.  

      Interesting post about list comprehensions!

      It is common to use collect or with-collector in Lisp. The named variant is also common.

      https://github.com/jscl-project/jscl/blob/master/src/utils.lisp

      There is even a variant to collect into multiple queues, in which case one has to provide a name for each collector.

      1.  

        Serapeum has a version called collect too. I didn’t call it collect because that conflicts with iterate’s collect clause, and I tend to use iterate pretty heavily in my personal stuff.

        1.  

          Thanks for that pointer! I wasn’t aware, but in my toy lisp I called the pair collect and yield. Glad I was on the right track!

      1. 3

        This is great for comprehension of individual functions, but it’d be great to have a strategy that scales up to creating a mental model of the architecture of a program containing thousands of lines of code.

        1. 1

          One thing I’ve been trying with my students is binary search to insert prints. Still a work in progress; requires lots of hand holding. They’re like 12.
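
          A toy mechanization of the same idea, assuming the failure is monotone (once the invariant breaks it stays broken), which is also what makes bisecting with prints work; all names here are hypothetical:

```python
# Toy mechanization of print-bisection (all names hypothetical). Assumes a
# monotone failure: once the invariant breaks, it stays broken -- the same
# assumption that makes bisecting with print statements work.
def first_bad_step(steps, state, is_ok):
    """Return the index of the first step that breaks the invariant."""
    lo, hi = 0, len(steps)  # is_ok holds after lo steps, fails after hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        s = state
        for step in steps[:mid]:  # replay the first mid steps
            s = step(s)
        if is_ok(s):
            lo = mid  # still fine here; the bug is later
        else:
            hi = mid  # already broken; the bug is at or before mid
    return hi - 1

# Five steps; the fourth (index 3) clobbers the state.
steps = [lambda s: s + 1] * 3 + [lambda s: -100] + [lambda s: s + 1]
bad = first_bad_step(steps, 0, lambda s: s >= 0)  # -> 3
```

          The payoff is the same as with hand-inserted prints: about log2(n) probes instead of n.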

        1. 4

          Summary: Spending 5–10 minutes teaching a strategy to read code can lead to improved reading performance, helping prevent low-performers from becoming overwhelmed and giving up.

          This is a good technique to cover, especially because for most active programmers it is so internalised that they only tell novices to do it but don’t teach it.

          1. 1

            Thank you! This is certainly useful and relates to goal #2 above.

            1. 1

              Thanks, I totally missed this when it first appeared.

              1. 1

                Thank you for looking up quobit’s post on Lobsters, via which I, too, had found it at the time. And thank you, quobit, for sharing the article on Lobsters.

            1. 12

              I think I’ve read this paper a half dozen times now after seeing someone or other wax lyrical about it online. And I just don’t get why people like it so much. Which means either I’m too much of an insider and this is old news, or I don’t appreciate what I don’t know.

              Part of the problem is that it seems to overstate its contributions. It isn’t really “identifying” causes of complexity. Fred Brooks pointed out essential vs accidental complexity back in 1975, and there’s been a steady stream of articulation ever since. To any Haskeller the point that complexity comes from state is old news. Relational algebra is ancient. At best OP is laying things out in a new way.


              This time I figured I’d speed up my rereading time by instead chasing down past threads. And I discovered a decent summary. This was helpful, because it helped me to focus on the ‘causes of complexity’ portion of OP.

              Causes of complexity according to OP:

              • State (difficulty of enumerating states)
              • Control (difficulty of choosing an ordering)
              • Concurrency
              • Volume
              • Duplicated/dead code, unnecessary abstraction

              Compare the list I made a couple of years ago:

              • Compatibility. Forcing ourselves to continue supporting bad ideas.
              • Vestigial features. Continuing to support stuff long after it’s been obsoleted by circumstances. Simply because it’s hard to detect obsolescence.
              • People churn. Losing institutional fingerspitzengefühl about the most elegant place to make a given change. Or knowing when some kludge has become obsolete.

              Comparing these two lists, it looks like there’s a tension between the top-down and bottom-up views of software management. In the bottom-up view people seem to think about software like physics, trying to gain insight about a system by studying the atoms and forces between atoms. You tend to divide complexity into state and order, essential and accidental. Reductionism is the occupational hazard.

              In my top-down view I tend to focus on the life cycle of software. The fact that software gets more complex over time, in a very tangible way. If we could avoid monotonically adding complexity over time, life would be much better. Regardless of how many zeroes the state count has. In this worldview, I tend to focus on the stream of changes flowing into a codebase over time, alongside the stream of changes happening to its environment. This view naturally leads me to categorize complexity based on its source. Is it coming from new feature requirements, or changes to the operating environment? How can I keep my raft slender as I skim the phase transition boundary between streams?

              The blind spot of the bottom-up view is that it tends to end up at unrealistic idealizations (spherical cows as @minimax put it in this thread). The blind spot of the top-down view is that there’s a tendency to under-estimate the complexity of even small systems. Blub. The meme of the dog saying “this is fine” while surrounded by flames.

              It seems worth keeping both sides in mind. In my experience the top-down perspective doesn’t get articulated as often, and remains under-appreciated.

              1. 5

                Here’s my take on it: https://news.ycombinator.com/item?id=15776629

                I also don’t think it’s a great paper. It’s long on ideas but short on evidence, experience, and examples. I don’t think you’re missing anything.

                1. 4

                  I have a similar take to yours. I think it’s one of those papers that is easy to get excited about, and everyone can agree that complexity is bad and all that. But I have not seen any successful application of the ideas in there. The authors haven’t even successfully implemented the ideas beyond a little prototype, so we don’t have any idea whether what they say actually pans out.

                  And to toss my unscientific hat into the ring: IME the biggest source of complexity is not the programming model but people just not being disciplined about how they implement things. For example, I’m currently working in a code base where the same thing is implemented 3 times, each differently, for no obvious reason. On top of that, the same thing is sometimes an id; sometimes the id is a string and sometimes an int, and sometimes the string is a URL, and it’s never clear when or why. This paper is not going to help with that.

                  1. 2

                    If what you say is true, then the success of LAMP stacks, with their associated ecosystems for newcomers and veterans alike, might rank highly on “evidence, experience, and examples.” That architecture worked for all kinds of situations, even with time and money limitations. Except that the specific implementation introduces lots of the complexities they’d want people to avoid. So maybe instead the Haskell answer to a LAMP-style stack, or something like that, fitting their complexity-reduction ideas.

                    Although the others shot it down as unrealistic, your characterization seems to open doors for ways to prove or refute their ideas with mainstream stuff done in a new way. Maybe that’s what they themselves should have done, or what they or others could do later.

                    1. 4

                      Yes, so depending on the scope of their claims, it’s either trivial and doesn’t acknowledge the state of the art, or it’s making claims without evidence.

                      Appreciating LAMP is perhaps nontrivial. Google services traditionally used “NoSQL” for reasons of scaling, but the relatively recent development of Spanner makes their architecture look more like LAMP.

                      But either way I don’t think that LAMP can be “proven” or “refuted” using their methodology. It’s too far removed from practice.

                  2. 4

                    In my top-down view I tend to focus on the life cycle of software. The fact that software gets more complex over time, in a very tangible way. If we could avoid monotonically adding complexity over time, life would be much better.

                    Thanks for the interesting commentary. Some parts definitely resonated, particularly about the half-features and difficulty of knowing how and where to make the right change.

                    This is only the germ of an idea, but it is perhaps novel and perhaps there is an approach by analogy with forest management. Periodic and sufficiently frequent fires keep the brush under control but don’t get so big that they threaten the major trees or cause other problems.

                    Could there be a way of developing software where we don’t look at what is there and try to simplify/remove/refactor, but instead periodically open up an empty new repo and move into it the stuff we want from our working system in order to build a replacement? The big ‘trees’ of well understood core functionality are most easily moved and survive the burn, but the old crufty coupling doesn’t make the cut.

                    Some gating on what would be moved would be needed. The general idea, though, is that only sufficiently-well-understood code would make it across to the new repo, and perhaps sufficiently reliable/congealed black boxes. It would interplay a lot with the particular language’s module/package and testing systems.

                    The cost would be periodic re-development of some features (with associated time costs and instability). The benefit would be the loss of those code areas which accrete complexity.

                    1. 2

                      Yes, definitely worth trying. The caveat is that it may be easy to fall into the trap of a full rewrite. There’s a lot of wisdom encoded in the dictum to avoid rewrites. So the question becomes: how would you make sure you don’t leave behind the most time-consuming bugfixes you had to make in production on the old repo? Those one-liners that took a week to write?

                    2. 3

                      This paper was written in 2006, two years before Applicatives were introduced. The Haskell community’s understanding of how best to structure programs has been refined a lot in that time, and I think you underestimate the insights of this paper even if it is only a refinement of Brooks’s ideas from 40 years ago.

                      1. 1

                        Thanks, I hadn’t seen that paper. What made you cite it in particular?

                        1. 1

                          It’s where Applicatives were introduced, as far as I know.

                          1. 7

                            Can you describe the connection you’re making between Applicatives and this paper?

                            1. 1

                              I got the impression that akkartik was saying that OOTP hadn’t added much new. My claim is that the fact that Applicatives were only introduced 10 years ago shows that the bar for novelty is actually quite low.

                      2. 1

                        “This was helpful, because it helped me to focus on the ‘causes of complexity’ portion of OP.”

                        That’s the part I read in detail. I skimmed the second half, saving it for later since it was big. The first half I liked because it presented all those concepts you listed in one location in an approachable way. It seems like I learned that stuff in pieces from different sub-fields, paradigms, levels of formality, etc. Seeing it all in one paper, published ahead of some of these things going into mainstream languages, was impressive. It might have utility for introducing new programmers to these fundamental concepts, if nothing else, I thought. Understanding it doesn’t require a functional programming or math background.

                        Ok, so that’s their requirements. Minimax mentions things like business factors. You mention changes with their motivations. I’ll add social factors, which include things that are arbitrary and random. I don’t think these necessarily refute the idea of where the technical complexity comes from. It might refute their solutions for use in the real world, such as business. However, each piece of the first half is already getting adoption in better forms in the business world on a small scale. There are also always people doing their own thing in greenfield projects, trying unconventional methods. So, there are at least two ways their thinking might be useful. From there, I’d have to look at the specifics, which I haven’t gotten to.

                        I do thank you for the Reddit search given those are about the only discussions I’m seeing on this outside here. dylund’s comments on Forth were more interesting than this work. Seemed like they were overselling the positives while downplaying negatives, too, though.

                      1. 11

                        A good read, for sure. And some good ideas. But the authors only focus on technical factors, as though software were developed exclusively by programmers for their own purposes. They don’t address, for example, Conway’s Law or any other sources of complexity which don’t originate in the development process itself. They talk about formalizing requirements, but not where the requirements come from or how they got to be the way they are, or how they change over the course of development.

                        It’s certainly easier to frame the issue as being about technical problems and technical solutions. And there’s certainly plenty to talk about in that frame. But technological determinism by itself usually doesn’t have much predictive or explanatory power, which is why these kind of accounts have largely been abandoned by professional historians and sociologists who study technology. Even amateur software historians (who are doing most of the work!) typically point to business, marketing, or economic factors as being decisive influences in the development of the technologies they document.

                        Take your favorite “radically simple” system: say APL, or Forth, or Oberon, or Smalltalk or whatever. Step away from the shiny stuff and look at the people and organizations involved: who actually developed it, who paid for it, who used it and what they used it for. Then do the same for whatever “typically complex” web-app or C++ game or Free Software OS or government payroll system or whatever. The differences may be instructive.

                        1. 7

                          I’ll start. The big difference between simple systems and typically complex web apps is scale. Small codebases can do more with fewer team members. They become more likely to have better programmers. They suffer less from knowledge evaporation due to people leaving. They hire less, and so they tend to have fewer layers of organization. This keeps Conway’s Law from kicking in. The extrinsic motivational factors of money, raises and promotion don’t swamp intrinsic motivation as much.

                          I’ve gained a lot of appreciation over the past decade for the difference between technical and social problems. But in this instance the best solution for the social problem seems to be a technical solution: do more with less (people, code, concerns, etc., etc.). It doesn’t matter what weird thing you have to do to keep things cosy. Maybe you decide to type a lot of parentheses. Or you stop naming your variables (Forth). Or you give up text files (Smalltalk).

                          Once you have a simple system, the challenge becomes to keep the scale small over time, and in spite of adoption. I think[1] that’s where Lisp ‘failed’; of all your examples Lisp is the only one to have tasted a reasonable amount of adoption (for a short period). It reacted to it by growing a big tent. People tend to blame the fragmentation of Lisp dialects. I think that was fine. What killed Lisp was the later attempt to unify all the dialects into a single standard/community. To allow all existing Lisp programs to interoperate with each other. Without modification. Lisp is super easy to modify, why would you fear asking people to modify Lisp code?

                          Perhaps the original mistake was the name Lisp itself. Different Lisp dialects can differ as greatly as imperative languages. Why use a common name when all you share is s-expressions?

                          A certain amount of curmudgeonly misanthropism in a community can be a very good thing.

                          [1] I’m just a young whippersnapper coming in after the fact with my half-assed pontificating, etc., etc. I don’t mean to side-track a discussion of complexity with Yet Another Flamewar About Lisp. (Though I’d appreciate any history lessons!)

                          1. 5

                            We now examine a simple example FRP system. […] To keep things simple, this system operates under some restrictions:

                            1. Sales only — no rentals / lettings
                            2. People only have one home, and the owners reside at the property they are selling
                            3. Rooms are perfectly rectangular
                            4. Offer acceptance is binding (ie an accepted offer constitutes a sale)

                            This kind of toy example makes their observations on software complexity in general harder to take seriously. It reminds me of the hoary genre of “spherical cow” jokes. All of those simplifying assumptions (and no doubt plenty more unstated ones!) make their example system more or less completely useless to an actual real estate business.

                            1. 1

                              I agree. Especially on Nos. 2–4, since they represent situations that either don’t map cleanly to a neat model or just ignore the corner cases that real systems can’t ignore. The models always need to be tested with the ugly requirements on top of the easy ones.

                          1. -1

                            Repetitive, irrelevant, and … pointless.

                            Much of the man page corpus is just plain wrong. Many changed the code and never bothered to change the documentation. One can easily get misled.

                            UNIX/POSIX … is getting massively “bit-rotted” in its old age. Time for different metaphors, possibly maintained by ML to keep them effective and relevant?

                            1. 13

                              Do you have any examples of outdated manpages? Your comment is awfully vague.

                              1. 5

                                I run across examples semi-regularly, and try to report upstream when I find them (some upstreams are easier to report to than others). Mostly I’m pretty happy with manual pages, though.

                                Just recently, I noticed that pngcrush(1) on Debian is missing the newish -ow option. Upstream doesn’t ship a manpage at all, so Debian wrote one, but it doesn’t stay in sync automatically. Therefore I should probably report this to Debian. Upstream does ship a fairly complete pngcrush -v help text though, so I wonder if the manpage could be auto-synced with that.

                                I’m pretty sure I’ve reported a bunch of others in the past, but the only example that comes to mind at the moment is that privileges(5) on illumos used to be years out of date, missing some newer privileges, but it was fixed fairly quickly after I reported a bug.

                              2. 1

                                I really want to see documentation generated via machine learning systems. I wouldn’t want to use that documentation for anything, but I’d like to see it.

                              1. 2

                                @nickpsecurity, very interesting thesis. The caveats are on page 67:

                                • No mutation
                                • No polymorphism
                                • No separate compilation
                                • No mutual recursion
                                • No higher-order functions
                                1. 1

                                  I think mutation and separate compilation are the only ones that would be potential problems for most coders in a systems setting. The mutation could be isolated with a different mechanism to check it. The separate compilation might be handled with a special linker. I’ve always thought a better linker was necessary anyway to prevent linker errors. Still, it’s extra work for anyone building on these techniques.

                                  1. 2

                                    Mutation is addressed later in the thesis (chapter 6) in a refinement of the original model. Seems to make the analysis much more complex, particularly if you also want polymorphism.

                                    1. 1

                                      The extra complexity seems to be a general problem we just covered here. ;) Of course, there’s more research all the time into automated methods for separation logic and other methods for handling that stuff. So, the question becomes whether one could mix that with the models having trouble with mutation. I didn’t dig into the paper enough to attempt to answer that.

                                1. 5

                                  Interesting idea. But you’re missing a couple of major drawbacks:

                                  1. Software is usually a liability rather than an asset. It can increase your attack surface. It can be a time sink to maintain over time. It can not quite do what you want and bleed energy. It can be actively malicious, insidiously stealing your data or bitcoin. (I’m seeing a lot more awareness of these issues after the security vulnerabilities and Cambridge Analytica of the last few years. For example, this article can be distilled down to, “stop using software to trust people, software will never be trustworthy enough.” Software’s benefits are usually residents of Mediocristan, but its drawbacks often live in Extremistan.)

                                  2. Dependencies matter. You list as an advantage that “Unless the language or library used is part of the requirements, programmers proficient in different languages can still trade.” But you’re reading that fact from exactly the wrong direction. Since anything I’m unable to get running is by definition useless, and since installing languages and libraries is non-trivial in the most general case, platform dependencies will grow to become an essential part of requirements. An obvious example: any programs I distribute are useless to someone running Windows. Another example: If I don’t want to install Java, you either don’t use Java, or you need to come up with a way to distribute your program with Java in some sandboxed fashion. (We all know how successful such attempts have been.)

                                  With these issues in mind, I have two suggestions.

                                  Suggestion #1: Reduce the unit of exchange. If people exchanged programs that could be built in an hour or two rather than a week or two, the programs would be more obviously right, more generally useful, easier to specify and to detect duplicates of, would probably rely on fewer dependencies, would carry smaller risks for the author, etc., etc.

                                  If the expected project size were to go down I can contribute a couple of small scripts I’ve built that I find very useful:

                                  • search: looks in a unix-like directory tree for files containing all of a list of terms. Basically a search-engine-like interface atop grep. Useful for maildirs, but since I started using it I’ve also started structuring other data sources as directories with one file per unit.
                                  • accumulate: an add-on for search above that allows me to try multiple searches when looking for something specific, without getting duplicate results on successive searches.
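
                                  The comment doesn’t include the source, so purely as a guess at the shape of such a script, here is a minimal sketch of search (AND of all terms over a directory tree):

```python
# Guessed sketch of a 'search' script: report files under a directory tree
# that contain ALL of the given terms (an AND over grep-style matches).
import os

def search(root, terms):
    """Yield paths of files whose contents contain every term."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue  # unreadable file; skip it
            if all(term in text for term in terms):
                yield path
```

                                  An accumulate-style add-on could then just persist a set of already-reported paths between runs and filter them out.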

                                  (Hmm, as I look at these I notice a couple more drawbacks to your idea: 3. Many programs are demand-creating rather than demand-satisfying. People may not know they want them until they see them in action. 4. Programs often work in systems. Unix pipes. Or operating on a directory of files with one file per unit, as I do above. Or pipes operating on Yaml. Individual programs in these sets aren’t useful without the right context and environment.)

                                  Suggestion #2: Cater to a more specific kind of person. I can imagine at least two axes to segment programmers by: time rich vs time poor and money rich vs money poor. So for example someone money rich but time poor and someone time rich but money poor would end up transacting the conventional job for money. Your current approach has benefits along both time and money axes, but the cost is the drawbacks enumerated above. For example, I’m not particularly time rich, but I know that a program I write will do just what I want, nothing I don’t want, be fairly parsimonious of dependencies, and set at least some sort of lower bound for secure practices. So if I have an idea for a program it’s better to wait and build it myself rather than try to optimize too hard for time. But maybe there’s some way to mitigate the drawbacks if you focus on just one of the quadrants?

                                  1. 2

                                    Thanks for your thought on this.

                                    I think you are mainly thinking of software that will be used on servers open to the internet, whereas when I wrote this I was mainly thinking of desktop apps used to make things (which may then be put on internet-facing servers in production, but not the desktop app itself).

                                    But aren’t these general problems with using open source libraries in your projects (and not something specific to trades)?

                                    1. For the potentially malicious versions, shouldn’t having the source code help quite a bit in that direction?

                                    Depending on how bad it is, I’d argue that you didn’t actually get the software you wanted if it ends up being so much trouble. (And consequently it shouldn’t be used despite another user having made it, which is unfortunate but avoids the downsides.)

                                    1. I’d say this would all just go into the requirements. I’m hoping the requirements are similar enough that they would center around major platforms (which makes matches easier), and that’s still less restrictive than a specific language or library.

                                    Maybe I just happen to make stuff that’s more cross platform by default.

                                    Since anything I’m unable to get running is by definition useless,

                                    See the “usage” pattern I described. In my case, not being able to run it would still lose a lot of value, but it isn’t useless.

                                    Suggestion #1: I’m happy to try this version if there’s enough people.

                                    Though I don’t think I have anything I could safely say can be made in 1-2 hours from scratch. Any random hiccup can take up the entire time. It also makes thinking about and writing out the specs a bit less worthwhile.

                                    But I don’t know. “A weekend” wasn’t that thought out of a time length either. So I’d give this one a go if more people prefer it.

                                    If the expected project size were to go down I can contribute a couple of small scripts I’ve built that I find very useful:

                                    I hadn’t thought of just releasing existing source. But that works too! Really, any way works, even finding an existing project (if licensed right).

                                    You still have to add programs that you want. And the way I described it, it could be that someone else is assigned to make search and accumulate (which you already have) while you’re assigned something else from your list.

                                    People may not know they want them until they see them in action.

                                    I guess people can browse the existing list, although this doesn’t help them see it in action. I was thinking more of programs that do not yet exist, so you couldn’t see them in action anyways.

                                    Suggestion #2: This is an interesting suggestion, but I didn’t entirely follow your discussion.

                                    How would focusing on only one quadrant (potentially) help with the drawbacks listed? Do you mean all the time-poor, money-rich people would tend to want the same kinds of things?

                                    Or do you mean some of the quadrants wouldn’t feel the drawbacks as much?

                                    (I’m in no way in disagreement, just didn’t follow the reasoning entirely.)

                                    1. 2

                                      Yes, you’re right that my drawbacks apply as well to existing open source. But you’re proposing something new, something where people’s reputations aren’t involved yet the way they’re involved in conventional projects. And something where I have to precommit to using something that doesn’t exist yet. The competition isn’t existing open source projects. The competition is me making something myself.

                                      (I’m trying to attack these drawbacks for open source as well.)

                                      See the “usage” pattern I described. In my case, not being able to run it would still lose a lot of value, but it isn’t useless.

                                      I don’t follow. I don’t see the word “usage” anywhere in OP.

                                      Maybe I just happen to make stuff that’s more cross platform by default.

                                      What sort of stack do you use? How do you test on multiple platforms?

                                      1. 1

                                        Thanks again for your thoughts. I have to admit, I’m even more confused, which could be good since it may point to some important blind spot I’ve missed.

                                        Yes, you’re right that my drawbacks apply as well to existing open source. But you’re proposing something new, something where people’s reputations aren’t involved yet [in] the way they’re involved in conventional projects.

                                        I don’t know if you mean reputation as in skill level or reputation as in not being a bad actor.

                                        For the first, we probably need a few rounds of this to see how it goes. Since trades are per project, the more skilled can put in less time. (There’s still the question of the quality of the output, but I think that leads to the same discussion as in the “Failure” section.)

                                        For the second, I’m thinking the projects are small enough to make an audit not as hard.

                                        I mean, again, this reputation problem happens for new accounts that show up on GitHub with new projects entirely made on their own. You still have to decide somehow whether to use them or not. I’m not saying open source doesn’t have its drawbacks, but collaborative trades aren’t intended to solve any of those problems. They’re intended to improve coordination.

                                        And something where I have to precommit to using something that doesn’t exist yet. The competition isn’t existing open source projects. The competition is me making something myself.

                                        But if you hire someone, or are even thinking of writing the software yourself, you’re still precommitting to a non-existent program (with just a spec). Basically, I didn’t understand the extra burden you are trying to highlight when comparing this to writing everything yourself.

                                        Let me try to explain the difference anyways (but I might be going in a completely wrong direction).

                                        Scenario: You want programs A, B, C, D (they are independent and all don’t exist).

                                        • Writing it all yourself: Takes time of writing all four of A, B, C and D. Full control over everything.
                                        • Collaborative software trade: Takes time of writing one of A, B, C or D (you don’t get to pick which one, the matcher assigns it to you). Full control over your assigned program, no/little control over the others. When you complete your assigned program, it has to be released as open source.

                                        The result in both cases is that you now have all four programs. (In the collaborative trade, so does everyone else.)

                                        As described, you might have to modify the three programs that you receive to better suit your needs, but the people who made them took your wishes into consideration to make editing them easier. This does mean you should take others’ wishes into account when making your assigned program.

                                        In the case where I would write and release the source for all programs anyways, the trade is much better. Even if all other programs received this way are really bad, I can just (re)write them myself as I was going to do anyways without the trade. But most likely, I can at least salvage some useful things out of them. In the more optimistic case, what I get is much better.

                                        I don’t follow. I don’t see the word “usage” anywhere in OP.

                                        Sorry, I meant the stuff I typed in the other reply here:

                                        In my case, on top of documentation (or even instead of it), I’d like to have enough instructions for rebuilding the whole thing from scratch. This means favouring simpler internals and fewer large lists of cases.

                                        For servers, we know to have scripts and configuration set up to restore state when the machine reboots. This would be the human version of that.

                                        As a bonus, on top of describing what works, add a few remarks about what’d be lost with some “obvious” simplifications to the system.

                                        I really only need the program or library to prod it to check and understand the properties it has.

                                        What sort of stack do you use? How do you test on multiple platforms?

                                        Most of what I’ve written is in Python, using Tkinter if graphics are needed. I did not expect the programs to be particularly cross-platform (and so no such test was done!) but I know people have tried at least on Mac. It could also be that it doesn’t work for some platforms and no-one said anything yet.

                                        1. 3

                                          I understand that scenario from the original article. But let me try to restate my objections:

                                          Takes time of writing all four of A, B, C and D.

                                          I agree. But the time taken to write them may not be important for everyone.

                                          this reputation problem happens for new accounts that show up on github with new projects entirely made on their own. You still have to decide to use them or not somehow.

                                          True. But I’ve looked at hundreds of open source projects over the years, and started using only a few. Less than 1% yield. With trades, that sort of strike rate would yield nothing.

                                          if you hire someone, or are even thinking of writing the software yourself, you’re still precommitting to a non-existent program (with just a spec).

                                          But you usually have more steering control over the process. Hmm, perhaps there’s some way to add that back into your idea? Maybe four of us start out building programs, but we meet once a week or something to chat about how it’s going. This would make it easier for us to take each other’s wishes into consideration, as you put it.

                                          Even if all other programs received this way are really bad, I can just (re)write them myself as I was going to do anyways without the trade. But most likely, I can at least salvage some useful things out of them.

                                          I think this is the crucial disagreement. See my strike rate with GitHub projects above. I don’t think salvaging useful things out of bad projects is either common or economic. It can take way longer to understand a bad project that does something in a bad way with 1% overlap with what you care about, than it would to just build the 1% yourself surrounded by a different 99%.


                                          As I write this, I thought of a couple more objections that I hadn’t aired yet:

                                          1. I don’t usually have a list of programs I wish existed. I usually get ideas one at a time. And then I forget the ones that I wasn’t very into, and obsess one at a time about the ones that I really care about. So I don’t know how common it is for programmers to have a list of programs A, B, C and D they wish to exist.

                                          2. Oftentimes my goal when programming isn’t mainly about the utilitarian value of the final program. It’s the wish to create an “object of conversation”, a locus for collaboration with another programmer. If I found 3 other people who wanted the same programs to exist as myself, my preferred approach would be to collaborate with them on all four programs, rather than to go our separate ways and build one program in isolation. Working together just seems more fun.


                                          These are my objections, and I care about conveying them to you just so you understand them. But I don’t mean for them to discourage you. My objections are all abstract, and if you don’t find them important that’s fine. Just keep going, and either you’ll understand them better, or you’ll discover that my theoretical objections don’t actually matter in practice, that there’s a segment of programmers who can use this even if it doesn’t include me.

                                          1. 1

                                            Thanks again. Now I understand.

                                            These two probably go together.

                                            I agree. But the time taken to write them may not be important for everyone.

                                            1. I don’t usually have a list of programs I wish existed. I usually get ideas one at a time. And then I forget the ones that I wasn’t very into, and obsess one at a time about the ones that I really care about. So I don’t know how common it is for programmers to have a list of programs A, B, C and D they wish to exist.

                                            I think this is indeed one of the differences.

                                            If there is a list, and the list keeps growing in size or its members in scope, the total estimated time can exceed human life expectancy. Because of how bad we are at estimating, even 10x less estimated time is cutting it too close.

                                            Most of the things I want are editors that don’t exist yet, so there also needs to be time to actually use them afterward. :)

                                            Having said that, indeed if time taken to write the programs is not important then trades don’t offer anything even in the most optimistic outcome (and so there’d be no reason to participate in that case).

                                            True. But I’ve looked at hundreds of open source projects over the years, and started using only a few. Less than 1% yield. With trades, that sort of strike rate would yield nothing.

                                            Well, most projects I’ve looked at don’t have anything near my requirements taken into account. Do you mean that of the ones that match your description, only 1% were useful? Or that the ones matching a keyword search had a 1% yield?

                                            But you usually have more steering control over the process. Hmm, perhaps there’s some way to add that back into your idea? Maybe four of us start out building programs, but we meet once a week or something to chat about how it’s going. This would make it easier for us to take each other’s wishes into consideration, as you put it.

                                            I’ve thought about something along those lines. Basically, the same thing can be done by incorporating others’ requirements at any threshold, since it’s all symmetric. So for the control gained over the other projects, you’d lose control over your own.

                                            With it set up this way, I think you’d lose flexibility around skill differences and potential asynchronicity (if we all just agree to have the projects done by some point, people can start/stop any time between now and then). Depending on how the meetings are conducted, you might also lose

                                            I thought finding people who want the same things and have comparable abilities (for whichever measure) would be much harder. I haven’t run this yet so I don’t know.

                                            In my particular case, I’d be happy to just take the weekly discussions as the output. Although I’d just want the final summary, not the entire thread.

                                            It’d also be one meeting per project, since the three other people’s lists aren’t the same. I don’t quite see how the logistics of that would work out yet.

                                            I think this is the crucial disagreement. See my strike rate with GitHub projects above. I don’t think salvaging useful things out of bad projects is either common or economic. It can take way longer to understand a bad project that does something in a bad way with 1% overlap with what you care about, than it would to just build the 1% yourself surrounded by a different 99%.

                                            You’re right, although I think the difference in expected rate might be more central to the disagreement. Yes, 1% would not make it worthwhile. (My requirements include instructions for rebuilding, so I could more easily tell whether it’s worth digging through.)

                                            I don’t have a particular reason to think the initial rate would be higher. However, I’d like to argue that this system can maintain a high rate if it already has one, while something like GitHub cannot. Namely, the programmers with the lowest failure rates on GitHub are likely to move toward much more non-open-source work while everyone else stays.

                                            Here, if failures can be made to net nothing (not sure how to implement that yet) and success is rewarded at the system’s current rate, then I think it’s stable (provided the time savings are a good enough reason to stay, which isn’t always the case, as discussed much earlier).

                                            Oftentimes my goal when programming isn’t mainly about the utilitarian value of the final program. It’s the wish to create an “object of conversation”, a locus for collaboration with another programmer. If I found 3 other people who wanted the same programs to exist as myself,

                                            See the above, but it’s most likely that each one only wants one program in common with you. (Maybe this doesn’t affect what follows, but it’s 3 times as much communication.)

                                            my preferred approach would be to collaborate with them on all four programs, rather than to go our separate ways and build one program in isolation. Working together just seems more fun.

                                            That’s partly covered in the “Lack of discussion” section. I don’t know how well that would work with people with different language and library preferences. High-level discussion might still work.

                                            Oftentimes my goal when programming isn’t mainly about the utilitarian value of the final program. It’s the wish to create an “object of conversation”, a locus for collaboration with another programmer. If I found 3 other people who wanted the same programs to exist as myself, my preferred approach would be to collaborate with them on all four programs, rather than to go our separate ways and build one program in isolation. Working together just seems more fun.

                                            I think this is another major difference in objectives. I most definitely want the final product first. I’d much rather discuss the post-mortems with them than follow along.

                                            (Although I’m wondering if there’s more similarity than difference here. You want to discuss and work with people. I want to be in the same state as if I had those discussions and collaborations, but reached more quickly.)

                                            These are my objections, and I care about conveying them to you just so you understand them. But I don’t mean for them to discourage you. My objections are all abstract, and if you don’t find them important that’s fine. Just keep going, and either you’ll understand them better, or you’ll discover that my theoretical objections don’t actually matter in practice, that there’s a segment of programmers who can use this even if it doesn’t include me.

                                            Thanks again for chiming in. This discussion is much appreciated!

                                  1. 5

                                    This is a nice effort, but one wonders why the author doesn’t want to use vmstat(8).

                                    Side note: The author doesn’t seem to be too familiar with OpenBSD and its conventions. The man page was written in man(7), which is deprecated in favor of mdoc(7) on OpenBSD.

                                    1. 3

                                      Thanks for educating me about the distinction: https://github.com/blinkkin/blinkkin.github.com/wiki/man-vs-mdoc

                                      1. 4

                                        I’d suggest Practical UNIX Manuals for introductory reading on mdoc, too: https://manpages.bsd.lv/mdoc.html

                                      2. 3

                                        Thanks very much for pointing out mdoc!

                                        Compared to vmstat(8), my simple toy has the following differences:
                                        (1) It also displays swap space;
                                        (2) It only considers active pages as “used” memory; everything else is counted as “free” memory. IMHO, for an end user who doesn’t care about the guts of the operating system, maybe this method is more plausible?

                                        All in all, I just wrote a small tool for fun. Thanks very much again for the pertinent advice!

                                        1. 2

                                          Agreed. Sometimes you don’t really care about everything vmstat offers. free is pretty neat :)

                                          • TIL about mdoc
                                          1. 1

                                            P.S. After some testing, I just modified the method for calculating free memory: free pages count as “free” memory, and everything else is considered “used” memory.
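                                            In pseudo-Python, the revised accounting looks something like this (the page size and page counts are made up for illustration; real values would come from the kernel’s VM statistics):

                                            ```python
                                            # Sketch of the revised method: only free pages count as "free";
                                            # everything else (active, inactive, wired, ...) counts as "used".
                                            PAGE_SIZE = 4096  # bytes; an assumption, varies by platform

                                            def summarize(free_pages, total_pages):
                                                free = free_pages * PAGE_SIZE
                                                used = (total_pages - free_pages) * PAGE_SIZE
                                                return used, free

                                            # Hypothetical counts: 100k free pages out of 1M total.
                                            used, free = summarize(free_pages=100_000, total_pages=1_000_000)
                                            print(f"{used >> 20} MiB used, {free >> 20} MiB free")
                                            ```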

                                        1. 2

                                          One question I’ve been puzzling over since your previous post: where does the “forkness” of ‘+/ % #’ lie? I’m probably being the fish asking what water is, but I’m used to languages having a well defined evaluation model with precedence. Like expanding inner expressions first, or applying operators from left to right. How does the ‘mean’ construction above know to pass its input to both ‘+/’ and ‘#’? Is it just hardcoded into a few operations (akin to Common Lisp special forms)? Or is there a way to construct hooks out of arbitrary operators?

                                          1. 2

                                            Or is there a way to construct hooks out of arbitrary operators?

                                            Any three verbs in a row are a (monadic) fork (there are also dyadic forks). Adverbs (e.g. /) are evaluated before verbs (e.g. %), so a verb is produced by the verb-adverb +/ and thus you get three verbs (plus-over, divide, tally) in a row and thus a fork.

                                            There’s no hard coding of it:

                                                plus =: +
                                                over =: /
                                                plusover =: plus over
                                                divide =: %
                                                tally =: #
                                                average =: plusover divide tally
                                                average i.4
                                            1.5
                                            
                                            1. 2

                                              Like @lorddimwit mentioned, any three verbs in a row form a fork. Forks and hooks do have well defined evaluation models. It’s spelled out pretty well on this page.

                                              In the case of mean, we’re looking at a “monadic fork” (a fork that takes a single argument). The evaluation model for monadic forks looks like this:

                                              (V0 V1 V2) Ny  is  (V0 Ny) V1 (V2 Ny)
                                              

                                              Subbing V0, V1, and V2 for +/, %, and # gives us:

                                              (+/ Ny) % (# Ny)
                                              

                                              Where Ny is your argument.
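                                              Outside J, the same shape can be sketched as a higher-order function. This is just an illustration of the evaluation rule above, not how J implements it; the names `fork`, `plus_over`, etc. are made up:

                                              ```python
                                              def fork(v0, v1, v2):
                                                  # Monadic fork: (V0 V1 V2) y  evaluates as  (V0 y) V1 (V2 y)
                                                  return lambda y: v1(v0(y), v2(y))

                                              plus_over = sum                  # +/ : sum over the argument
                                              divide = lambda a, b: a / b      # %  : division
                                              tally = len                      # #  : item count

                                              average = fork(plus_over, divide, tally)
                                              print(average([0, 1, 2, 3]))  # 1.5, matching J's `average i.4`
                                              ```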

                                            1. 11

                                              Look up “multiple values” in the Hyperspec. Mind. Blown :)

                                              Also, no realistic Common Lisp code would use lists for storing vectors. There is actually a VECTOR datatype.

                                              EDIT: you certainly have a point, though, when speaking of some of the simplest Lisps, but that hasn’t been state of the art since the 1970s.

                                              1. 4

                                                I still see this every once in a while. Can be rephrased as, “makes prototyping too easy”. Which can be a strength rather than a weakness provided you eventually create data structures.

                                                It’s no longer restricted to Lisp these days. I use arrays all the time when prototyping in Python.

                                              1. 4

                                                What about if your HLL is declarative and non-turing complete? Something like Dhall. Seems like the dhall interpreter could find a bunch of misuse problems.

                                                1. 1

                                                  That would help, but the key is perhaps not letting the HLL change. Is Dhall pretty stable?

                                                  1. 2

                                                    Development status

                                                    I am beginning to author a formal language standard for Dhall to help with porting Dhall to other languages. If you would like to assist with either standardizing the language or creating new bindings just let me know through the issue tracker.

                                                    This would seem to indicate that it isn’t and I don’t see any official releases on the github page.

                                                1. 2

                                                  Does that kill the idea? Perhaps I’ve misunderstood, but is it saveable with the notion that the system can know:

                                                  • the time a config file was written/last read
                                                  • the version of the language on the system in use at that time
                                                  • language constructs which have changed since then

                                                  Think about something like ‘go fix’, which programmatically moves you from one version to another.

                                                  Either at language upgrade time, or at config file load time, you could run language upgrade steps and checks.
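                                                  As a rough sketch of the load-time variant (the version numbers, keys, and migration steps are all hypothetical):

                                                  ```python
                                                  # 'go fix'-style migrations run when a config file is loaded.
                                                  def migrate_v1_to_v2(cfg):
                                                      # v2 renamed `timeout` (seconds) to `timeout_ms` (milliseconds).
                                                      cfg = dict(cfg)
                                                      cfg["timeout_ms"] = cfg.pop("timeout") * 1000
                                                      return cfg

                                                  def migrate_v2_to_v3(cfg):
                                                      # v3 introduced `retries` with a default of 3.
                                                      cfg = dict(cfg)
                                                      cfg.setdefault("retries", 3)
                                                      return cfg

                                                  MIGRATIONS = {1: migrate_v1_to_v2, 2: migrate_v2_to_v3}
                                                  CURRENT_VERSION = 3

                                                  def load_config(cfg):
                                                      # Apply each migration step in order until the config is current.
                                                      cfg = dict(cfg)
                                                      version = cfg.pop("version", 1)
                                                      while version < CURRENT_VERSION:
                                                          cfg = MIGRATIONS[version](cfg)
                                                          version += 1
                                                      cfg["version"] = CURRENT_VERSION
                                                      return cfg

                                                  print(load_config({"version": 1, "timeout": 5}))
                                                  # {'timeout_ms': 5000, 'retries': 3, 'version': 3}
                                                  ```

                                                  The same chain could run once at language-upgrade time instead, rewriting the files on disk the way ‘go fix’ rewrites source.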

                                                  1. 1

                                                    The problem isn’t changing all programs, it’s thinking about the security ramifications of all tools, and the programs they could receive, that may not actually be in the system at the moment.

                                                    I do appreciate the attempt :)

                                                  1. 2

                                                    How would you prevent any user of this stack from implementing a programming language and using it to configure a component of itself? Lots of genuine problems to be solved with a computer program involve implementing a language of some kind - the command line flags to ls, even, form something like a small DSL.

                                                    1. 1

                                                      Oh, I should clarify that I was never trying to restrict what end users do with the stack. The goal was only to shift the out-of-the-box experience to more parsimonious defaults, and to shift the expectations of end users – if a system is profligate in the number of languages it uses, that should be cause for question. All things being equal, prefer stacks with fewer languages.

                                                      1. 1

                                                        the command line flags to ls, even, form something like a small DSL

                                                        I agree, commandline flag creep is also a problem. It’s been addressed before by people like Russ Cox. The DSL nature is even more apparent with find.

                                                        I was kinda always aware that the notion of “language” is not very well-posed, but this doesn’t seem like a big issue. With a little taste we can recognize a language when we see it, and ask if we can use an off-the-shelf alternative rather than brewing new moonshine. Our world is filled with reasoning around terms we can’t precisely define.

                                                        I think just considering the number of languages in a system would level up the conversation around what constitutes good design.

                                                      1. 19

                                                        sighs

                                                        Replacing a corporate data aggregator with a distributed one doesn’t actually reduce the amount of data gathered.

                                                        If you don’t want your information online and searchable don’t put it online.

                                                        It doesn’t matter if it’s a friendly Mastodon instance instead of a Harvard dudebro: sharing data means your data is shared. Staaaaaahp.

                                                        EDIT: Mastodon also has some interesting history.

                                                        1. 34

                                                          If you don’t want your information online and searchable don’t put it online.

                                                          This is not a panacea. Facebook has my phone number because other people chose to upload their contacts. Google has incredibly personal conversations because other people chose them for email. Equifax has my credit history because nearly every banking institution reports to them. Nielsen-Catalina Solutions knows my shopping preferences because retailers secretly sell it to them.

                                                          If you don’t want your information online and searchable, get data protection laws.

                                                          1. 3

                                                            Laws help, but we also have to take responsibility for not sharing our data (or the data of our friends) online.

                                                            1. 1

                                                              Unfortunately most users don’t know or don’t care that Facebook uploads their contacts.

                                                          2. 15

                                                            That article is below the standards I expect from this site.

                                                            edited after finishing reading: That article is absolute, complete garbage.

                                                            1. 6

                                              Please elaborate. I thought it was an interesting look into the experience of having vastly different cultures using the same messaging fabric, and the issues that gives rise to.

                                                              1. 2

                                                I don’t think it’s garbage. I think it could have been better written, but as you point out the culture clash thing is an interesting phenomenon.

                                                                I also don’t think said history would have any bearing on which social media platform you choose for most people.

                                                              2. 2

                                                                That article is absolute, complete garbage.

                                                                Do you see it as garbage because of an abundance of factual inaccuracies, or something else?

                                                                The reason I ask is that clearly there’s an absolutist free-speech position being promoted, but certainly all the stuff about Japanese and Spanish speaking Mastodon activity correlates well with what I saw at the time. I don’t know anything about people getting upset about Eugen being paid though, or any of the behind the scenes stuff.

                                                              3. 5

                                                                Replacing a corporate data aggregator with a distributed one doesn’t actually reduce the amount of data gathered.

                                                                It does if the data you share is subject to aggregator influence. And it is, since the aggregator controls the platform and its defaults.

                                                Facebook went through a period where every time I checked my privacy settings I found something open that I didn’t want to be open. The years of the Cambridge Analytica scrape line up pretty well with that phenomenon. Facebook used to be hugely incented to make as much of your data public to the world (search engines and, it turns out, CA) as possible. Mastodon has no such incentives.

                                                                Yes, if I share something with someone I share it with them. But I’d like to not share it with everyone else.

                                                                1. 1

                                                                  It does if the data you share is subject to aggregator influence

                                                                  I’m not quite sure what this means, do you mind elaborating?

                                                                  1. 5

                                                    I thought I did in the rest of my comment? Basically, I’d enter some data in my profile with some understanding of what was visible to whom. Then I’d come back a month or three later, and stuff I intended to be visible only to friends would somehow be visible to some new vector (apps) or API. Facebook’s privacy settings sprawled out of control for a couple of years. Here are some links I was able to dig up in a quick search:

                                                                    http://mattmckeon.com/facebook-privacy

                                                                    https://www.eff.org/deeplinks/2009/12/facebooks-new-privacy-changes-good-bad-and-ugly

                                                                    https://www.eff.org/deeplinks/2010/04/facebook-timeline

                                                                    https://www.washingtonpost.com/news/the-switch/wp/2014/04/08/your-facebook-privacy-settings-are-about-to-change-again

                                                                2. 3

                                                  I agree with this sentiment, but I think all the brouhaha is currently about something entirely different. When you use an account on Mastodon, your toots are federated across the global timeline. That, along with an email address that stays local to the server you signed up on, and maybe some HTTPS traffic logs on your server, is the sum total of the information you are exposing via Mastodon until you choose to add more.

                                                                  This is, from where I stand at least, a vastly different kettle of fish than Facebook.

                                                                  1. 2

                                                    I agree. To some extent, the distributed nature even makes it harder to remove data you don’t want online anymore.

                                                                    1. 3

                                                                      Removing data is already impossible in the information-theoretical sense. You just get lucky a lot of the time.

                                                                      To address this particular issue, IPFS has blacklists that track DMCA notices and abusive content. They’re opted in to by consumers.

                                                                      1. 2

                                                                        On the other hand the data is also distributed across many instances as opposed to being owned by a single entity. There’s also the fact that Mastodon doesn’t try to track your personal identity, and the interactions can be completely anonymous. Meanwhile, the whole purpose of a site like Facebook is to build an intimate profile of you and your friends.

                                                                        1. 1

                                                                          Depends, some instances have ElasticSearch enabled, ostensibly to enable full-text search, but ES can be used for more insidious “big data” purposes, such as profiling users. Tools like Kibana from the ES people make such tasks trivial compared to writing tedious queries by hand. And due to the nature of federation, if someone from that instance follows you, they have your toots, which the admin can use for said purposes.

                                                                    1. 13

                                                                      GitHub are, of course, a company that thrives from content creators acting as sharecroppers on their centralised hosting platform. The dichotomy of “freedom to post whatever you want to GitHub” vs “OMG the Fahrenheit 451 future of Europe” is a false one, because you can post your open source project’s code to your open source project’s GitLab, Kallithea, or other instance. GitHub are downplaying that alternative so that “freedom” is recast as “the freedom for GitHub to have all your codes”.

                                                                      1. 4

                                                                        Wouldn’t this legislation apply to Gitlab or any other alternative as well?

                                                                        1. 2

                                                                          Wait, my hard drive can store stuff too; now we need to add copyright detection to virus scanners too!

                                                                          1. 0

                                                                            I can run my own gitlab, I cannot run my own github. If I run my own gitlab then I can know that only my own project code is hosted on the gitlab.

                                                                            1. 4

                                                                              And what, you don’t plan to ever collaborate with anyone? You don’t plan to ever use any open-source libraries written by others? You’re sure you aren’t going to hit any false positives? How do you think Gitlab is being built for your use? Pointing out OP’s self-interest doesn’t actually replace addressing its criticisms.

                                                                              If this goes through, copyright trolls will become a thing. Get a lawyer, squat on some maximally general pattern of bits, and now projects can’t upload stuff matching it without paying you.

                                                                              1. 1

                                                                                If he sets up public repositories people can contribute code to his repository on his own Gitlab instance.

                                                                                1. 1

                                                                                  i run my own gitlab for my software projects, people join there to collaborate or send me patches via email / pastebin.

                                                                            2. 6

                                                                              You got the point here. GitHub is trying to stay in a grey area so that people won’t move away from their services, “supporting” both freedom and the law by passing the ball to us with their Call to Action.

                                                                              1. 2

                                                                                They explicitly mention that for smaller players the introduction of content upload filters would be even more burdensome. And although they don’t mention it, it’s obvious that GitHub of all companies would have the resources to implement such a thing. So I don’t see why you try to cast it as GitHub caring only for themselves.

                                                                                Besides, “listen to what’s being said, not who’s saying it”. The concern is valid and well articulated. Any attempt from copyright mongers to tax another human activity is counterproductive to progress and should be stopped.

                                                                                1. -2

                                                                                  github explicitly mention that github are the best people to solve this problem? interesting.

                                                                                  1. 2

                                                                                    Sorry, where did you get that? :-) It’s neither in the text, nor in my comment.

                                                                                    1. -1

                                                                                      so, when you said “They explicitly mention”, you didn’t mean the “they” we were talking about? interesting.

                                                                                      1. 3

                                                                                        Let’s assume you’re not trolling me on purpose here…

                                                                                        They is GitHub. I did say GitHub would be the least affected themselves by such a law:

                                                                                        GitHub of all companies would have the resources to implement such a thing

                                                                                        I did not say they “are the best people to solve this problem”. It’s just a completely different thing.

                                                                              1. 19

                                                                                I love this caveat:

                                                                                I’d like to ask that you do not use the information below as an excuse to be unkind to anyone, whether new learners or experienced Python programmers.

                                                                                1. 16

                                                                                  I fucking hate reCaptcha, partly because the problems seem to be getting harder over time. Sometimes I literally can’t spot the cars in all the tiles.

                                                                                  1. 19

                                                                                    It’s also very effective at keeping Tor out. ReCAPTCHA will, more often than not, refuse to even serve a CAPTCHA (or serve an unsolvable one) to Tor users. Then remember that a lot of websites are behind CloudFlare and CloudFlare uses ReCAPTCHA to check users.

                                                                                    Oops.

                                                                                    1. 2

                                                                                      For the Cloudflare issue you can install Cloudflare’s Privacy Pass extension, which maintains anonymity but still greatly reduces, or entirely removes, the number of reCaptchas Cloudflare shows you if you’re coming from an IP with bad reputation, such as a lot of the Tor exit nodes.

                                                                                      (Disclaimer: I work at Cloudflare but in an unrelated department)

                                                                                      1. 2

                                                                                        Luckily, CloudFlare makes it easy for site owners to whitelist Tor so Tor users don’t get checked.

                                                                                        1. 9

                                                                                          Realistically, how many site owners do that, though?

                                                                                      2. 16

                                                                                        I don’t hate it because it’s hard. I hate it because I think Google lost its moral compass. So, the last thing that I want to do is to be a free annotator for their ML efforts. Unfortunately, I have to be a free annotator anyway, because some non-Google sites use reCaptcha.

                                                                                        1. 7

                                                                                          Indeed. Also annoying is that you have to guess at what the stupid thing is trying to indicate as “cars”. Is it a full image of the car or not? Does the “car” span multiple tiles? Is it obscured in one tile and not in another? Which of those “count” if so? Should I include all the tiles if, say, the front bumper is in one tile or not? (My experiments have indicated not.)

                                                                                          Or the store fronts, some don’t have any signage, they could be store fronts, or not, literally unknowable by a human or an AI with that limited of information.

                                                                                          I’m sick of being used as a training set for AI data. This is even more annoying than trying to guess whether the text in question was using Fraktur and whether the ligature in question is what Google thinks is an f or an s. I love getting told I’m wrong by a majority of people who can’t read Fraktur and distinguish an f from an s from, say, an italic i or l. Now I get to be told I can’t distinguish a “car” by an image training algorithm.

                                                                                          1. 4

                                                                                            At some point, only machines will be able to spot the cars.

                                                                                          1. 10

                                                                                            I work in the infosec field and honestly I’d reprimand an employee for not investigating an anomaly on the network. Unless the cluster is for testing purposes and the employee’s title contains the word “scientist”, they shouldn’t be running their own ad-hoc tests. The fact that they believe their biggest mistake was telling their boss makes me cringe too. IMHO this is one whiny worker and I’d recommend getting rid of them.

                                                                                            1. 14

                                                                                              IMHO this is one whiny worker and I’d recommend getting rid of them.

                                                                                              And you’d lose a great deal of expertise, if you were familiar with the author’s work and past writing. :)

                                                                                              People stuck working under bozos develop certain pathologies, and it takes solid leadership to build trust and correct those pathologies.

                                                                                              1. 6

                                                                                                I have read a bit of the author’s other work and it’s largely filled with the same “everyone doesn’t work as hard as me!” rhetoric. Just because someone writes about how they’re the only one who does anything doesn’t mean it’s true.

                                                                                                1. 9

                                                                                                  Sure, but it doesn’t also mean it’s false either.

                                                                                                  It’s entirely possible (given their employment history) that they actually ended up in dysfunctional orgs and units.

                                                                                                  1. 2

                                                                                                    That’s a good point, but there is also the flip side: they’re a dysfunctional problem worker.

                                                                                                  2. 9

                                                                                                    Some people are competent but grind up against incompetent orgs. Some people are incompetent and eventually flushed out of competent orgs. They tell similar stories. I was right and everyone was wrong. There’s usually a tell or two that reveals which it is though.

                                                                                                    1. 7

                                                                                                      I seriously don’t understand why there’s a question about this. I too have concerns about this post, but reading past posts it seems blindingly obvious that Rachel Kroll is competent and knowledgeable. Regardless of what you think of her personality.

                                                                                                      /cc @friendlysock and @tedu. Yes, in general it can be this or it can be that. But in this instance is there really any doubt?

                                                                                                    2. 2

                                                                                                      I think this post is more illustrative of her poor leadership skills than of her good technical skills. Furthermore, she doesn’t seem to be aware of that aspect of it at all. She seems genuinely surprised that her behaviour was not welcomed by everyone in management.

                                                                                                    3. 7

                                                                                                      What about all the other people who didn’t even spot the anomaly because they weren’t trying?

                                                                                                      1. 22

                                                                                                        It’s the author’s opinion that others weren’t working as hard so I will take that assessment with a grain of salt. I don’t think it’s an individual’s prerogative to make work traps for other employees so they can be shown as “not working that hard”. If you’re really concerned about the performance of others then have an honest discussion with your manager about it, don’t try to measure others with a metric of your choosing.

                                                                                                        1. 5

                                                                                                          If the anomaly persists for two months without anyone seeming to notice, is it really a problem? If it is causing a problem, that suggests that key metrics aren’t being observed - a problem exists but nobody knows - in which case you’ve got a bigger problem!

                                                                                                          1. 4

                                                                                                            What about them? Were they even supposed to be trying? If the author always fixes the problem, like she claims, it seems possible that other people on the team may have thought it was her responsibility.

                                                                                                            In any case, when she saw the problem she should have told her boss and said something like, “I see there’s a cluster with an extra node, but I don’t have time to fix it myself right now, can you have somebody else investigate?”

                                                                                                            1. 5

                                                                                                              Yeah, they’re supposed to be trying.

                                                                                                              There was no division of duties on the team. Everyone was responsible for the system as a whole.

                                                                                                              If I leave my trash next to your desk every day, and you always throw it out for me, are you the one littering when a soda can doesn’t get picked up? Am I even supposed to be trying, once I become dependent on you doing my job for me?

                                                                                                        1. 3

                                                                                                          Nitpicker’s corner: minicomputers were the big fridge-sized ones, because they were smaller than the room-filling ones. Microcomputers are what we’d call this.

                                                                                                          1. 1

                                                                                                            Ack, I knew that. Too bad I can’t edit it now.

                                                                                                            1. 1

                                                                                                              Fixed.