Threads for kavec

  1. 2

    In this specific case, I believe the model only fails if you pick the constants appropriately: spawn more goroutines than there are buffered slots. But that’s basically the bug in the first place, so it leaves me with the feeling that the model only helps you find the bug if you already know what the bug is. That said, I can see how writing the model might help focus the author on the choice of constants and help them discover the bug.

    I’d be interested to hear the author’s thoughts on that. Does that feeling sound fair? Are there tools that would find choices for the constants that fail for you?
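
    To make that concrete, here’s a minimal Go sketch (purely illustrative, not the code from the article; the names and constants are made up) of the “more goroutines than buffered slots” situation. With numRoutines <= numTokens it runs cleanly, which is exactly the “you only find the bug if you already picked the right constants” worry:

      package main

      import "sync"

      func main() {
          const numTokens = 2
          const numRoutines = 3 // set this <= numTokens and the hang disappears

          results := make(chan int, numTokens) // buffered: only numTokens sends can complete
          var wg sync.WaitGroup

          for i := 0; i < numRoutines; i++ {
              wg.Add(1)
              go func(id int) {
                  defer wg.Done()
                  results <- id // the (numTokens+1)-th send blocks forever
              }(i)
          }

          wg.Wait() // blocks as well: "all goroutines are asleep - deadlock!"
      }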

    1. 6

      In this case I modeled it to closely match the code, but normally you’d design the spec so that it “sweeps” through a space of possible initial values. It’d look something like

      variables
        Routines \in {1..x: x \in 1..MaxRoutines};
        NumTokens \in 1..MaxTokens;
        channels = [limitCh |-> 0, found |-> {}];
        buffered = [limitCh |-> NumTokens];
        initialized = [w \in Routines |-> FALSE];
      

      Then it would try 1 routine and 1 token, 1 routine and 2 tokens, etc.

      1. 1

        Here I believe the NumRoutines constant exists to ensure the model is bounded– TLC will check every number of potential goroutines from 1 to whatever NumRoutines ends up being. In addition, there are actually only two buffered channels to begin with (“limitCh” and “found”). As long as NumRoutines >= 3, then, TLA will find this issue.

        In my experience writing the model helps you better understand the problem and thus pick appropriate constants (as you mentioned). But even if it didn’t, it wouldn’t be unreasonable with this spec to plug in 10 for NumRoutines and see what happens.

        With TLA+ models I find that I wind up with an emotion similar to writing untested code: if it 100% works the first time I get immediately suspicious and start poking at things until they break as a means to convince myself it was actually working.

        1. 1

          In addition, there are actually only two buffered channels to begin with (“limitCh” and “found”). As long as NumRoutines >= 3, then, TLA will find this issue.

          I disagree. Found is unbuffered, and limitCh is buffered. I believe it will only find the bug if NumRoutines >= the buffer size of limitCh.

          1. 1

            Oh beans, in my pre-meeting rush I misread the spec. Which is definitely a separate problem!

            So it looks like limitCh has a buffer the size of NumTokens and you’d want to ensure you pick NumRoutines > NumTokens during one of the runs, right? I’m not sure there’s a tool that checks your work there.

          2. 1

            Here I believe the NumRoutines constant exists to ensure the model is bounded– TLC will check every number of potential goroutines from 1 to whatever NumRoutines ends up being. In addition, there are actually only two buffered channels to begin with (“limitCh” and “found”). As long as NumRoutines >= 3, then, TLA will find this issue.

            While that’s best practice, here I fixed the number of routines at NumRoutines for simplicity. It will also find a bug with 2 routines and 1 token.

        1. 13

          “This is in essence how GPT-3, or for that matter, all of what you call AI works. By all means, a very complex process, but one void of magic there or signs of thought emergence. It’s a strictly definite and discrete problem, and the machines of today seem to be doing a good job of solving it.”

          A college prof of mine back in the ’80s pointed this out as a paradox of AI: as soon as we figure out how to make a computer do something difficult, we stop thinking of it as a sign of intelligence; it’s just a clever trick. In the 1950s it was chess; now it’s recognizing faces and generating high-school-level English prose (or poetry!)

          The word “magic” in the quote above is telling — implying that to be intelligence it has to be like magic. I don’t buy it.

          The rest of the arguments are similar to Searle’s old “Chinese Room” argument: that because we can’t point to some specific part of GPT3 that’s an “English recognizer” or “English generator”, it can’t be said to “know” English in any sense.

          Obviously GPT3 isn’t a true general AI. (For one thing, it’s got severe short-term memory issues!) And I don’t think this approach could simply be scaled up to produce one. But I think (as a non-AI-guru) that the way it works has some interesting similarities to the way human consciousness may have evolved. Once we came up with primitive forms of language to communicate with other people, it was a short step to using language to communicate with ourselves, a feedback loop that creates an internal stream of consciousness. So the brain is generating words and thinking about them, which triggers likely successor words, etc.

          I’m not saying our brains are doing the same thing as GPT3, just as I don’t think our visual centers do exactly what Deep Dream does. But the similarities at a high level are striking.

          1. 7

            The problem isn’t that the goalposts move; it’s that the goals end up being tractable to approaches that don’t get us as much as we expected. For example, Go AI was supposed to be a revelation, but it turns out that random playouts are more than enough to get stronger than human pros. Yet just like with chess, that’s not how humans play or think, so we can achieve the simple goal of “win a game” but can only derive patterns and insight from that strong play through human analysis.

            It seems to me that the patterns and insight are what we’re really after, not the wins.

            1. 4

              Woah, wait a minute… professional Go players are (consistently) worse than random?!

              That seems like a very important insight, albeit much bleaker than the sort AI researchers were looking for.

              1. 3

                I think what asthasr is referring to is the way Alpha Go iteratively played against itself to gain ground. My understanding is that it started out with ~random noise vs ~random noise and improved by figuring out which side did better and repeating that process an inhuman number of times.

                It’s not entirely unlike how a (human) novice might get better at the game, just taken to the limit. We got some novel game states that humans hadn’t (yet) stumbled onto, but as far as I’m aware Alpha Go provides very little insight into how (human) professionals approach the board.

                1. 1

                  kavec’s comment is correct, but even later engines use random playouts, pruned by the results of playouts in similar positions, to choose their next move. It works. It’s led to some interesting analysis (by humans), but the AI in itself isn’t doing that analysis.

                  1. 1

                    I believe what you meant is the Monte-Carlo Tree Search part. I don’t think that is uniform randomization. Reading page 3 of https://arxiv.org/pdf/1712.01815.pdf, it suggests expanding the nodes biased by the DNN’s evaluation rather than by uniform random rollout.

                    1. 1

                      It’s not uniform randomization. Go is too “big” for that. However, it’s essentially treating positional analysis as a function from board position to board position, without any heuristics or sector analysis. That’s not how people play or think about the game; in essence it’s very good because it can run Monte Carlo playouts fast and figure out, given the entire board position, what the next move ought to be… but it has no “why.”

                      1. 1

                        Because it is not uniformly random and the node expansion is biased by the DNN’s evaluation output, the heuristics or sector analysis could simply have been moved into the DNN (the convolutional neural net is translation-invariant, and we don’t have the internals of the DNN to poke at). The heuristics from the neural nets are essential for AlphaZero’s success. I won’t discount that and say the random rollouts from MCTS, which have been in use for Go since the 2000s, are as crucial. MCTS is important for exploring the state space, but the “intuition / memorization” from the neural nets is crucial.
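
                        For readers following along, here’s a rough Go sketch of that selection rule (my paraphrase of the PUCT formula described in the paper, not AlphaZero’s actual code): each candidate move is scored by its backed-up value Q plus an exploration bonus weighted by the network’s prior P, so the playouts are steered by the net rather than chosen uniformly at random.

                          package main

                          import (
                              "fmt"
                              "math"
                          )

                          type child struct {
                              prior  float64 // P(s,a) from the policy network
                              visits float64 // N(s,a)
                              value  float64 // Q(s,a): running mean of backed-up values
                          }

                          // selectChild returns the index maximizing Q + U, where U is the
                          // prior-weighted exploration bonus (the "DNN-biased expansion" above).
                          func selectChild(children []child, cPuct float64) int {
                              var total float64
                              for _, c := range children {
                                  total += c.visits
                              }
                              best, bestScore := 0, math.Inf(-1)
                              for i, c := range children {
                                  u := cPuct * c.prior * math.Sqrt(total) / (1 + c.visits)
                                  if score := c.value + u; score > bestScore {
                                      best, bestScore = i, score
                                  }
                              }
                              return best
                          }

                          func main() {
                              moves := []child{{0.6, 10, 0.1}, {0.3, 2, 0.2}, {0.1, 0, 0}}
                              fmt.Println(selectChild(moves, 1.5)) // the prior pulls search toward plausible moves
                          }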

                        1. 1

                          It’s possible that generalizations can be teased out. There are people trying and I await the results eagerly. But crucially, once again, it’s not the AI that’s capable of doing it. If it’s accomplished, it will be the humans running the AI who do it.

              2. 1

                I’ve thought about the same thing. A form of (at least apparent) “consciousness”, it seems to me, could be built out of a “language generator” like GPT-3, with a feedback loop, and with a way to feed in information about the outside world.

                How much research has there been on this field? Surely someone has tried to feed GPT-3 into itself and seen what happens?

                1. 3

                  Sort of like that very creepy video that starts with a frame buffer of random noise and iteratively applies Deep Dream, zooms slightly, and repeats. After a few minutes you get an H.R. Giger nightmare of malignant dog noses; that model they used really has some deep seated dog issues it needs to work out in therapy.

                  In the messy neuro-chemical domain, dissociative psychedelics like ketamine, DMT and salvia divinorum seem to work by blocking out the sensorium and amplifying feedback in the stream of consciousness, producing very real-seeming but bonkers dream worlds.

                  1. 3

                    You’re right, this really does end in a nightmare of dog noses and eyeballs! Some of these are really horrifying.

                    https://youtu.be/SCE-QeDfXtA

                2. 1

                  Here, have a book from a (recent) prior generation of AI optimists. Hawkins’ thing didn’t quite work out like he was hoping, but it’s a good stepping stone toward current theories of embedded cognition. We’ve got a long way to go, still.

                1. 9

                  This is the second Facebook project (after Jest) to announce switching from Flow to TypeScript in the past week or two. 🤔

                  1. 3

                    The 2nd we know of.

                    1. 2

                      Do you happen to have a link to the other announcement handy? We use Flow at $WORK, and today I had a great time banging my head against the wall because it was both unhappy with me adding a custom matcher in Jest (so that tests for an internal DateRange type would produce better failure output) and had to be instructed not to freak out about jsverify so that the aforementioned tests could be property-based.

                      And documentation for Flow built-in types like $ReadOnlyArray simply not existing is its own level of awfulness, and… if a bunch of high-profile projects are moving, maybe I can get leverage to push for TypeScript internally, too.

                      1. 2

                        I suppose that would be Jest? (edit: sorry, I misread the OP, it is Jest) (Accepted) Proposal thread here: https://github.com/facebook/jest/pull/7554

                        1. 1

                          Got it, thanks! Somehow wasn’t able to find this through google

                    1. 3

                      Hello! These look fun!

                      Evaluating different graph databases for use with user data at work. Not entirely sure they’ll be a good fit over our usual combination of RDBMS/Elasticsearch/Cassandra, especially given the REST-like way some of the queries are structured. Planning to test out Neo4j, AWS Neptune, JanusGraph v Plain Old Cassandra; anyone have experience running these or others in production?

                      At home I’ve started on a project to enable using a midi keyboard to control playback of sound effects and ambient music over Discord for my regular Dungeon World roleplaying sessions. Managed to get a horrific thing that’d play any number of youtube videos into Discord voice chat at the same time going in about an hour, so I’m excited about the next steps (reorg code, then put in functionality to queue/layer/etc). Writing code on Windows is still not entirely ergonomic, but I’ve been pretty impressed so far with rustup’s ease-of-use and how far the mingw/msys ecosystem has come since the late-00’s.

                      1. 10

                        I don’t quite see how object-oriented programming or functional programming has to stand in opposition to simple programming. Surely, abusing or forcing idioms from these paradigms leads to unreadable and confusing code, but a good paradigm will (or should) enable you to simplify solving a more complex problem by providing an infrastructure to describe it. Ultimately you would end up with assembler (or worse) if you followed his idea to the end.

                        A certain degree of abstraction and “complicated” words is always necessary, if only for purely pragmatic reasons - otherwise you’ll end up describing the same existing concepts over and over again.

                        Edit: Spelling mistakes and formatting fixed.

                        1. 15

                          I didn’t read it as being about OOP or FP specifically, but about the attitude of always looking for excuses to try out the latest techniques you’ve learned regardless of whether they are appropriate for the setting. I call this the Labyrinth effect, after http://p.hagelb.org/labyrinth.jpg

                          I have worked with folks who have done the whole “I just learned about monads, so I’m going to write a bunch of monadic code at work even though there is no call for it and it just makes the code impossible to follow by anyone but me” thing, and it ended up causing all that code to get thrown out eventually, but only after a long struggle to get it working reliably.

                          Sometimes I wonder if the main advantage of coding side projects is to be a release valve for cleverness.

                          1. 5

                            On the flip side, you’re not going to recognize that a particular idiom is well suited to a task without burning yourself a couple times to find its limits. To continue with monads, something like railway-oriented programming might be just the abstraction to make your production code easier to reason about and maintain. Not that you should necessarily be twirling your mustache as your coworkers struggle to untie themselves from a monad railway, either…
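
                            For anyone who hasn’t run into the term, here’s a tiny, hand-rolled sketch of the railway idea in Go (purely illustrative, not from any particular library): each step runs only while we’re still on the success track, and the first error short-circuits everything after it.

                              package main

                              import (
                                  "errors"
                                  "fmt"
                                  "strings"
                              )

                              // result carries either a value or an error: the two "tracks".
                              type result struct {
                                  val string
                                  err error
                              }

                              // then runs the next step only if no earlier step has failed.
                              func (r result) then(step func(string) (string, error)) result {
                                  if r.err != nil {
                                      return r // already on the error track; skip the step
                                  }
                                  v, err := step(r.val)
                                  return result{val: v, err: err}
                              }

                              func main() {
                                  out := result{val: "  hello  "}.
                                      then(func(s string) (string, error) { return strings.TrimSpace(s), nil }).
                                      then(func(s string) (string, error) {
                                          if s == "" {
                                              return "", errors.New("empty input")
                                          }
                                          return strings.ToUpper(s), nil
                                      })
                                  fmt.Println(out.val, out.err) // HELLO <nil>
                              }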

                            Side projects can be pretty good proving grounds for techniques, but you need to be bad at new abstractions /somewhere/ to figure out how to leverage those into good business value. It’s unfortunate that this distinction between work projects/side projects pushes us either into cartoon villainy or unpaid professional development.

                            Have other crustaceans found an effective way to reconcile these two?

                            1. 4

                              Absolutely; it’s not that you should always try your hardest to avoid making this mistake, but that you should recognize that it is a mistake and learn from it when it happens.

                              1. 2

                                If you’re familiar with the domain, you’re much more likely to have success taking a leap in the code, and vice versa. If it’s the second time you’ve written a renderer, maybe try out that new technique you read about; if it’s the first time, maybe stick to what you know.

                              2. 1

                                the attitude of always looking for excuses to try out the latest techniques you’ve learned regardless of whether they are appropriate for the setting.

                                Well, that was kind of what I was talking about - when you force one kind of problem into a framework for solving different kinds of problems, you’ll end up in a mess (think of all the design patterns that were thought up in connection with Java). But, as mentioned in the other thread, being frightened of tools won’t make it easier for everyone.

                            1. 4

                              Sooooo Right.

                              I mean, these days even rants have to be broken up into 140 character subrants! ;-)

                              What’s a twitter rant? A Trant? Twant? Rantter? Rwantter?

                              1. 4

                                A tweetstorm! Merriam-Webster has a writeup on the term and its (short) history here:

                                https://www.merriam-webster.com/words-at-play/tweeting-up-a-tweetstorm

                              1. 19

                                The advice in these CodeWithoutRules posts just seems so…trite? Maybe I’m biased but they seem to be more about getting the author’s name out there rather than giving well thought out advice. For example, the author gives an example of a company using Perl, and then goes on to say what you can do about looking tech up and talking to your manager. Is the author saying that nobody at the unnamed company has ever done this? That seems unlikely to me. IME, technology change only occurs under existential threat, not because someone thinks X will make you a bit better. The author also seems to put a lot of weight on the age and popularity of technology, not on whether the technology is actually better for the problem it’s solving.

                                In my career, I have rarely seen the advice the author gives actually work, and in the cases I can think of, it’s been due to a crisis, not a marginal improvement. Experiences vary, but the author doesn’t actually enumerate any real-world successes they have had. Maybe that’s what bothers me about these CodeWithoutRules blog posts: they seem like Feel Good Messages, disconnected from reality. But maybe I’m just cynical.

                                1. 8

                                  Maybe I’m biased but they seem to be more about getting the author’s name out there rather than giving well thought out advice.

                                  For the record, this is how they come off to me as well. Most of the advice reads as energetic banality, akin to a corporate pep talk, sneaking in appeals to sign up for a special publication followed by decrees to “buy my book”.

                                  I may also be cynical.

                                  1. 3

                                    Occasionally I write blog posts about how e.g. signal and garbage collection reentrancy interact with Python threading primitives in an unfortunate way (https://codewithoutrules.com/2017/08/16/concurrency-python/).

                                    But… I feel there’s often too much focus in programming on technical skills and too little on other, less easy to articulate skills. So mostly I try to write about those other skills. They’re harder to explain, and so I don’t always succeed at doing so, but they’re important and useful too.

                                    1. 2

                                      With the caveats that writing is a difficult and distinct skill with many subskills, putting your writing out on the internet for peanut gallery hecklers like myself takes courage and initiative, and further that productively engaging with criticism of your work is admirable: The content of these articles seems to trend a lot more towards a first-principles-y style of communication. This works well for articles like your post on the deadlock behavior queue.Queue exhibits in some interactions with __del__, since the behavior has a concrete, single-source cause.

                                      Unfortunately, human interactions are a goddamn mess that don’t respond well to reasoning from first principles.

                                      If you’re invested in writing insightful posts about dealing with imposter syndrome (“You feel like your growth has been stunted: there are all these skills you should have been learning, but you never did […]”), you may be better off looking into the effectiveness of self-esteem versus self-compassion. If instead you’d like to write about effective learning strategies (“Get your technical skills up to speed, and quickly”), a discussion and extensions of work like Iran-Nejad’s Active and dynamic self-regulation of learning processes would be more appropriate. For how to work effectively in a complex sociotechnical system, Simon Sinek and Edwards Deming and Thomas Schelling and Donella Meadows are all extremely insightful authors with much to say (particularly the late Ms. Meadows).

                                      That is, if your intent is to write cogently and effectively about disciplines foreign to the hard logic of a turing machine, of which there certainly is a need in our industry, your work needs to exhibit more care and attention to prior art in those disciplines. Excepting that, your work at least needs hard-won anecdotes taken from (all too often painful) direct experiences that can help readers reconstruct the tools you yourself learned from those experiences. As-is CodeWithoutRules posts like the parent link are not substantive or grounded enough to do more than express a good understanding of the English language. Since you already have the courage and the practice of posting these, it’d be lovely to see the less technical posts mature into articles that are as important and useful as accessibly documenting unexpected concurrency pitfalls in Python.

                                      1. 1

                                        Since you already have the courage and the practice of posting these, it’d be lovely to see the less technical posts mature into articles that are as important and useful as accessibly documenting unexpected concurrency pitfalls in Python.

                                        I think this is a good point. I’ve nothing against the theme and topics of this post – and I do not wish to discourage work in the area – but the writing is not doing it for me. As @apy stated, it sounds too much like the words of a self-help guru.

                                        I also agree that it takes courage to put it out there.

                                        1. 1

                                          Your argument, as I understand it: I should be writing posts based on either prior research, or empiricism. I certainly agree with that, and I have e.g. done reading on the research on learning (https://codewithoutrules.com/2016/03/19/how-learning-works/).

                                          Let’s look at this particular post, then, and see how it does. It has the following outline:

                                          1. Topic: “Your skills are obsolete: now what?”
                                          2. You, the reader, find this distressing.
                                        3. Almost all projects use old technology.
                                        4. Therefore, you can upgrade to newer technology where suitable; here are some tips on how to do so.
                                          5. Not seeing technology that needs upgrading is an instance of a bigger problem: waiting for problems to be handed to you, rather than actively seeking them out.

                                          Expanding on each:

                                          1. The topic is “obsolete skills”. Not impostor syndrome, not learning techniques.
                                        2. This is a real situation. The particular sentences you quote are based on someone in this exact situation: they’re not suffering from impostor syndrome, they really are in a bad place. I have observed other programmers in this situation. @neonski has observed people in this situation (see his comment elsewhere on this page).
                                          3. Use of old technology is an empirical observation, or maybe a well-known fact given e.g. state of security updates. I could find some research to cite, but it wouldn’t add much to the article.
                                          4. This is a suggestion based on empirical observation of a skill I and many other programmers have. In particular I, and many other programmers, will simply go ahead and upgrade, or suggest upgrading technologies every day at work. Every other week I end up researching new technology on the job. I almost never learn new technology at home. From the perspective of someone who has been in technical leadership of a team, junior people saying “hey here’s a problem, here’s a solution, can we try it” is always great.
                                          5. This is standard engineering skill tree progression. E.g. Charisma column of https://docs.google.com/spreadsheets/d/1k4sO6pyCl_YYnf0PAXSBcX776rNcTjSOqDxZ5SDty-4/edit#gid=0 but that’s just an easily findable version. Pretty much every organization that bothers to write this down has a similar progression.

                                          It’s possible my full blog post didn’t do the best job of expressing this outline. But there’s no reasoning from first principles.

                                        2. 1

                                          So mostly I try to write about those other skills. They’re harder to explain, and so I don’t always succeed at doing so, but they’re important and useful too.

                                          The flip side is, like any self-help advice, it’s much easier to bullshit than the hardcore tech stuff. I’m sure you have good intentions but writing about stuff that is easy to make up because nobody knows the difference means you have to be all the better of an author. For me, at least, and I mean this in the most constructive way possible, it’s really hard to tell if you’re some self-help guru trying to make a name off suckers or actually have insights.

                                      2. 6

                                      The company using Perl is pretty much stuck with Perl for years to come, though I can imagine it changing over a decade or so, component by component. But they also use many other technologies, and those have been upgraded over the years:

                                        1. At one point they switched from doing all communication between systems via Oracle to communicating via RabbitMQ.
                                        2. They have a web UI, so plenty of opportunity to upgrade there, though I don’t know details.
                                        3. They tended to use end-to-end tests, but at some point there was a push towards adding unit tests as well, which involved introducing new Perl libraries.

                                        No doubt many other changes as well.

                                        Those changes were all introduced by someone. Sometimes by people in charge, but I know some were introduced by lower level employees.

                                        1. 1

                                          Those changes were all introduced by someone. Sometimes by people in charge, but I know some were introduced by lower level employees.

                                          Sure, but were they introduced using the technique you suggest? IME, a lot of these happen because when nobody is looking someone just Does Something and now we’re stuck with it, for better or for worse.

                                        Just because companies do change doesn’t mean they used the method you suggest. Have you successfully used the method you suggest? If so, some first-hand experience would at least lend some strength to your statement; right now it just feels like you reasoned your way to how this must work and wrote a blog post about it regardless of whether it matches reality.

                                          1. 1

                                            Sure, but were they introduced using the technique you suggest?

                                            Talking to one’s manager and pointing out a problem? That did happen, yes: I believe the unit test library they used was proposed by someone to the team lead. And I’ve certainly done this many times.

                                      1. 4

                                        Just for the sake of argument: YAGNI boils down to “prefer simplicity to complexity”. RDBMS and SQL are very complex. If I’m following YAGNI, why should I use an RDBMS instead of something much simpler, like Mongo?

                                      ((I’m strongly in favor of SQL over NoSQL in 99% of cases, just curious about how other lobsters answer the puzzle.))

                                        1. 18

                                        Two acronyms that trigger me immensely after seeing a lot of devs abuse them are YAGNI and DRY, usually because they are parroted back blindly by people that aren’t thinking holistically about their systems and the people building and maintaining those systems. For DRY, as an example, a bunch of copy-paste config scripts or boilerplate can actually be a lot easier to troubleshoot and maintain than a byzantine architecture designed to abstract things away so people can skip writing var window = new Window(0,0,200,200); var window2 = new Window(100,100,200,200);.

                                          More to your point, with YAGNI, the answer for me is that yeah, starting out it’s honestly faster to use a memory store (say, var sessions = Object.create(null)) instead of even Mongo! If you need persistence quick, use property files or json blobs flushed to disk periodically.
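
                                        For what it’s worth, here’s a minimal Go sketch of that “json blobs flushed to disk periodically” idea (the file name and interval are arbitrary choices for illustration):

                                          package main

                                          import (
                                              "encoding/json"
                                              "os"
                                              "sync"
                                              "time"
                                          )

                                          // store is the whole "database": a map guarded by a mutex.
                                          type store struct {
                                              mu   sync.Mutex
                                              data map[string]string
                                          }

                                          func (s *store) set(k, v string) {
                                              s.mu.Lock()
                                              defer s.mu.Unlock()
                                              s.data[k] = v
                                          }

                                          // flush snapshots the map to disk as a single JSON blob.
                                          func (s *store) flush(path string) error {
                                              s.mu.Lock()
                                              blob, err := json.Marshal(s.data)
                                              s.mu.Unlock()
                                              if err != nil {
                                                  return err
                                              }
                                              return os.WriteFile(path, blob, 0o644)
                                          }

                                          func main() {
                                              sessions := &store{data: map[string]string{}}
                                              sessions.set("user-1", "logged-in")
                                              go func() { // periodic flush: good enough until you actually need a database
                                                  for range time.Tick(5 * time.Second) {
                                                      _ = sessions.flush("sessions.json")
                                                  }
                                              }()
                                              time.Sleep(6 * time.Second) // let one flush happen before exiting
                                          }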

                                          But, and this is where people usually screw up, you use your experience to inform what you’re going to need. Things that every business needs within the first few months of development:

                                        • Monitoring, even a simple heartbeat 200 route (see the sketch after this list).
                                          • Sending emails
                                          • Collecting user emails
                                          • Authenticating (not authorizing!) users
                                          • Metrics on pageviews to show traffic and conversion
                                          • Querying relationships between business domain entities
                                          • Logging for when things blow up
                                          • Persisting user data to disk
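
                                        To make the first item on that list concrete, the heartbeat 200 route really can be this small (the path and port are arbitrary):

                                          package main

                                          import "net/http"

                                          func main() {
                                              // a monitor only needs something that answers 200 while the process is alive
                                              http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
                                                  w.WriteHeader(http.StatusOK)
                                              })
                                              http.ListenAndServe(":8080", nil)
                                          }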

                                          Decades of work has shown that there are no special snowflakes in these regards!

                                          And yet, claiming YAGNI, a lot of places pretend that those things are not a concern right now and never will be a concern and end up doing really heinous shit that even a moment of reflection would’ve prevented. Example of this would be building an e-commerce site (one of the literal academic exercises for SQL) with a store like Mongo.

                                        Like, yes, right now there is no need to do a rollup of quarterly sales by product line and vendor, but that is something we know you’re going to want as soon as you figure out that such a thing exists. But, if people have been strict lean-startup YAGNI the whole time, you’re probably going to find out that the way forward is to retroactively bolt on some hideous schema and relational model to the application layer and hope that that gives you real numbers.

                                          Similar things that people cry YAGNI on:

                                          • “We don’t need transactions for our database yet.”
                                          • “We don’t need more than one prod server yet.”
                                          • “We aren’t going to need site analytics.”
                                          • “We aren’t going to need an HTTP API.”
                                          • “We don’t need linters and code climate stuff yet.”

                                          One of the signs of seniority, in my opinion, is to have an engineer that recognizes when the business is, in fact, going to need it–and in all other cases, aggressively fake it in such a way as to not hamper later fixes.

                                          1. 7

                                            There’s also a tendency to mistake “I don’t know it” for “it’s too complex,” when other people who you can hire are more likely to know it than the “simpler alternative.” Relational databases are the best example: I’ve never seen anyone argue against them who was comfortable with them. If you base your system on the Next New Thing, how likely is it that it will still be around in forty years, like SQL? Or that you’ll be able to hire someone to help you with it?

                                          2. 5

                                            Just for the sake of argument: YAGNI boils down to “prefer simplicity to complexity”. RDBMS and SQL are very complex. If I’m following YAGNI, why should I use an RDBMS instead of something much simpler, like Mongo?

                                          The complexity of an RDBMS is isolated in a single unit that was thoroughly tested and presents a simple API to the programmer. The complexity of a key-value store is mostly in the programmer having to maintain a schema outside the database and having to deal with new and exciting bugs that usually end up with the kind of data loss that would make MySQL look sane.

                                            1. 4

                                              YAGNI boils down to “prefer simplicity to complexity”. RDBMS and SQL are very complex. If I’m following YAGNI, why should I use an RDBMS instead of something much simpler, like Mongo?

                                              “Simpler internally” is not the same as “simple to deal with.” The latter is more relevant.

                                              I’d ask two questions: 1) what needs are we most likely to have in the future? and 2) how much pain will we have if we’re wrong?

                                              For instance, you may need high scalability. You also may need relational integrity.

                                              Which one are you more likely to need? I’d guess “relational integrity”, as every system I’ve ever seen has had at least some relational data. (Even loosely-structured document data needs to belong to a specific user.)

                                              Which one is harder to bolt on later? If you pick a RDBMS and need to scale it, indexes, caching, sharding and clustering are all things that can help. If you pick a NoSQL database and need to add relational integrity and transactions… you’re basically sunk.

                                              Which problem hurts more to have? If you have scaling problems (and your business model is sane) you have proportionally large revenue and can afford to work on scaling. If you have data integrity problems, they may be costing you the only customers you have.

                                              1. 3

                                            It depends, doesn’t it? If you’ve already got an RDBMS humming along, adding a second type of database is definitely more complex. If you don’t, I can see a case to be made for grabbing MongoDB, but… throwing around random schemaless JSON documents can cause headaches unless you add complexity in the form of discipline/coordination of changes/monitoring/etc. Some of which you may already have, further complicating the analysis. Either way, it’s easiest to work with the grain of your architecture– which fits the spirit of YAGNI.

                                                Looking at YAGNI in particular, does the term itself ever get invoked in discussion as something more than a tool to shut down conversation?

                                                1. 1

                                                  Just for the sake of argument: YAGNI boils down to “prefer simplicity to complexity”. RDBMS and SQL are very complex. If I’m following YAGNI, why should I use an RDBMS instead of something much simpler, like Mongo?

                                                  I think people misunderstand YAGNI. As an engineering principle the idea is that, when you find yourself asking “Hmm, should I do X or build Y now, because someone might want it”, then the answer should be No. It arose in opposition to the Java Factory Factory Factory Overapplication pattern of adding extra injection points “just in case” someone wanted to introduce a different kind of FooBean down the road, which usually never happened, leaving you with a lot of extra complexity to read through for zero real-world gain.

                                                  It doesn’t really apply to questions like “Do I want X or Y?”, in my opinion. It’s purely a heuristic for rejecting undertaking “just in case” work you don’t actually have a concrete use-case for.

                                                  (In the case of RDBMS vs Mongo, I propose a different heuristic: NENM. Nobody Ever Needs Mongo).

                                                1. 1

                                              Sorry if I missed it, but I’m assuming the study allowed students to use the laptop as they would in a real class. That means a lot probably tabbed out of their note-taking program to browse the Internet. That definitely proves it’s inferior to taking notes by hand, which provides fewer distractions, but I’d be interested in seeing the results for students taking notes on laptops without any distractions.

                                                  I personally took notes by hand but I never actually referred back to them. Even when studying, I’d take notes again instead of reading my old ones.

                                                  1. 7

                                                    The study opens up discussing how previous studies allowed for laptops to become distractions via multitasking and internet browsing. The authors here ensured that the laptops weren’t connected to the internet during the experiment and they also explored a few different methods to address weaknesses in prior studies. Overall, it seems like a pretty thorough experimental design.

                                                    1. 2

                                                      That means a lot probably tabbed out of their note-taking program to browse the Internet.

                                                      You can also doodle on the paper and that is as good a distraction as any. You can run a space battle simulation this way even.

                                                    1. 3

                                                The author calls out an interesting phenomenon w.r.t. backpressure– I’ve found that at my new employer, my task queue tends to grow without bounds when using digital tools due to the lack of finite space available. At the same time, it’s extremely easy to pick up enough responsibility to overflow the backup and editing capabilities of a pen-and-paper list. Not to mention the potential to discard a list that’s still active and lurking beneath a pile of other documents!

                                                      Personally, I’ve transitioned to editing a list in a single plain-text ‘tasks’ file on my desktop (file length seems to provide a soft-cap to my queue size), but what systems do other crustaceans use to avoid taking on so many tasks that it’s impossible to complete them before the heat death of the universe?

                                                      1. 2

                                                  I’ve been a big fan of Trello, but it’s not immune to overflowing. I recently split my main todo list into a “todo list” and a “someday hell” to move all the wacky ideas, fear-based someday tasks, and other nerd stuff somewhere else. Always be refactoring your todo list!

                                                        The other thing I do for indie software development in trello is move a list of finished tasks for a given month to a ‘myproject done board’ to remind me of times when I’ve been really productive.

                                                  When Joel / Fog Creek first started Trello, they mentioned wanting a “five things” board: something where the design made sure having more than five things wouldn’t happen as easily.

                                                        One of the other things I’ve experimented with is keeping a “pervasive today” list on my pebble watch via the “agenda watchface”.

                                                        1. 6

                                                          I actually use a card system myself, but not a strict GTD system - my cards are notes and ideas and lists and brainstorming put on 2 ¼” x 4” blank Rolodex cards rather than index cards.

                                                    I have three 66704s on my desk - one is for “overflow”, one for “completed”, and one for “archive”. I generally don’t have a hard limit, but I carry whatever is manageable and appropriate, and when I get overwhelmed I move cards to the overflow file. I have this setup at home and work.

                                                          Each file holds 500 cards and when it’s full I look back on the oldest idea and decide if it’s a lost treasure worth revisiting - if it is, it goes right back to the active deck I carry and if not it goes into the archive file.

                                                          The “completed” file works the same - once it’s full with 500 cards they move from complete to archive.

                                                          Once “archive” is full the removed cards get put in a drawer and every few months I rubber band the cards in the drawer up, label with the date, and toss them in a box.

                                                          That box ends up like my journal or diary of sorts, with all my ideas and musings, good and bad, completed and never started projects, etc.

                                                    I end up looking at the cards in the “completed” Rolodex more than I’d expect, and the “overflow” less than I would like. On a day where I am either bored - or maybe particularly energized, as the case may be - I will look through the “overflow” in search of a great but almost forgotten idea, and add it back to my active “carry” deck.

                                                          FYI - my coworkers all laugh at me and make punch card jokes and such, and ask how I access my cards in the cloud, or if I worry about the Russians hacking my pens. :(

                                                    However, my work-life balance has improved by having physical Rolodexes separated between my house and my office. It enforces a rough “locality hygiene” in that I stay mostly focused on work at work and my personal grandiose projects stay mostly at home, besides what mixes in the active deck.

                                                      1. 22

                                                        This is getting to be a dumpster fire of a thread, but I’d like to add something I haven’t seen brought up, with regards to the sentiment expressed by several crustaceans that the author inserted gender or identity politics everywhere while at GitHub.

                                                        Here’s the thing about identity: it’s not a thing. It’s an is. Identity already is, and it already is everywhere. There’s no inserting it, because it just is.

                                                        The only groups who don’t know this intuitively are those who are in the sociologically “default” groups. In the U.S., that’s straight, white, males.

                                                        1. 14

                                                          I’m going to try my best to gingerly step around this and if I manage to make an oaf of it, I’m sorry.


                                                        Continuing along this line of thought– when trying to debug a technical issue, there are a lot of things that are, but that still remain irrelevant to developing a fix or better way forward. For instance: if some machine in AWS fails during traffic failover, the most relevant facts will tend to involve the machine experiencing more traffic than expected. Total RAM, disk capacity, systems handling traffic, etc are all pretty safe bets to check out to make progress during either debugging or a post-mortem, while physical facts that can be identified about the machine are much lower on the scale of probable issues. However, it’s completely true that we cannot ever get away from facts about the machine. It will have some physical location, it will have some class of CPU with some memory capacity. You can enumerate any number of facts about the machine that form its identity– these things just are.

                                                          In the same way, to quote the author:

                                                          In the midst of my discussions with the editorial team, trying to reach a compromise, a (male) engineer from another team completely rewrote the blog post and published it without talking to me.

                                                        The core injury, as I can see it, is that another engineer rewrote the author’s blog post without consulting her. That sucks. In solving the core injury, I do not think it important that the engineer is male– one individual’s actions removed agency from another individual here. There may or may not have been good reasons for it, but either way it’s not something I’d like to happen to me or anyone else. To that end, it feels more important to both understand why and prevent that sort of thing happening in the future. Though the fact that the other engineer can be identified as male certainly adds insult to this injury, this would still be bad if they were female. Or transgender. Or an agender markov chain with a graph coloring problem, whatever.

                                                          This is, I think, where we start to see people describing that the author is inserting gender or identity politics. It’s not that identity ceases to exist, because that’s absolutely inescapable, but mentioning the package between another engineer’s legs isn’t likely relevant to fixing the core issue. It could very well be– going back to AWS failover example, all the machines in a specific rack may just be bad and that’s the real problem– but without significantly more data the mention of this other engineer’s gender serves only to bias the reader away from a deeper analysis of the situation.

                                                          I personally find it rather difficult to not become distrustful of the author’s stated intent when she uses identity in this way throughout her post rather than spending more care describing the motivations and rationale of other individuals she has written about. That’s not to say that Github is absolutely blameless, either– taking the latter parts of the post at face value, the author’s manager at least could have handled things better. Just that the situation is probably more complex than the author lets on and probably not so overwhelmingly related to the specific gender or commitment to diversity of any individuals she has written about.

                                                          1. 4

                                                            First of all, thanks for being willing to engage sincerely. These kinds of topics are politically and emotionally charged, and they’re not easy to talk about. It’s very easy to dismiss these issues as the rantings of an “SJW,” a mentally ill person, or a hypocrite (all of which have been done in this thread), but I think Lobsters can (and should) do better than that. Thanks.

                                                            On to the point: you are, in principle, right. It’s entirely possible this specific incident did not involve anyone who was motivated by animus towards individuals with a particular gender expression.

                                                            However, that kind of argument stretches the credulity of folks who study this, and of folks who are on the receiving end of gender discrimination in our society. In keeping with the debugging analogy — which is really nice, by the way, can I borrow it? — an experienced debugger starts to get an extra sense for when there’s a bad block on a disk, or there’s a race condition in a piece of software, or a bit flipped in a big non-ECC memory array. These are based on patterns and heuristics that are hard-won over years of encountering real problems, finding enough evidence to conclusively decide upon a root cause, and then recognizing those patterns the next time they come around. If you’re right often enough to make a career out of it, or develop a reputation over it, then most people are comfortable saying that you’re correct in your assessments, and that when you smell smoke folks should get ready for fire, even if no one else has smelled it yet. Nonetheless, pick any one of those incidents, and the facts might not be convincing to an observer brought in to examine that incident alone.

                                                            The standard of evidence here is not that of the courtroom or the laboratory, though — it’s that of the water cooler (or the blogosphere). This person is posting their interpretation of events that happened to them, in order to offer a warning to others who might be in a similar situation, with similar concerns. Nothing more, nothing less.

                                                            You might notice, however, if you’re on the lookout for these patterns, or have had someone spit on you because of your gender presentation a few times, that when someone writes a blog post called “Amazon Burned Me Out and Takes Advantage of College Students,” we end up with discussions about what reasonable labor expectations are, but when someone writes an article called “My Year at GitHub,” talking about their experiences with gender discrimination, we get discussions about how “SJWs are hypocrites,” “this person is mentally ill,” or “aren’t they just looking for gender discrimination and finding it because they want to?” To your credit, you asked the last question, which is by far the most reasonable of the three. But perhaps you can see why it rings hollow to someone who has dealt with this so often, and for their whole life: to them, you’re like the junior sysadmin asking “shouldn’t we be calling support,” while the guy in the corner, with a tube of thermal paste, is mumbling about how he can hear the heat buildup from the loose heatsink on the 9th processor core.

                                                            1. 4

                                                              Yeah, thanks! It seems to be really difficult to engage with this topic in good faith, so I deeply appreciate your responses as well. As far as the debugging analogy goes, words are pretty thin air; borrow as you’d please. :)

                                                              To get to the meat of your reply, it’s… we’ll obviously have different heuristics we can match against the situation the author described here. Even in just our conversation, I don’t feel confidently equipped with a reasonable standard of evidence for water cooler discussions. In my experience these kind of informal conversations carry significantly more weight than expected, but that doesn’t seem to be the case for others. Or maybe I don’t have enough information to make that kind of judgement. And as far as things ringing hollow…

                                                              Let me back up a bit and lay out a few assumptions I try and operate with that are maybe(?) relevant.

                                                              • The author’s experience qualifies as an ongoing catastrophe. For her, for Github, and for the wider technical community regardless of race, creed, gender, sex, ability, age, etc, etc

                                                              • Any system with more than two components (be they individuals, management systems, technological systems, machines, et al.) is a complex system

                                                              • My piddly meat-brain does not have the computing power to fully model complex systems of any stripe

                                                              • Complex systems fail in complex ways

                                                              It’s that last part that I want to emphasize– particularly (from the linked pdf):

                                                            1. Catastrophe requires multiple failures - single point failures are not enough.
                                                            2. Post-accident attribution to a ‘root cause’ is fundamentally wrong.
                                                            3. Views of ‘cause’ limit the effectiveness of defenses against future events.
                                                            4. Safety is a characteristic of systems and not of their components.

                                                              (and basically all the other ones, too. It’s a good read, highly recommend it if you have the time ¯\_(ツ)_/¯)

                                                              My difficulty with the author’s work, and a lot of similar work, is that it suggests that these catastrophic experiences are the result of a singular category of root cause. Call it sexism, racism, general discrimination, patriarchy, systematic oppression¹, etc, these all pattern match to me as “individuals of this outgroup inherently don’t like people in my ingroup and because of this treat us badly”. Which is a huge problem! It sucks, it’s counterproductive, and I really wish I knew the words to express that without marginalizing it with the inevitable, “and also…”. Because to me, even with the grossest delineation of components possible, we still wind up with interactions between the author’s past self, Github the sociotechnical organization, and the community discussing the author’s work. Considering I am nowhere near intelligent enough to model three things in my head, I’m comfortable describing it as a complex system. And because complex systems fail in complex ways, there’s significantly more unique faults here than Github’s poor behavior as an organization as written.

                                                              At the end of the day, I can’t solve discrimination or stereotypes or the million billion of microaggressions the technical community lavishes onto anyone who isn’t easily type-classed as cis-white-hetero-male-college-graduate-under-thirty-enjoys-social-drinking-reads-hacker-news-check-out-this-cool-framework-docker-docker-mesos-startup-docker without fundamentally solving the disease that we call the human condition². I don’t even know where to begin with that, but the author’s work puts a lot of emphasis on the selfsame identity of individuals they interacted with. To be clear: I think it’s important that we accept this as a candid reflection of her subjective experience without significant evidence otherwise.

                                                            And then also, there are other factors that contributed to this catastrophe, many of which are easier to solve and significantly more productive to discuss than the ways in which identity interacts with bias. For example, we could talk about what respectful workplace feedback between individuals of any identity looks like– the author’s written communication style seems rather blunt to me, and I can understand why the data scientist described early in the post was upset. A Crustacean elsewhere in this thread had feedback on the survey question itself that I found surprising; it would be interesting to read other opinions on that, too. What kind of tradeoffs should we be making between factually correct and emotionally sensitive feedback? When is it possible to get both, when is it not? Another Crustacean brought up that PIPs were not for improvement, despite the goals clearly stated in the name. Is this a widespread practice and/or can we avoid working for companies who have such policies?

                                                              Discussions along these lines lead to contributing factors that are easier to solve or work around than the widespread mistreatment of classes of people by classes of other people³. Above all, it frustrates me (and likely many others) that much of what good, actionable work we can source to make things better feels like it immediately gets tossed out the window when we start discussing identity in these contexts.


                                                              ¹ - That *-isms are an emergent property of the complex web of social systems we engage with is the most interesting view to me in that it at least acknowledges the wider context in which we all interact. Unfortunately, it also seems like it’s easy to short-circuit on that description and throw up your arms in learned helplessness at the idea, too. Don’t know of any good solutions though, only tradeoffs.

                                                              ² - Poe’s law warning.

                                                              ³ - This is a defining and unfortunate trait of humanity as a whole. My fear is that it’s not entirely maladaptive, either.

                                                              1. 4

                                                                Hey, sorry for the late response, but I didn’t want to just let this hang, because your response is thoughtful and worthwhile.

                                                                I’d like to give a point-by-point response, but time is unfortunately short, and I just don’t have the time to do it well. Instead, I’ll focus on two broad points you raised, which I hope might help you to see where the author (and I) are coming from.

                                                                First, on the topic of complex systems, I agree that it’s a great read — as an operations guy, it’s basically required reading for me — and I also agree that, fundamentally, every particular interaction has a huge number of variables, and it’s unlikely only one or two of them contributed to the incident. I don’t think that’s actually what’s being argued here, however. In addition, I also think some of your premises are incorrect. In support of that, I’d offer a few points.

While our piddly meat-brains are not good at modeling most complex systems, they’re existentially good at modeling human cultures. It’s literally the condition of their existence. Homo sapiens is what it is to the extent that it is cultural, and culture is how we’ve managed to become a technological society. So while we’re certainly not perfect at modeling complex societies in our minds, I suspect we’re very good at a kind of principal social component analysis of our existential threats. And while the author’s case was probably not existential, systematic forms of oppression are, in the general case. A lifetime of living at the end of one of those barrels will, of course, make you gun-shy.

I also think you underestimate how much we, as a species, know about this stuff. Which is normal, and essentially the fault of academics for being quiet geeks. Nonetheless, a ton of research goes into the study of society, and as a result of reality being such as it is, a ton of research goes into the study of oppressive structures. There’s a tendency among a certain milieu (computer nerds, like me) to dismiss the fields that study this (sociology, anthropology, history, political science) as “not really science,” but I think this is an impoverished (and incoherent) view of what science entails. As someone who studied the anthropology of liberal democracies (yes, we do that!) intensively in the past, with the intent to make a career of it, I am very comfortable in saying that the evidence that these systematic forms of oppression are real, and that they contribute to these smaller incidents (“microaggressions”) in a real way, is overwhelming. To my ear, the insistence that the debate is somehow wide open on this sounds similar to the suggestion that anthropogenic climate change is under serious debate. I’m not suggesting that of you, to be clear. Much of that research produces actionable results, which of course are basically stuck in journals that only universities can afford.

                                                                A couple things indicate to me that you (like most people) are thinking about the whole situation differently than the author or I. This struck me in particular:

                                                                Call it sexism, racism, general discrimination, patriarchy, systematic oppression¹, etc, these all pattern match to me as “individuals of this outgroup inherently don’t like people in my ingroup and because of this treat us badly”.

No one is suggesting this is “inherent,” and it’s really not even a matter of “like,” and moreover, not of “individuals.” I can only speak for myself, but most of the “lefty” persuasion would say that this is a structural issue, which has expression in individual interactions, but it doesn’t necessarily mean that the individual doing the expressing harbors any dislike of the person on the receiving end. Moreover, that’s essentially beside the point. Even if the individual harbored no ill intent, and no feeling of dislike, the structural issue remains. If you’re interested in dealing with the problem, you have to deal with those individual expressions, too.

Now, the classic Marxist answer to this issue is that you should not deal with the individual expressions, you should seize state power and end oppressive relations once and for all. Aside from the difficulty of doing so successfully and without becoming the abyss, as it were, the objection to this that came out of critical theory and identity politics is this: as people, we’re born and stewed in this society, and we’re shaped by it. Just because the workers seize the means of production, it doesn’t mean the white people will want to hang out with the black people, and it doesn’t mean trans folks will stop being murdered at a higher rate than other groups. Besides, the objection continues, did you notice the 1970s, and the neoliberal “Washington consensus?” Did you watch them deregulate the markets, destroy unions, and have a Democrat dismantle welfare? There’s no working-class consciousness anymore, we’re not going to seize power in this century, and we need to do something in the meantime.

So we try to confront individuals, and we try to get private corporations to put some pressure on the structural issues. This is a ridiculous, almost farcical task, because we know individuals hate being told they’re hurting someone when they didn’t mean to, and we know the corporations don’t really care. We also know (despite some of the more absurd things that have been asserted in this thread) that we don’t have much power. Just look at the demographic distribution of presidents, legislators, judges, local politicians, or corporate leaders and you can confirm it. A five-year-old could see it. So when the author is critiquing people directly, and critiquing GitHub directly, it’s coming from a place of being cornered, of being threatened, and of having to fight for the right to be treated like everyone else for your whole life. You develop a shorter, more direct tone. You don’t preface every criticism with “I know this person was trying very hard,” or “I’m sure the person at GitHub who started this program really cared.”

The final point I want to make, and it’s harder to swallow if you feel like you don’t have skin in the game, is that for me, and probably for the author, this is all more than a question of “is (say) GitHub a nice place to work,” or even “is GitHub the kind of place I’d like to work at?” It’s part of a broader question: “which side are you on?” For the author, the social structure chose the side already. For me, it’s a political and moral commitment based on my religious beliefs. But in both cases, for us, the kinds of questions you posed at the end of your comment — like “what kind of tradeoffs should we be making between factually correct and emotionally sensitive feedback?” — are the wrong questions in these cases. The right question for us is “which side are you on, the side of the weak, or of the powerful?” In an ideal world, I would love it if the best question we could ask was always about the individual case. But so long as the last is last and the first is first, I feel that I must always be with the last.

                                                                I get that my conclusion here is not something everyone is on board with, but I think it’s important to note that an article like this is coming from a different place than most people. It’s shop talk — “hey fellow civilization-destroying SJWs, got a new job this weekend, just FYI they don’t get it, they’re not approaching hiring underrepresented groups as countering structural issues, it’s basically tokenism, probably stay away. Okay, see you next time.”

                                                        1. 17

                                                          I’m not comfortable adding tags to accommodate unjust policies. Should we add an “anti-Kim Jong-un” tag to accommodate North Koreans who don’t want to be prosecuted for viewing articles that speak against the supreme leader?

                                                          The fact that this is only relevant to people with security clearance also puts severe limits on its usefulness. If you have some level of clearance, you would be more inclined to read articles about documents that you are cleared for. So “classified” wouldn’t be enough for people with security clearance to decide whether they’re allowed to read an article. You’d really need a tag for each classification level and handling caveat (SECRET, NOFORN, etc.), which is already taken care of upstream in the headers that agencies put on classified documents.

                                                          1. 3

                                                            Just because you’re cleared to handle SECRET information doesn’t mean you have the need to know any particular datum and, anyway, downloading or viewing any level of classified information on unclassified machines is not kosher for anyone– individual classification tags are probably not necessary.

                                                            1. 1

                                                              But shouldn’t you be more interested in such an article, as you have potentially more context than the general public, and it’s potentially more relevant to your life?

                                                              1. 3

                                                                Interest isn’t the issue. If you have a clearance and don’t have need to know, even if the information is within your level of access to classified material, you cannot legally access the material.

                                                                1. 2

                                                                  Ah I see. So is it illegal for journalists to look at documents they receive from leakers? Or does this somehow only apply to people with some level of clearance?

                                                                  1. 3

                                                                    It’s not that it “somehow” only applies to people with clearances. People with clearances agree, as part of getting a clearance, to only access classified information within their clearance level and for which they have need to know. This agreement is binding, and violation of it is potentially a crime. The average person has not entered into such an agreement, and more broadly it would be unreasonable and impractical for the government to try and punish all people who become privy to the contents of publicly disclosed but still classified materials.

                                                                    1. 13

                                                                      To be quite blunt: it’s their job to ensure this. Independent of whether we should add a tag or not, making it easier for people entering certain agreements to uphold this agreement is not the job of this website. Browse at your own risk.

                                                                      We navigate an unlabeled world all the time, and while I’d prefer everything to have a clear label (for other reasons), suddenly making these people the special case where it’s absolutely necessary is odd to me.

                                                                      1. 3

                                                                        making it easier for people entering certain agreements to uphold this agreement is not the job of this website. Browse at your own risk.

It’s just a tag. You are making it out to be something that is lots of work for little reward, but it is exactly the opposite: it wouldn’t take much work to add a tag and it helps lots of people to protect their jobs and prevent accidentally becoming a criminal.

                                                                        1. 2

                                                                          “helps lots of people to protect their jobs and prevent accidentally becoming a criminal.”

To help lots of people with clearances clicking links on an obscure forum known to contain illegal releases of classified information and frequented by self-reported hackers. That sounds bad enough for a warrant already. Then, in a scenario like that, they are possibly in the NSA collection system the moment they open the front page, thanks to the 3-degrees policy, depending on whether a monitored person replies to the thread. With that backdrop, I’m surprised they’d even connect to the site at all without anonymity tools or using a shared access point (library or wifi) for deniability.

                                                                        2. 3

                                                                          I can appreciate the point you’re making here. The number of Lobste.rs readers who would care about such a tag as a means of protecting their jobs is likely small. OTOH, it would serve as an interesting data point for those of us who DO NOT have such jobs and might want to read such articles, and I suspect that audience might be larger.

                                                                          1. 2

                                                                            I’m not saying otherwise. I was simply explaining the legal issues underpinning the ability of people with clearances to read publicly available but still classified material.

                                                                          2. 1

                                                                            I don’t understand your first sentence, which seems to contradict the rest of your comment, but thanks for the info.

                                                                            1. 1

                                                                              I meant that saying “somehow” makes it sound strange and nefarious, and I wanted to disagree with that connotation.

                                                                        3. 1

                                                                          This is correct. It was explained to me that the clearance is a vetting process saying you potentially could access something at that level. The specific things that you can access are what you need access to.

Then there’s extra complexity once we go into ownership (did they authorize officially?), SCI, and SAPs. Basic concepts of clearance, compartments, and need to know cover the vast majority of situations, though.

                                                                    2. 2

                                                                      With respect, are you qualified to judge what is just and what is not in absolute terms? What if the people who work in such jobs consider themselves to in fact be just in their actions?

                                                                      1. 2

                                                                        Nobody is qualified to judge what is just in absolute terms, but that’s no reason to give up all conception of justice. And to be clear I am saying the policies are unjust, not actions of the individuals who work under those policies.

                                                                        1. 1

                                                                          Are they? You’re a government. Your goal is to protect your people and further your goals, economic and social.

You come to realize that there are certain pieces of information which, if they got into the wrong hands, could hurt you (again, ‘you’ here being the nation in question).

                                                                          So, you define certain sets of people who can see certain things. Now, I realize, for hard core “all information wants to be free” types, this is a ring zero violation right here. However, for the purpose of this discussion let’s say that not everyone agrees with this as an absolute.

                                                                          You need to define rules to keep the wrong people from seeing critical information, including penalties to keep these rules from being patently ignored.

                                                                          What is inherently unjust about the above scenario? The right of a nation to protect its secrets? Or the idea that said nation can legislate what information its employees can or can’t consume? Note that getting a job with clearance is a choice. It’s a voluntary obligation people are putting themselves under.

                                                                          1. 2

                                                                            Or the idea that said nation can legislate what information its employees can or can’t consume?

                                                                            This would be what I feel is unjust. In particular, when this information is public, it prevents said employees from being informed and engaged citizens. The fact that their employment is voluntary doesn’t make a difference to me - indentured servitude is unjust even if it’s the result of a voluntary agreement.

                                                                            1. 1

                                                                              That is an interesting conundrum, and maybe there’s some room there for reform in the intelligence community.

                                                                            2. 2

                                                                              That scenario you gave is not how the classification systems actually work. They’re a combo of that with political moves and crimes covered up by the classification. In the US, classification of criminal acts isn’t even legal but they do it & punish leakers anyway. Much of defense activity is also driven by corruption where military and politicians get money from contractors plus politicians get votes or jobs in their districts. The possibly-classified justification for or performance of those programs would be lies to justify profiteering on wasted tax dollars. Trailblazer was a recent example.

                                                                              So, these things are what we need to consider if assessing how just or unjust a classification system is. The U.S.’s is a mixed bag of just classifications, unnecessary classifications (“overclassification”), underperforming in declassification (FOIA), and hiding criminal activity. Definitely needs a ton of reform.

                                                                              Although, the Jason Society did a proposal for a replacement system that sounded good, too. So, reform or replace.

                                                                      1. 5

Hi poptart, would you classify yourself as a public servant? If so, can you explain how that jibes with not being allowed to read / comprehend material that is available to the public, and more importantly, perhaps very relevant to your work?

                                                                        Edit: another question: would it make sense, similar to NDAs, that once confidential material is leaked to the public through no fault of your own, it is no longer confidential by definition, and therefore can be discussed by people who would otherwise be prevented (because of an NDA / clearance)?

                                                                        1. 4

                                                                          I’m no lawyer, but I think the gov’s position on classified material is that it never “becomes public”, even when it de facto does, and anyone obtaining it, transmitting it, or just holding on to it is potentially covered by the Espionage Act. Realistically, the damage should be nullified by the fact that the cat is out of the bag, but this is federal criminal law, so reality carries very little weight.

                                                                          1. 4

                                                                            That position seems clearly nonsensical. Therefore there is no sense in supporting it.

                                                                            We are not slaves, we are not drones or zombies either. We are human beings who know better thanks to historical events like the Holocaust, that “just following orders” of the clearly invalid and harmful kind, is wrong. And we do not need lawyers to tell us that.

                                                                            1. 1

                                                                              “Following orders” in the genocidal sense is probably on a different moral spectrum than “following orders” in the “not reading Wikileaks release” sense.

More seriously, though, it is not in itself hypocritical to participate in a system while being against parts of it. I think university should be free, but I still paid for it. I hope enough people can commit the energy to get classification rules changed to address the absurd situation around widely publicized information, even if they want to keep classification in general.

                                                                              1. 4

                                                                                “Following orders” in the genocidal sense is probably on a different moral spectrum than “following orders” in the “not reading Wikileaks release” sense.

                                                                                And if those leaks are exactly about genocide?

                                                                                There are many who allege/claim that many high-ranking Nazis were not well informed about what was going on in the concentration camps.

                                                                              2. 1

                                                                                We are human beings who know better thanks to historical events like the Holocaust, that “just following orders” of the clearly invalid and harmful kind, is wrong. And we do not need lawyers to tell us that.

I don’t really have an elaborate answer to that, except that I feel the lessons from the Shoah had a very short half-life. I don’t agree with that blanket statement at all.

                                                                                1. 1

                                                                                  I don’t agree with that blanket statement at all.

                                                                                  You don’t agree that we’re humans? Or you don’t agree that “just following orders” that are clearly harmful is wrong? Or you think we need to rely on lawyers to understand that?

                                                                                2. 0

                                                                                  In a wider and more cynical sense, I don’t think we’ve fundamentally moved past ‘just following orders’. A large portion of my past and yearly military training was dedicated to reiterating that if you follow illegal orders you can be tried in a military court for the consequences. The same with mishandling classified information.

                                                                                  This is speculation, but I suspect that the USG is concerned here with young, freshly minted employees and contractors coming into contact with classified information they aren’t approved to handle. Both to prevent having to scrub unclassified machines and workspaces of nominally classified materials and to prevent people from forming conclusions based on data they cannot obtain the context for and shouldn’t have known about in the first place. Couple that with the sheer bureaucratic size of our government and there’s not much room for nuance. “Don’t look at, download, or come into contact with classified materials that you or your system are not cleared for” is a simple rule to follow and enforce.

                                                                                  1. 1

                                                                                    Simple to follow? How does one identify which things are classified and not to be looked at, without first looking at them?

                                                                                    1. 2

                                                                                      People with security clearances are trained on recognizing classified material. Usually a classification is printed on the outside and top of each page/slide so uncleared people quickly recognize it. It’s harder with the leaked news, but luckily there is a thrilling bureaucratic process to report that one accidentally saw classified material one wasn’t cleared for. (Spoiler alert: it’s not at all thrilling.)

                                                                                      1. 2

                                                                                        Maybe a classified-info tag? :P

Less cheeky, there’s generally leeway for honest mistakes in addition to reporting channels (if you care enough). Visiting wikileaks: probably bad, probably blocked. Something published by a news outlet? Often indicated through headlines and slugs; general advisement tends to be “be careful of these links and also remember to protect classified information”.

                                                                                        1. 3

                                                                                          Indeed, but there isn’t a “classified-info” tag for the entire Internet, and “classified” isn’t precise enough to tell you whether you have adequate clearance to click the link anyways. So I feel that this tag request is a bit off the mark.

                                                                                          If government employees really do want to be subordinated to their employer, they should do something client side to block web pages according to their clearance level. The best solution of course would be to use Tor, so they don’t feel pressure to obey unjust policies.

                                                                                          1. 2

This is exactly what it is, and you nailed the “still classified even if it’s public” part. This is why you will never see a public servant talk about classified data leaks in almost any situation: the leaks are still classified. It’s quite cumbersome and not oriented towards a tech-driven world, but it exists as protection for the readers, not for the government entities themselves.

                                                                                    2. 2

                                                                                      Regardless of how much sense it makes, you correctly summarized the government’s position.

                                                                                    3. 3

First off, I’m not a public servant. Never have been. But I do work with people who are, and this is a thing that has come up in conversation multiple times; they have all stated how much they appreciate the netsec reddit community’s tags. I personally thought it might be useful to have here. hobbified is correct: classified information that is public knowledge is still classified. I can’t justify that and think it’s crazy, but I suspect it is a holdover from times before the internet.

                                                                                      1. 2

                                                                                        So is there anyone here who would actually use the tag?

                                                                                        1. 3

                                                                                          Yup.

                                                                                          On absolutely everything.

                                                                                          It would act as a “push” filter to remove Apparatchiks from the conversation.

                                                                                    1. 8

                                                                                      Ah, interesting edge-case!

                                                                                      Maybe something a little more descriptive than classified? My first thought when I clicked on this was “oh great somebody wants to sell reused lobsters or something”.

                                                                                      badthink? job-hazard? knowledge hazard? sensitive-information?

                                                                                      1. 9

                                                                                        I agree.

                                                                                        Plus, one nice thing about lobsters is that it’s an international community. Let’s not go ahead and have US-centric tags like that.

                                                                                        A sensitive tag might be ok.

                                                                                        1. 4

                                                                                          I like the intent, but trying to actually think up a better word quickly heads into the weeds.

                                                                                          Maybe clearance-hazard?

                                                                                          1. 5

                                                                                            classified-information sounds recognizably straightforward.

                                                                                            1. 3

                                                                                              Better but still big. Maybe classified-info. I’d be surprised if it even matters on this site, though.

                                                                                              1. 2

                                                                                                I wouldn’t be opposed to badthink, still seems superfluous though.

                                                                                                1. 2

                                                                                                  You mean crimethink? Nice. We let them have a tag to help them but they get reminded of the tyranny they work under every time they use it or see it.

                                                                                                  1. 2

                                                                                                    Ah yes I guess it was crimethink. Another alternative would be a “leaks” tag, which would be less useful for people trying to avoid viewing forbidden content, but more useful for civilian users who are interested in leaks. And people who don’t want to see anything that some authority may not want them to see could still filter it out.

                                                                                          2. 1

                                                                                            Honestly I’m not sure being super descriptive is necessary or helpful. I think that sensitive is perfectly reasonable. badthink is humorous, but may not be obvious to find in the tags.

                                                                                          1. 4

                                                                                            This seems like a whole lot of work in abstraction for little to no practical benefit. Granted, it got turned into a library, but how did this library benefit from this abstraction other than by increasing its size?

                                                                                            All this abstraction gives you nothing beyond the obvious facts that diffs can (sometimes) be applied one after the other and that they can be reversed. Knowing that a merge is a pushout brings you no closer to actually performing a merge, which is the central problem that a VCS must solve. This version of the theory also skirts the interesting question of when patches commute (i.e. no merge conflicts), although I suppose you could draw more abstract commutative diagrams to define commutation, without actually giving a method to determine when this happens.

                                                                                            If you want to exercise your neurons, a much more practical use of your time would be to understand diff algorithms (e.g. Myers, patience) and 3-way merge algorithms (e.g. suremerge). Understanding and improving those two can result in lovely results such as a semantic diff/merge that works at the syntax level instead of the line-by-line level.
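
To make the 3-way part concrete, here’s a minimal sketch (in Go, with invented names, and assuming the three versions are already aligned line-by-line against their common ancestor — producing that alignment is exactly what Myers/patience-style diffs are for, and it’s where the real work lives):

    package main

    import "fmt"

    // merge3 applies the core 3-way decision rule to versions that have
    // already been aligned line-by-line against a common ancestor. Real
    // merge tools do the hard part first: computing that alignment with a
    // diff algorithm and handling insertions and deletions of whole regions.
    func merge3(base, ours, theirs []string) (merged []string, clean bool) {
    	clean = true
    	for i := range base {
    		switch {
    		case ours[i] == theirs[i]:
    			// Both sides agree (or neither side changed the line).
    			merged = append(merged, ours[i])
    		case ours[i] == base[i]:
    			// Only their side changed the line.
    			merged = append(merged, theirs[i])
    		case theirs[i] == base[i]:
    			// Only our side changed the line.
    			merged = append(merged, ours[i])
    		default:
    			// Both sides changed the same line differently: conflict.
    			merged = append(merged, "<<<<<<< conflict >>>>>>>")
    			clean = false
    		}
    	}
    	return merged, clean
    }

    func main() {
    	base := []string{"a", "b", "c"}
    	ours := []string{"a", "B", "c"}
    	theirs := []string{"a", "b", "C"}
    	out, ok := merge3(base, ours, theirs)
    	fmt.Println(out, ok) // [a B C] true
    }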

                                                                                            1. 2

                                                                                              Traditionally the value of these abstractions is that they enable code reuse. Maybe there’s an existing library that offers some Category-based interface that was originally written for e.g. topology that could do something useful when called with this “patch category”?

                                                                                              1. 2

                                                                                                The post struck me as more of a “here’s a workflow for workaday category theory” in the same vein as, say, “deploy your first rails app with docker” rather than a novel abstraction. If the former, I’m really happy to see more stuff like this– using real mathematics to solve small problems can be scary! The more resources, like this, that can be used to seduce developers into using well-defined (or, for that matter, defined at all) formalisms, the better.

                                                                                                1. 3

                                                                                                  I grant that this gave me a very good example for pushouts, which in turn makes me feel like I really know what they are. I always thought of pullbacks in terms of differential geometry, though, because that’s where I first encountered them (maths degree).

                                                                                                  What I wanted to get across, though, is that all of this category theory, at least in this case, does not seem to be something that hackers need to know to any degree of detail any more than they need to know quantum mechanics. Sure, it’s fun to know, but it doesn’t seem to translate into a better way to write code.

                                                                                                  Or maybe a Haskeller would love to prove me wrong.

                                                                                                  1. 1

I disagree!¹ And if you’ll bear with me through some particularly turgid imagery, I may be able to explain why.

                                                                                                    If you get nothing else from this, I want to emphasize that the value of knowledge is not always in the direct application of that knowledge. In fact, the following statement alone makes this worthwhile as a pedagogical tool.

                                                                                                    I grant that this gave me a very good example for pushouts, which in turn makes me feel like I really know what they are.

In particular, it’s nice to have a good reassurance that, while the terminology may still be foreign, it’s not the arduous, tooth-and-nail clamber up the rain-slicked walls of a massive, throbbing ivory tower that category and computability theory often seem like to an outside observer. In the same way that a painter does not, in general, need to know the finer points of difference between the tube colors quinacridone magenta and alizarin crimson but still benefits from knowing them, this knowledge expands the repertoire of tools you can use to effectively solve a problem. And, if you’ll permit me to be annoying for a moment:

                                                                                                    […] all of this category theory, […] does not seem to be something that hackers need to know to any degree of detail any more than they need to know quantum mechanics.

There is a surprisingly close relation between theoretical physics (such as is used in quantum mechanics) and the mighty buzzphrase Deep Learning. It’s not an immediate, blinding revelation that will help spawn the CMS you have to write, no. It is, however, a useful thing to know and keep in the back of your head until the time is appropriate.

I further take umbrage at the idea that people who write code should even aspire to be hackers. To bluntly smash my way through all subtlety and nuance, Facebook is the highest-profile company with a ‘hacker culture’ and they have some significant problems. And I pick on Facebook in particular here at the cost of ignoring the entire wasteland of poorly written, poorly specified NPM packages– there are some deeply frustrating days when I meet an aspiring software engineer here in Berkeley, only to find out that their aspirations go no further than a ‘full stack’ of Javascript, Node.js, more Javascript, MongoDB, Angular.js, and maybe Ruby on Rails. Here, even Ruby without Rails would be progress!

Too, I’m dubious of the idea that this particular nugget of category theory isn’t useful in and of itself. Consider a distributed system where you can’t afford, don’t want, or don’t need a full-blown CRDT– an 80% solution where you drop the commutativity requirement may work just as well for your data type– this patch specification is dead simple to implement and still well behaved. Where I could spend months trying to twist my data model to fit the ideas of a CvRDT, I could instead toss this simple (almost simplistic) and well-behaved idea at the problem and (maybe) patch it up by assuming that TCP is a reliable-enough transport. These are real bang-for-your-buck tradeoffs that are the lifeblood of engineering as a discipline, if not a height-weight proportional hack with a charming and bubbly personality.
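
To gesture at what I mean, here’s a toy sketch (in Go, with invented names; this is my own illustration, not anything from the post or a real CRDT library): a patch that records both its old and new values composes sequentially and inverts trivially, with no commutativity anywhere.

    package main

    import "fmt"

    // Patch is a toy, invertible edit on a key/value store. It records both
    // the old and the new value, so patches compose sequentially and every
    // patch has an inverse. Nothing here commutes: apply them in order.
    type Patch struct {
    	Key      string
    	Old, New string
    }

    // Apply checks that the current value matches Old before writing New.
    // Refusing to clobber is what keeps sequential composition well behaved.
    func (p Patch) Apply(store map[string]string) error {
    	if store[p.Key] != p.Old {
    		return fmt.Errorf("conflict on %q: have %q, expected %q", p.Key, store[p.Key], p.Old)
    	}
    	store[p.Key] = p.New
    	return nil
    }

    // Invert swaps Old and New, yielding the patch that undoes this one.
    func (p Patch) Invert() Patch {
    	return Patch{Key: p.Key, Old: p.New, New: p.Old}
    }

    func main() {
    	store := map[string]string{"greeting": "hello"}
    	p := Patch{Key: "greeting", Old: "hello", New: "hi"}

    	fmt.Println(p.Apply(store), store)          // <nil> map[greeting:hi]
    	fmt.Println(p.Invert().Apply(store), store) // <nil> map[greeting:hello]
    }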

                                                                                                    1. That said, I am not a Haskeller and I’m unclear on why that’s important.
                                                                                              1. 4

As I get deeper into learning Elixir, I find that I have trouble using macro-heavy libraries– things just feel too much like magic for me to be comfortable– and really want a tool that I can use to expand macros for the code I’m working with so I can see how the AST is transformed before being fed to the compiler. There’s Mex, which kind of does this, but it’s too tied up in the interactive shell (nowhere near as comfortable as rmate/rsub + Sublime to work in); I’d really like a mix task where I can do something like mix expand Some.Module.In.My.Project and get the expanded version tossed into stdout.

                                                                                                Of course, I also want to expand only those macros not baked in as language features or, say, from a subset of modules that I’ve pulled in via use/require/import.

                                                                                                So, shaving my way out of a yak-hole, mostly.

                                                                                                1. 2

                                                                                                  I believe Alchemist.el now supports expanding macros inline in your emacs, if you’re using that.

                                                                                                  1. 1

                                                                                                    Thanks! I’m not an emacs user, but being able to read through the lisp and see how they approach the problem is going to be very helpful on its own.

                                                                                                    Edit: It appears the lisp writes a temporary file, hands it off to Elixir, and then does +/- the same thing as mex. Time to go look into some compiler source code.

                                                                                                1. 7

                                                                                                  Howdy all!

Well, I spent last week at the beach with 2 other expat families. I managed to get a lot done anyway!

                                                                                                  Mid-term break is over and this week, back at home and trying not to go out too much after the last couple of weeks.

I had a new set of blood tests today (following up on a previous set of tests in September). Through diet control alone, I’ve made some pretty big improvements and I’m not as stressed about my health now. My heart rate is slower than it has been in 3 years, my blood pressure is lower than it has been in a few years, and overall things are improving. Next up: trying to find a cool enough time of day to want to exercise.

                                                                                                  I landed some big changes of mine for my client this week, getting a lot of forward motion with a major emscripten upgrade.

                                                                                                  And for fun, I’ve been making a lot of headway on Mindy, the Dylan interpreter. This last week also saw pull requests from a new contributor and some new people in the Gitter chat.

                                                                                                  We’re thinking about a new surface syntax and are open to discussion.

                                                                                                  I’ve also filed a bunch of issues laying out some longer-term projects.

                                                                                                  Right now, I’ve been working on updating a copy of the 1993-1994 era Apple Dylan test suite to work with Mindy and see how it does. Initial results actually look pretty good. I have to figure out still if this is something that I can push or something that I have to keep private. (As if anyone would even know the legal history…)

This week, I hope to make more progress on emscripten and Mindy, and I have some things that I need to address for Open Dylan as well.

                                                                                                  1. 4

                                                                                                    On a different note, I am interested in seeing if anyone can translate some of the mathy things in one of the bidirectional typechecking papers into English for me … at least some of them so that I can get the hang of it.

                                                                                                    1. 3

I’m not a formally trained mathemagician (or computer wizard, for that matter)– so take this summary with a grain of salt– but it looks like the authors managed to find a way to formalize dead code elimination into the type system. In the introduction they start with the Haskell sum type:

                                                                                                      data Sum : Natural_Number -> * where
                                                                                                        Left   : A -> Sum 0
                                                                                                        Right  : A -> Sum (succ n)
                                                                                                      

                                                                                                      (Converted to a tree)

                                                                                                            Natural_Number
                                                                                                              /       \
                                                                                                             0       n + 1
                                                                                                      

                                                                                                      (Converting to go-like pseudo-syntax)

                                                                                                      func Sum(n NatNum) NatNum {
                                                                                                        if n == 0 {
                                                                                                          return 0
                                                                                                        }
                                                                                                        return n + 1
                                                                                                      }
                                                                                                      

(I may be losing some of the finer details in the logic transform there– i.e., golang doesn’t have sum types, so I’m faking it with a branching function here.)

                                                                                                      And, using that definition, say they can define the following function:

                                                                                                      fn : Sum 0 -> A
                                                                                                      fn (Left a) = a
                                                                                                      

                                                                                                      (Or, again in pseudo-go, totally ignoring pattern matching)

                                                                                                      Sum(a) == a
                                                                                                      

The authors claim it’s obvious to programmers that the other branch of the sum type will never be evaluated, because there’s no scenario in which Sum(a) == a where a != 0. However:

                                                                                                      …language designers and implementors will have more questions. First, how can [they] implement such a type system? […] Second, designers of functional languages are accustomed to the benefits of the Curry-Howard correspondence¹, and expect to see a logical reading of type systems to accompany the operational reading.

The authors then go on through the rest of the paper to build a formal type system that can express this idea of what looks to me like dead code elimination, using a whole bunch of math I’m not trained to check. It looks really cool, in general– I’ve only been snuggling up with Curry and Howard for the last six months or so.

                                                                                                      1. i.e., that proofs are programs– pegging programming as a branch of applied mathematics. For more information, Philip Wadler gave a great presentation related to this idea at this year’s Strange Loop conference.
                                                                                                      1. 2

                                                                                                        Thanks!

                                                                                                        That one was fine … It is the ones after that, starting with equality & reflexivity and going to the end of the paper that bother me. :)

                                                                                                        I understand most of what the prose is saying so far, but feel like I’m missing things due to not being able to read the equations. (I almost certainly am missing things…)

                                                                                                        1. 3

                                                                                                          Oh! Yeah, same here. That’s where my math experience breaks down at the moment. I rewatched the Wadler talk linked in the original post when I woke up this morning– he summarizes the development of computability theory, a couple of logic systems (including the sequent calculus in this paper), and the idea of ‘elimination rules’, which changes my assessment from “mystery math” to “I don’t know the syntax for this programming language” when looking at the more dense proof structures.

If you’re looking to really understand the mathematical portions, I’d highly recommend watching that talk and then checking out sequent calculus as a stepping-off point.

                                                                                                  1. 10

My experiment in minimal syntax highlighting will continue! Day 5 so far, and I am starting to feel some benefits and weep less. I initially planned to have no syntax highlighting, but walked back from that a bit, allowing myself 4 general groupings: errors, code, comments and strings, which currently looks like -> http://i.imgur.com/6ZVKRRT.png (after taking the screenshot, I removed underlines from strings).

                                                                                                    1. 2

Hmm, I like this. I’m always annoyed when there are one or two keywords that aren’t being detected, and I like minimalism anyway, so I might try this.

                                                                                                      1. 5

It isn’t just to avoid annoyance, it has a cognitive basis as well (in theory– I am not a cognitive psychologist, but I have spoken to one about this)! When you use syntax highlighting you suffer a mild form of the Stroop effect (https://en.wikipedia.org/wiki/Stroop_effect). Additionally, the way our eyes are drawn to colors means you often jump around code in a non-linear fashion. Beyond those two impacts, you are also spending cognitive energy (which is finite) on item-specific processing rather than organizational processing. Item-specific processing is trying to understand what a single item does, and storing that information. Organizational processing is about relating items, concepts, and document flow.

                                                                                                        The theory (completely unproven / conjecture / insert warning here) is that by reducing cognitive overhead that has little value, you can better reason about code. The first few days were very tough, but it got much easier on day 3. I initially planned to do just a 7-day experiment, but I will likely be extending it now, as I am finding it very helpful.

                                                                                                        I was initially inspired to look into this when I noticed that a lot of great developers I admire use editors that don’t even support syntax highlighting (acme, micro emacs).

                                                                                                          1. 1

                                                                                                            Thanks! I’ll see if I can do some kind of solarized port of this.

                                                                                                            1. 1

                                                                                                              Still really rough and being tweaked as I find things that look horrible (like :hlsearch did on the dark background). Drop a link in here when you make a solarized version of it; I’d love to see it.

                                                                                                              1. 2

                                                                                                                I just created a new branch for now; I’ll see if I spin it off into its own repo. I just realized that VimL doesn’t accept variables as color names (unless I’m doing something wrong): https://github.com/Superpat/nofrils/tree/solarized

                                                                                                                Edit: It works now, though the terminal colors are a little wonky; the GUI ones are great!
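
                                                                                                                For anyone else who hits this: :highlight takes its arguments literally, so the usual trick is to build the command as a string and run it with :execute. Something like the following (the hex value is just Solarized’s base01, used here as an example):

                                                                                                                    " This does NOT work: :highlight treats 's:base01' as a literal color name
                                                                                                                    "   let s:base01 = '#586e75'
                                                                                                                    "   highlight Comment guifg=s:base01

                                                                                                                    " Building the command as a string and running it with :execute does work
                                                                                                                    let s:base01 = '#586e75'
                                                                                                                    execute 'highlight Comment guifg=' . s:base01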

                                                                                                          2. 2

                                                                                                              I’m… unclear on how the Stroop effect applies to syntax highlighting, but this looks like a fascinating area to look into. Do you happen to have any published or preprint research showing these effects?

                                                                                                            1. 1

                                                                                                                It is my understanding that there are a few ongoing studies right now with decent sample sizes (specific to coding, not just color-coding verbs, nouns, and so forth, which has been studied since the 60s). The only completed study I am aware of is https://www.cl.cam.ac.uk/~as2006/files/sarkar_2015_syntax_colouring.pdf, which found syntax highlighting to be beneficial. I have issues with its sample size and focus (trivial tasks), and even there the benefit of syntax highlighting decreased with expertise.

                                                                                                                As for the mild form of the Stroop effect, I was mainly referring to the fact that different parts of your brain process color vs. semantics, and to the theories section of that wiki page, which goes into various ideas for why the Stroop effect might happen. If you think about those processes happening AS you read code, it is very interesting.

                                                                                                              1. 1

                                                                                                                  Interesting! It’d be nice to see more results on this; my personal anecdote is that I find syntax highlighting really important for orienting myself quickly, but the last time I seriously went without it was way back when I was learning to program Lua in… Windows Notepad.

                                                                                                                So the information may be out of date :V

                                                                                                                1. 1

                                                                                                                  I find syntax highlighting really important for orienting myself quickly

                                                                                                                    I actually found that the most benefit I have gotten is on unfamiliar code, which shocked me. Basically, without syntax highlighting you really read the code without hopping around, and for me that means I miss a lot less and can hold a lot more code in my head, letting me get up to speed more quickly. This was without a doubt the biggest improvement I have seen thus far. It is almost as if syntax highlighting puts me into a scan mode, where I am trying to get the gist as quickly as possible; with it toned down, I am actually reading the code. It turns out that taking that little extra time up front makes me a lot faster overall.

                                                                                                                    Up until this week I had NEVER written code without syntax highlighting on. All the way back to 1993 and Borland C++, syntax highlighting has been a constant… so much so that I never questioned its value or whether it was actually helping me; plus, I thought it looked fantastic. That said, this is all personal anecdote, and lots of things, from the honeymoon period to the placebo effect to the IKEA effect, could be impacting my perspective.

                                                                                                                    Also worth noting (as I have gotten a few confused messages): I only gave up syntax highlighting (and really, I still have some, I just nofrils’d it). I still have tags, jumping around, autocomplete, etc., all the modern editor features. This is simply a reduction in visual noise.

                                                                                                                  1. 1

                                                                                                                      Haha, you mention tags/jumping around/autocomplete; I don’t find those helpful at all! :V

                                                                                                                    (Although regex finds and multiple cursors are real nice)

                                                                                                                      Your point about reading unhighlighted foreign code is super interesting. That’s significantly less investment than dropping highlighting out of the text editor entirely. And, well, when drawing it often helps to turn your work upside down so you can check it with fresh eyes; toggling syntax highlighting on and off while working on a codebase may have a similar effect. Lord knows it’d be nice to have an effective way to kick the eyes out of their habits and actually double-check.
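
                                                                                                                      A sketch of what that toggle might look like in Vim, assuming you just want a single key for it (the <leader>ts mapping is an arbitrary choice):

                                                                                                                          " Toggle syntax highlighting on and off.
                                                                                                                          " :syntax enable sets g:syntax_on, so its existence tells us the current state.
                                                                                                                          nnoremap <leader>ts :if exists('g:syntax_on') <Bar> syntax off <Bar> else <Bar> syntax enable <Bar> endif<CR>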

                                                                                                                    1. 1

                                                                                                                        That might be the case, but I was basically useless for a day after making the switch. Maybe it would work if you toggled regularly, but I completely lost my footing for about a day and didn’t start to see real gains until 4 days in.

                                                                                                      1. 2

                                                                                                        Unicode has what? Combining characters to render all possible families (requiring colour displays, no doubt), flag rendering depending on global geopolitical state and XML specifications for formatting dates? The Unicode Consortium is certifiably insane. What next, rotating 3D glyphs with precise rotational direction and speed specified by combining characters? Man, I just wanted a unified encoding for all earthly languages!

                                                                                                        1. 1

                                                                                                          I am excited for when the Unicode Consortium finally comes up with a way for me to express myself through the language of customizable 3D emojis.

                                                                                                          You can only say so much with the skin color glyph, you know?

                                                                                                          1. 1

                                                                                                            I like your last line. :)