1. 33

    I’m not a huge fan of the article, but I do see the point they try to make. I’d say most tech content is badly contextualized and motivated.

    I’ll give my favourite example: Docker scale talks. The worst one (and I’ve seen a couple) was a very good talk by a Google developer on how they allocate and run seven-figure numbers of Docker containers in their ecosystem. Super interesting talk, given at a conference that wanted to be “the first conference for beginner developers”. The amount of time I spent during the breaks talking to people who absolutely wanted to try all that out, telling them that they are not Google, was terrible. Good content, though. But none of the audience will ever need to apply it unless they work at GCE, AWS or Azure.

    Similarly, I’ve seen a talk by Google SRE “for all audiences” that started with setting up an SRE team of at least 3 people in all major timezones. PHEW.

    My takeaway from my years in this industry is how bad this industry is at evaluating solutions and budgeting how many solutions you want to have in a product. There’s value in restraint.

    Still, all effective developers I know first reach for the internet for inspiration, and are incredibly good at establishing that context by themselves.

    1. 13

      My takeaway from my years in this industry is how bad this industry is at evaluating solutions and budgeting how many solutions you want to have in a product. There’s value in restraint.

      I think this is because most of the industry has never needed to manage spending in a company. In addition, most of us never got training or classes on this.

      You mention it’s hard to know which technologies match a company’s needs, but that’s also true for finance. Sometimes, spending $200k on prototyping is a drop in the ocean compared to the expected ROI, whereas sometimes $50 of EC2 per day is a lot, and it’s not always easy to grasp the difference if you never studied economics.

      Maybe that’s something that’s missing in the space, like “finance for IT teams”, that goes beyond the basics of direct ROI and depreciation.

      1. 4

        I fully agree with you there! That’s also why I don’t want to put that on individual engineers or even teams. This behaviour is rarely asked for, so the skill stays underdeveloped.

      2. 4

        As far as I can tell, the problem “at scale” is much worse (and considerably deeper) than the article can tell. It’s not just that industry is ridden with superstitious gossip; the “science” that it supposedly rests on is too. Logic, empiricism, skepticism, statistical literacy, and historical awareness all can help… but ultimately there’s no easy way out of this bad situation, because the objects of our study are themselves almost entirely artificial. Lies repeated often enough and loudly enough can indeed become truths. So can less deliberate falsehoods, and of course the many kinds of claims and assumptions which cannot easily be assigned a truth-value. It’s not a special problem unique to our special field, either; brave people in economics and the social sciences have had to face this lack of foundations for longer than “computer science” has even existed.

        Here’s a practical book and a much more theoretical one. Both are well worth reading. Good luck out there!

        1. 4

          The amount of time I spent during the breaks talking to people who absolutely wanted to try all that out, telling them that they are not Google, was terrible. Good content, though. But none of the audience will ever need to apply it unless they work at GCE, AWS or Azure.

          I will never ever need to excavate the side of a mountain but I would still attend a talk about how someone built this monster:

          https://www.youtube.com/watch?v=i0QUDtqJwGQ

          I feel playing thought police at a conference fits in the “ego driven content creation” we’re discussing here. Just relax and let other folks enjoy what they like.

          1. 10

            The amount of time I spent during the breaks talking to people who absolutely wanted to try all that out, telling them that they are not Google, was terrible. Good content, though. But none of the audience will ever need to apply it unless they work at GCE, AWS or Azure.

            I feel playing thought police at a conference fits in the “ego driven content creation” we’re discussing here.

            You are misusing (and watering down) the term “thought police”. (You aren’t alone.)

            In George Orwell’s 1949 dystopian novel Nineteen Eighty-Four, the Thought Police (Thinkpol) are the secret police of the superstate Oceania, who discover and punish thoughtcrime, personal and political thoughts unapproved by the government. The Thinkpol use criminal psychology and omnipresent surveillance via informers, telescreens, cameras, and microphones, to search for and find, monitor and arrest all citizens of Oceania who would commit thoughtcrime in challenge to the status quo authority of the Party and the regime of Big Brother. - Wikipedia

            Engaging in open discussion – even if challenging or critical – with others at a conference is nothing close to being the “thought police”. It doesn’t match any aspect of the description above.

            1. 2

              I’d like to redub some enterprise software tutorial audio over this digging video. https://www.youtube.com/watch?v=PH-2FfFD2PU

          1. 8

            I expected a technical article about the fate of superior choices, and I was disappointed to find the usual bull.

            The likes of AmigaOS and BeOS advanced the state of the art. Inferior solutions such as Windows, MacOS and later OSX were the ones most adopted.

            Now we have seL4, but it’s the same story; Technical superiority means squat. That’s the problem with software. Dumb people with decision power. Otherwise-smart people putting up with them. And we have a lot of that. It’s a wonder progress is ever made despite this fact.

            1. 23

              Maybe we have to look to economic and political factors to understand why Windows and Mac won. We shouldn’t retreat into our techie bubble and pretend those things don’t matter.

              1. 4

                Pretending they do matter is at the heart of the problem.

                Imagine if, rather than going with the flow, we used our brains and did what was right, put effort where it matters.

                There’s nothing sadder than seeing otherwise capable individuals wasting their lives by pursuing the wrong endeavors, just because they are popular.

                If anything, what is severely lacking in society is the ability to take a step back and think, as opposed to following the flow. The few people capable of it are often the ones who end up making a difference.

                1. 14

                  There’s nothing sadder than seeing otherwise capable individuals wasting their lives by pursuing the wrong endeavors, just because they are popular.

                  I suppose you would consider me guilty of that. I work at Microsoft, on the Windows accessibility team. The work I do benefits not only Microsoft’s bottom line (somewhat), but potentially millions of people. Would it be better if I implemented a screen reader for AROS, or Haiku, or some OS based on seL4? I could design a beautiful new accessibility architecture, possibly technically superior to anything currently out there. (Then again, I’m probably not actually that brilliant.) But who would it help? That, to me, would be a waste of my time.

                  Of course, this is all ignoring the fact that I probably couldn’t get paid to work on one of those obscure OSes anyway. It would have to be a volunteer side project, and some problems are just too big to solve on nights and weekends.

                  1. 5

                    The work I do benefits not only Microsoft’s bottom line (somewhat), but potentially millions of people.

                    Microsoft customers, perhaps. Windows is unfortunately not open source as per OSI. It doesn’t count as work for humanity.

                    Those who can do paid work on actually worthy projects are few and far between.

                    I am myself an AWS drone. I do whatever I want in my free time, and I do get paid well, accelerating me towards not having to work at all (so-called FIRE), which will free me to do whatever I want, full time.

                    or some OS based on seL4?

                    By all means. You’d be proudly at the forefront of computing, advancing the state of the art.

                    1. 6

                      Windows is unfortunately not open source as per OSI. It doesn’t count as work for humanity.

                      Open source is fantastic but I wouldn’t ignore closed source software like that.

                      1. 2

                        I do not ignore it. I recognize it, but due to its closed nature (source or license), it is prevented from benefiting mankind as a whole.

                        1. 5

                          The problem with that statement is that it isn’t, in any way, true. In fact, it’s downright hard to do any kind of creative thing without benefiting mankind as a whole.

                          1. 1

                            Please define what you mean by ‘benefiting mankind’.

                            1. 1

                              In this context, it was qualified “as a whole” and meant nothing more than not being restricted to Microsoft clients.

                              Of course, give it enough time and if an idea has worth, it will be replicated.

                              1. 1

                                Of course, give it enough time and if an idea has worth, it will be replicated.

                                Are you sure of this? I’m not.

                                Since this statement is conditioned on “give it enough time”, as it stands, it is untestable.

                        2. 1

                          Those who can do paid work on actually worthy projects are few and far between.

                          By your definition of ‘worthy’ or theirs?

                          Do you have a philosophical stance on https://en.m.wikipedia.org/wiki/Moral_relativism ?

                          1. 1

                            By your definition of ‘worthy’ or theirs?

                            By theirs. Only a few fortunate people feel their job is worth doing, money aside. This impression is based on the views my network of acquaintances have on their jobs, and restricted to computer science graduates.

                            Do you have a philosophical stance on https://en.m.wikipedia.org/wiki/Moral_relativism ?

                            This is a dangerous topic I’ll respectfully decline to comment on.

                            1. 3

                              By theirs. Only a few fortunate people feel their job is worth doing, money aside. This impression is based on the views my network of acquaintances have on their jobs, and restricted to computer science graduates.

                              Then it may surprise you to learn that I do believe my job is worth doing, money aside, even though I’m working for Microsoft on Windows. It’s true that my work on Windows accessibility is only available to Microsoft customers and their end-users (e.g. employees or students). But that’s still a lot of people that my work benefits.

                              1. 1

                                What can I say, but congrats for working on a job you feel worth doing.

                              2. 2

                                I don’t think I get how you or they are defining worth. Can you explain more deeply?

                                Some example guesses based on people I know:

                                • If someone meant they wouldn’t do their job if they weren’t paid for it, that would hardly be a surprise. :)

                                • Or perhaps ‘worth’ is meant as a catch-all for job satisfaction?

                                • If someone said their job is to make system X be more efficient, but finds this to ‘lack worth’, perhaps they would like to see more direct results?

                                • If someone says their job is not ‘worth’ doing, perhaps they mean they hoped for better for themselves?

                                • Perhaps someone prioritized pay or experience in the near term as a means to an end, meaning some broader notion of ‘worth’ was not factored in.

                                • Impact aside, some jobs feel draining, demotivating, or worse.

                                • Some jobs feel like backwaters that still exist for historical reasons but add little value to the organization or customers.

                                1. 2

                                  If someone meant they wouldn’t do their job if they weren’t paid for it, that would hardly be a surprise. :)

                                  That one. And yes, I am not joking.

                                  I otherwise see working as a losing proposition, as no amount of pay is actually worth not doing whatever you want with your time, which is limited.

                                  1. 1

                                    I otherwise see working as a losing proposition, as no amount of pay is actually worth not doing whatever you want with your time, which is limited.

                                    I’m not sure how to parse the sentence above. With regards to “otherwise” in particular: Do you mean that work (without money) “is a losing proposition”? And/or do you mean “generally, across the board”… you should simply do what you want with your time? And/or something else?

                                    How do you respond to the following claim?… To the extent work helps you earn money to provide for your basic human needs and wants, it serves a purpose. This purpose gives it worth.

                                    I’m trying to dig in and understand if there is a deeper philosophy at work here that explains your point of view.

                        3. 12

                          I agree that it kind of sucks, but it will always be humans using and developing software, and we cannot expect humans to be rational. We are social beings and we have feelings, and things like popularity matter, whether we like it or not.

                          You’re right that blindly following the flow is what got us into this mess. But as technologists we need to understand the humans and politics behind these decisions so that we can create our own flows for the technically superior solutions.

                          1. 1

                            In context (e.g. day-to-day work, especially in systems regarding human safety), we do want to build better technical solutions because we want them to be more reliable, which means they fail less often and do less damage to people.

                            Some of us also want better technical solutions because they make these systems more flexible in adapting to new contexts, which (hopefully) means less money and time spent rebuilding half-baked systems, which is, let’s face it, not the kind of work that many of us are hoping for.

                            Now, for a broader claim: narrowly ‘technically-superior’ solutions in the service of immoral aims are not something we should be striving for.

                      2. 8

                        I really doubt it’s just “dumb people with decision power”. It’s mostly the users.

                        There is a concept called “bike-shedding”, with a classic example: if you discuss, with a group of people, plans for building a nuclear power plant and plans for building a bike shed, people will discuss the bike shed a lot more, because that is what everyone understands. This same concept applies to almost everything. Take books: the most popular books are really “dumb”. Everyone can read and understand those, and they become popular.

                        I think the same concept transfers to the software world. We have what we have because this is what won the “so dumb, everyone can use it” race.

                        1. 2

                          I think the same concept transfers to the software world. We have what we have because this is what won the “so dumb, everyone can use it” race.

                          And it’s still based on misconceptions, unfortunately.

                          For instance, it’s pretty well accepted that concepts of modularity make programming easier, not harder. Concepts such as abstraction (as in the abstraction of the implementation behind an interface), or isolation (user processes run sandboxed with the help of mechanisms such as pagetables).

                          However, when it comes to microkernel, multiserver operating systems, people have trouble with the idea that they are actually more tractable, rather than less. They’ll defend monoliths, even when they’re Linux-tier clusterfucks with little in terms of internal structure.

                          At times, it seems hopeless.

                          1. 2

                            Not every abstraction turns out to be helpful. Sometimes they just make it impossible to figure out what’s going on.

                            1. 1

                              Absolutely.

                              But it’s hard to argue that no structure (chaos) is better than structure.

                              1. 2

                                I’ll take code that only uses simple, known-good abstractions (eg structured control, lack of global state) but is otherwise chaotic (eg code duplication with small modifications etc) over code that applies the wrong abstractions any day.

                                1. 2

                                  For chaotic, try and trace function calls within the Linux kernel.

                                  1. 3

                                    That’s exactly the sort of thing I’m talking about - messy, but tractable with static analysis tooling. It’s a hard slog, but you can clearly see how much of a slog there is within a couple of hours investigation.

                                    Compare that to my daily driver - large rails apps. Not only are static analysis tools unable to follow the control graph, but the use of templated strings to find method names means you can’t even use grep to identify a given symbol.

                                    Sometimes there’s no quicker way to figure out what, if anything, uses a given method than to read 100k lines of ruby source. There’s frequently no quicker way to figure out where a method call goes than running it in a debugger.

                                    1. 1

                                      As the kernel runs in supervisor mode, I’d really prefer if it were very clearly structured, and the execution flows going through it were obvious and didn’t require running it in a debugger.

                        2. 5

                          I met the developers of seL4. It’s a tool intended for a very specific set of use cases, mostly embedded systems and military tech. It’s not intended to be a replacement for Windows/Mac/Linux and is not at all the “same story”.

                          1. 2

                            That’s not what they originally advertised, though. Originally, it was one of many L4-centric projects that would be used as a foundation for desktop, mobile, and embedded applications. Nizza, Perseus, Genode, OKL4, INTEGRITY-178B, LynxSecure, VxWorks MILS, etc. are all examples which did desktops by putting a Linux VM on top of the kernel. The seL4 kernel happened to have x86 support for that, but the initial efforts focused on ARM.

                            I guess they realized the difficulty, both technical and marketing-wise, of doing a secure workstation for x86. Most verification funding was also going toward embedded, IoT, and military work. A military company bought OK Labs. It looks like they have pivoted for now to focus entirely on those areas, building out their component architecture. They even changed their website to talk only about these things; the NICTA website talked about the things in this comment.

                            It’s probably a good move, given that the software requirements are simpler. They’ll be more likely to succeed.

                            1. 3

                              Gernot has said that that verification of the multicore kernel is very costly and no individual client is willing to foot the bill. They lost governmental funding and (AFAICT) their primary funding sources are from defense. So yeah, they would like to expand beyond embedded controllers for the military (high assurance VMM for Amazon or something) but no one cares about security enough.

                              1. 2

                                I didn’t know that. I wonder if it means funding authorities don’t care about security, or just don’t care to fund that project. The seL4 kernel is a simplified kernel verified using ultra-slow, ultra-costly techniques.

                                They might want to fund methods with higher productivity and/or applicability to existing systems. Most of the market still won’t buy whatever it is, though. A combo of developer, market, and defense apathy is why I’m doing far less security research than before.

                            2. 2

                              This is what it is, currently. Doing whatever necessary to get embedded applications running is absolutely a much simpler scenario than a workstation one, and currently very realistic; They do have the examples to point to.

                              But there’s nothing stopping it from going further. Genode’s Sculpt manages to demonstrate this really well.

                            3. 2

                              Emotionally, I think I know where you are coming from.

                              However, there isn’t a strong argument here. Some problems I see:

                              • There is not just one problem with software.
                              • You don’t explain what you mean by ‘dumb’; it comes across as an amorphous insult.
                              • There are many kinds of intelligence.
                              1. 2

                                Dumb was an unfortunate choice of words. Technically illiterate would have perhaps worked, but there’s a component of closed-mindedness or unwillingness to consider alternatives.

                                The background of the post is having seen people who are otherwise intelligent and capable put up with terrible decisions from above, and be unhappy as a result. It is often better to stand up for your beliefs (if you’re actually sure) and oppose these decisions. Doing so allows for an “I told you so.” If the company’s climate doesn’t allow even that much, I’d suggest finding another job.

                              2. 2

                                The likes of AmigaOS and BeOS advanced the state of the art. Inferior solutions such as Windows, MacOS and later OSX were the ones most adopted.

                                Technical superiority has nothing to do with the success of a platform. User experience is the ultimate arbiter in this case. MacOS has better UX than most operating systems. Windows has better UX than Linux or seL4 for a p50 user (example: my mother). People are not dumb to choose Windows or MacOS over Linux / seL4, they simply go for the better UX. If you want to create a superior platform, it has to start with superior UX, everything else is secondary.

                                1. 3

                                  Windows was always in the bottom league when it came to UX; it became a winner because it had guaranteed backwards compatibility with an even worse system: MS-DOS.

                                  1. 2

                                    And monopoly tactics. Similar story for IBM vs better-designed mainframes such as B5000.

                                  2. 1

                                    Technical superiority has nothing to do with the success of a platform.

                                    Do you mean ‘is less important’ rather than ‘has nothing to do with’?

                                    If you really mean ‘has nothing to do with’ you have the burden of proving a negative.

                                    A negative claim is a colloquialism for an affirmative claim that asserts the non-existence or exclusion of something. The difference with a positive claim is that it takes only a single example to demonstrate such a positive assertion (“there is a chair in this room,” requires pointing to a single chair), while the inability to give examples demonstrates that the speaker has not yet found or noticed examples rather than demonstrates that no examples exist - Wikipedia: https://en.m.wikipedia.org/wiki/Burden_of_proof_(philosophy)#Proving_a_negative

                                    1. 2

                                      OK, I’ll bite.

                                      • VMS was technically superior to Unix -> Unix won
                                      • OS/2 was technically superior to DOS -> DOS won

                                      It appears to me that technical superiority does not have anything to do with how successful a platform is. Feel free to prove me wrong.

                                      1. 3

                                        I’d argue compatibility is the primary driver of success.

                                        I can run windows 95 games on windows 10; I can open an excel document from 1995 today. Getting PHP or java code from 20 years ago to run is typically no big deal, and that’s a large part of their dominance in their respective niches.

                                        1. 1

                                          The way you are making the claim is oversimplified.

                                          You’ve also shifted your language from ‘success of a platform’ to a notion of ‘winning’. But it raises the question ‘over what timeframe’? These platforms are not static, either.

                                          For example, a big reason that Windows has remained a force (relative to competitors) is that it has improved its underpinnings over time.

                                          Proving a negative is often a waste of time unless you are working with precise definitions and deductive reasoning.

                                          Let me suggest your time would be better spent by clarifying what you mean rather than making absolute statements.

                                          Or maybe you want to write a thesis showing every software platform and demonstrating that in every case, technical aspects played no role in their evolution and success across various time scales? If so, go for it. :P (Be careful not to cherry pick the time scale to suit your argument. Or leave out examples that don’t fit.)

                                          I’m trying to explain why oversimplified forms of argument are not very useful to me. My goal is to understand how these factors relate not only in the past but also in the future.

                                          Your version of your argument in your head may be useful to you in some sense, but the way you’re stating it is way too blunt. I think by adding some nuance, your mental model of the situation will improve. I intend this to be taken in the spirit of constructive criticism.

                                  1. 2

                                    I find the author is looking in the wrong place. There are a lot of different types of software, but three types form a large majority: control (and data acquisition) systems; simulations; and systems featuring the recording of transactions for commercial or legal reasons.

                                    What they have in common is that they are all about modeling some system, real or imagined, in computer code. These include the systems that manage your money, taxes, insurance, health, utilities, environment, purchasing, supply, services, education and reservations. They include the systems that operate your elevator, reticulation system, production line, or automated warehouse.

                                    What’s wrong with software is that programmers have not been able to adopt and teach a predictable and repeatable way to model into code. As Hertz recently learned with its failed Accenture-built systems, building high-cost complex systems is a dangerous and uncertain undertaking. What progress have we made?

                                    Capital in this context is about the “ownership” of resources, which might include a decision to automate some process that could instead be performed given enough human resources and means of communication. Capital and the production of speculative undertakings featuring software are not where software “went wrong”. Instead, software might be a mere detail in the relationship between capital and a speculative endeavor of any kind?

                                    1. 1

                                      Would you please define what you mean by ‘model’?

                                      1. 1

                                        There are many definitions of model. The closest one I can think of that matches your usage is:

                                        ‘A schematic description or representation of something, especially a system or phenomenon, that accounts for its properties and is used to study its characteristics.’ (Source: Wordnik)

                                        While this definition is consistent with portions of the three types of software you describe (‘control (and data acquisition) systems; simulations; systems featuring the recording of transactions for commercial or legal reasons.’), it does not cover all aspects of them.

                                        For parts of software systems that interact with humans, there is more than just modeling in the above sense. Of course, there are decisions about what to model. Even more broadly, there is design about the human-computer-interaction: e.g. what is shown to the human (and when), what mechanisms exist for human input.

                                        I would suggest that most/all software systems (ultimately) are to some degree coupled with humans, and thus have designs (implicit or explicit) about their context, which may touch on any or all aspects of human existence.

                                        1. 1

                                          You make a good point that modeling is a hard problem in software.

                                          However, when you write ‘What’s wrong with software is [one thing]’, you commit the ‘fallacy of the single cause.’

                                          The fallacy of the single cause, also known as complex cause, causal oversimplification, causal reductionism, and reduction fallacy, is a fallacy of questionable cause that occurs when it is assumed that there is a single, simple cause of an outcome when in reality it may have been caused by a number of only jointly sufficient causes. -Wikipedia

                                          1. 1

                                            Instead software might be a mere detail in the relationship between capital and a speculative endeavor of any kind?

                                            Many economists view technology (generally) in this kind of way. See https://en.m.wikipedia.org/wiki/Technical_progress_(economics)

                                            Saying ‘mere’ implies a value judgement, however. Is software worth understanding in detail? It depends on where you stand.

                                            1. 1

                                              Correction: ‘many economists view technology in this kind of way’

                                            2. 1

                                              What progress have we made?

                                              Is this a rhetorical question?

                                              If you are interested in how much modeling in software has progressed, please dig in and share what you find.

                                            1. 17

                                              Great talk, from a great speaker. The central point is very old and very general, and should be familiar to anyone with a little exposure to the humanities… so, not everybody. That is: “world-structuring beliefs are very powerful” even (especially!) when they are unconscious. The process of becoming more conscious of your own episteme will make you a better programmer, and probably a better person too.

                                              But, regarding types and tests, I think he gives property-based testing short shrift. When he says “QuickCheck just makes more examples”, he ignores that the examples are generated by a formula, which is itself a specification or description of the overall shape of the correctness regions in his little pictures. The test generation process is just sampling within that region. So, PBT lets you use all the expressiveness of your language of choice to specify correct behavior, and brings in some statistical methods to boot. You don’t need a fancy type system to use PBT. (I have nothing against type checking! It’s great, you should use it where and when you can.) Moreover, you can pay a little more and check the very same properties at run time, and now they’re called “contracts”. There’s a strong connection with fuzzing techniques too, which have proven themselves very effective at finding real-world bugs. The difference between point-wise unit tests and PBT is so big that I think it constitutes a paradigm shift in itself. No, it’s not everything, but it’s a really good thing for software correctness, and much easier to use than many other powerful techniques.
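                                              To make the “generated by a formula” point concrete, here is a hand-rolled sketch of the idea in plain Python (not real QuickCheck or Hypothesis; the generator and helper names are made up for illustration). The generator function is the “formula” describing the input region, and the runner just samples within it:

                                              ```python
                                              import random

                                              def lists_of_ints(rng, max_len=20):
                                                  """The 'formula': describes the shape of valid inputs
                                                  (lists of up to max_len ints in [-1000, 1000))."""
                                                  n = rng.randrange(max_len)
                                                  return [rng.randrange(-1000, 1000) for _ in range(n)]

                                              def for_all(gen, prop, trials=200, seed=0):
                                                  """Sample the region described by `gen` and check `prop`
                                                  on each sample; raise on the first counterexample."""
                                                  rng = random.Random(seed)
                                                  for _ in range(trials):
                                                      xs = gen(rng)
                                                      assert prop(xs), f"property failed for {xs!r}"

                                              # Properties of sorting, stated once, checked on many samples:
                                              for_all(lists_of_ints, lambda xs: sorted(sorted(xs)) == sorted(xs))
                                              for_all(lists_of_ints,
                                                      lambda xs: sorted(xs, reverse=True) == list(reversed(sorted(xs))))
                                              ```

                                              Real PBT libraries add shrinking (minimizing counterexamples) and far richer generators, but the core loop really is this small.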

                                              While I’m up here on my soap box, philosophy is extra credit, but every CS curriculum and boot camp really should include at least a basic stats course. If you missed yours, get remediated as soon as you can! We all need at least a modicum of statistical literacy to have a hope of actually doing empirical science, without which we’re stuck in essentially religious modes of thought.

                                              1. 1

                                                (meta-comment) I would like to respond to each of your (excellent) points/paragraphs separately. I am interested in finding others in the lobsters community who might like more granularity. I think many of us might have some technical/UI/moderation ideas on how to make this easier. Any pointers on how I can get this ball rolling?

                                              1. 4

                                                Gary Bernhardt is easily the most charismatic speaker in our industry; his way of presenting is very engaging. I had not seen this one, thank you for posting it.

                                                Since this is 5 years old, and he doesn’t posit a “fix” for ideology, simply awareness: is there a way of asking yourself whether there is an underlying unknown assumption to your belief?

                                                1. 6

                                                  I don’t think it makes much sense to look for a “fix” for ideology; everyone has one. If you think you don’t have an ideology, that just means you’re not aware of it (there is a great expansion of this idea in “The Pervert’s Guide to Ideology” mentioned above). The key is, as you say, being aware of how it affects your opinions, and that’s a matter of making a habit of questioning your assumptions, just as you would do as part of the scientific method.

                                                  1. 2

                                                    In my experience, some useful ways to “unpack” one’s motivations include:

                                                    1. retrospective: Think about some particular past decisions and your thought processes behind them.

                                                    2. active: In the context of a decision you want to make, write down your thinking, motivations, emotions, and reasons about the decision.

                                                    3. proactive: As a future-oriented thought experiment, visualize a particular decision you need to make. Think about how you will respond. Write down your thinking, rationale, emotions, and so on.

                                                    This can apply to many kinds of decisions: personal, financial, technical, organizational, and so on.

                                                    All of these are just a starting point.

                                                  1. 3

                                                    They’ll automatically link to each other, here’s a recent example. Why add points? There’s some non-obvious complexity here in how stories are sorted that’d make this fairly complicated. Would you expect to see comments together like a merged story, or on separate pages?

                                                    1. 2

                                                      In response to: “Would you expect to see comments together like a merged story, or on separate pages?” Great question… I’ll start by mentioning two ways (of many!):

                                                      1. Combine comments, weighted by upvotes
                                                      • Design rationales:
                                                        • It is important to aggregate comments across both the original submission and resubmissions.
                                                        • Focus discussion around what people have already found valuable.
                                                          • counterpoint: A particular comment tree is a path-dependent construct. Perhaps it is more interesting to ‘shake it up’ from time to time. See ‘fresh takes’, below.
                                                      2. Separate comment pages, one for each resubmission.
                                                      • Design rationales:
                                                        • It is important to give each resubmission its own ‘space’.
                                                          • The idea being that ‘uprooting’ comments around any particular submission and mixing them would lose important context.
                                                        • Promote fresh takes / interpretations of resubmissions.
                                                          • Encourage different ways of slicing/dicing/framing discussions.

                                                      I don’t really feel ‘pulled’ by either. I think listing more alternatives and unpacking the rationale will likely lead to better questions and design criteria.
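                                                      For what it’s worth, option 1 is mechanically simple. A minimal sketch in Python, assuming a hypothetical representation where each submission’s comment page is just a list of dicts with an `upvotes` field (none of these names come from the actual Lobsters codebase):

                                                      ```python
                                                      from itertools import chain

                                                      def merge_comment_pages(pages):
                                                          """Option 1 sketch: pool the comments from every
                                                          (re)submission's page and order the combined list
                                                          by upvotes, highest first."""
                                                          return sorted(chain.from_iterable(pages),
                                                                        key=lambda c: c["upvotes"],
                                                                        reverse=True)

                                                      original = [{"upvotes": 12, "text": "great writeup"},
                                                                  {"upvotes": 3, "text": "nit: typo"}]
                                                      resubmission = [{"upvotes": 7, "text": "fresh take"}]
                                                      merged = merge_comment_pages([original, resubmission])
                                                      ```

                                                      The hard part isn’t the merge itself but the design questions above: whether flattening comment trees loses the context each thread grew in.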