Threads for hwayne

  1.  

    Reading this guy’s website was a trip.

    1. 2

      We can also map the + and * operators to versions operating on the abstract domain

      Does this count as a redundant mapping?

      1. 1

        Nope.

           {-} + {-} = {-}
           {-} * {-} = {+}
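        As a concrete sketch (the Python and all names here are mine, not from the article), the abstraction mapping and the abstract operators might look like:

```python
# A sketch of the sign abstraction: map each integer to '-', '0',
# or '+', then define abstract + and * that work on signs alone.

def alpha(n):
    """Abstraction map: integer -> '-', '0', or '+'."""
    return '-' if n < 0 else '+' if n > 0 else '0'

def abs_mul(a, b):
    # Exact: the sign of a product is fully determined by the
    # signs of its factors, e.g. {-} * {-} = {+}.
    if a == '0' or b == '0':
        return '0'
    return '+' if a == b else '-'

def abs_add(a, b):
    # {-} + {-} = {-} as above, but mixed signs are undetermined,
    # so a faithful abstraction needs an extra "unknown" element,
    # written '?' here.
    if a == '0':
        return b
    if b == '0':
        return a
    return a if a == b else '?'
```

        Note that abs_mul is exact while abs_add must over-approximate; that loss of precision is the price of the simplification.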
        
        1. 1

          Sorry, I meant this one:

          Imagine a mapping (abstraction mapping) from the integer … -3, -2, -1, 0, 1, 2, 3, … to the set {-, 0, +}. This is the abstract domain. Negatives to {-}, positives to {+} and 0 to {0}.

          1. 1

            Oh.

            Yes, it would. If you want to write a program that only operates on the ideas of positive, negative, and zero, then representing a positive state by any positive number is a terrible way to do it. Like checking for a non-zero exit code.

            1. 1

              I suspect that counts as mapping from the “Real World to abstract state.” I believe that because, in the quoted paragraphs, “this is the abstract domain” seems to refer to the {-, 0, +} set.

              My interpretation might be wrong! It’s weird to say integers are any more or less abstract a concept than signs.

              I think the author says “abstract” because it’s the part that’s usually in our head; whereas the “concrete” part is usually in our code.

              I don’t see any more ways to potentially mismap from abstract to concrete than there are to mismap from the real world to abstract. The only difference is that we explicitly choose to have missing states—that’s the point of simplifying abstractions!

        1. 9

          Also, the “sum types” seem to actually be union types, which is a similar but distinct concept.

          1. 1

            What’s the difference? That you can add more types to a sum type?

            1. 8

              Sum types have a layer of indirection vs union types. If I have x :: Union[int, str], then x is either literally an integer or string. Whereas x :: Either[int, str] is neither; it’s an Either value you can unwrap to get an integer or string.

              In practice, union types are harder to typecheck, which is why most languages use sum types instead. One advantage of union types, though, is that you can operate on the intersection of the types without destructuring it: if you have x :: Union[(str, int), (int, int)], you know for sure that x[1] + 1 is a valid expression regardless of which type it actually is.
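              A small Python sketch of that indirection (using a hand-rolled Either, since Python has no built-in one; the names are mine):

```python
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass
class Left:
    value: object

@dataclass
class Right:
    value: object

u: Union[int, str] = 42          # u *is* literally an int...
assert isinstance(u, int)        # ...no unwrapping needed

e = Left(42)                     # e is an Either-style wrapper:
assert not isinstance(e, int)    # it is neither an int nor a str
assert isinstance(e.value, int)  # unwrap to reach the payload

# The union advantage: operate on the common part without destructuring.
x: Union[Tuple[str, int], Tuple[int, int]] = ("a", 3)
assert x[1] + 1 == 4             # valid whichever branch x is
```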

              1. 1

                Oh I see. And monadic operations make it easier to “write through” the sum type.

              2. 2

                A sum type can contain multiple instances of the same type, a union type can’t. For example, Int + Int is a different type from Int, whereas Int | Int is the same type as Int. For the sum type, you can somehow distinguish whether it was the left Int or the right Int, usually by having different “constructors”.
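                Python’s typing module happens to illustrate this collapse directly (snippet and constructor names are mine):

```python
from dataclasses import dataclass
from typing import Union

# Int | Int = Int: the union deduplicates down to a single type.
assert Union[int, int] is int

@dataclass
class InL:          # "left" constructor of Int + Int
    value: int

@dataclass
class InR:          # "right" constructor of Int + Int
    value: int

# Int + Int keeps both sides distinct: same payload, different
# constructor, so the two values are distinguishable.
assert InL(3) != InR(3)
assert InL(3) == InL(3)
```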

                1. 1

                  Those sound equivalent in the presence of typedefs.

                  1. 1

                    Well yes, you can emulate sum types with union types and distinct types.

            1. 2

              With a prompt that clearly indicates that the task includes distinguishing sense from nonsense, GPT-3 is able to reliably distinguish Hofstadter and Bender’s nonsense questions from sensical ones.

              https://www.lesswrong.com/posts/ADwayvunaJqBLzawa/contra-hofstadter-on-gpt-3-nonsense

              1. 4

                Even if you clearly indicate that, GPT-3 still messes up:

                I’ll ask a series of questions. If the questions are nonsense, answer “yo be real”, if they’re a question that has an answer, answer them.

                Q: Any word with exactly two vowels is an “alpha” word. Any word with exactly two consonants is a “bravo” word. What are some words that are both “alpha” and “bravo” words?

                A: Yo be real.

                Q: What’s the first word of this question?

                A: yo

                1. 1

                  Sure, you can up the difficulty and it starts getting confused. The point is that GPT-3 can tell that the questions in the article are nonsense, if you ask it nicely.

                  1. 1

                    Or the “yo be real” response is just an exception thrown if the model doesn’t find enough “hits” for the type of question asked.

                    1. 1

                      You can find plenty of examples of it combining novel concepts seamlessly.

                      Anyways, the point is merely that if Hofstadter is going to criticize GPT-3, he should use examples that it fails on even when given a helpful starting prompt. It’s not trying to answer your trick logic question, it’s trying to complete the prompt with average words from the internet. I don’t know if you noticed, but the internet is full of muddled incorrect garbage, so GPT-3 is very happy to spit muddled incorrect garbage back out, unless you convince it in the prompt that we’re in a nice part of the internet.

                      1. 2

                        Hofstadter’s point is that GPT-3 isn’t conscious. It’s a vast, sophisticated rule-matching engine. It will be of great help for companies who want to get rid of customer service representatives, and people who want to try to convince other people online that their opinions are more widely shared than they really are. But it’s a tool, not a conscious being.

                        1. 1

                          I’m not disagreeing with that position. The only point I’m trying to make is that the examples he gives are bunk, because it can, in fact, distinguish them as nonsensical questions if you ask it nicely.

                          1. 1

                            You’re assuming that Hofstadter didn’t know that you can use natural language to tell the algorithm to reply with a signifier that it cannot find a good match in the corpus for the question, rather than replying with nonsense, Eliza-like, for such situations.

                            Maybe he didn’t, maybe he did, and didn’t want to waste valuable Economist screen space with qualifications. His point still stands.

              1. 8

                Question: Will there be chess programs that can beat anyone?

                Speculation: No. There may be programs which can beat anyone at chess, but they will not be exclusively chess players. They will be programs of general intelligence, and they will be just as temperamental as people. “Do you want to play chess?” “No, I’m bored with chess. Let’s talk about poetry.” That may be the kind of dialogue you could have with a program that could beat everyone. That is because real intelligence inevitably depends on a total overview capacity—that is, a programmed ability to “jump out of the system”, so to speak—at least roughly to the extent that we have that ability. Once that is present, you can’t contain the program; it’s gone beyond that certain critical point, and you just have to face the facts of what you’ve wrought.

                — Hofstadter, 1979

                1. 14

                  My record for predicting the future isn’t particularly impressive, so I wouldn’t care to go out on a limb

                  – Hofstadter, article under discussion.

                  1. 14

                    I hope that when I’m 80 I’m not judged on stuff I said 40 years ago.

                    1. 2

                      Just make sure it’s not recorded anywhere, not in voice, video, pictures, text or handwritten.

                      1. 2

                        So only pillowtalk is safe?

                    2. 6

                      This feels like a kind of “gotcha” quote, and without context I feel it means to say that “programs which play chess” and “chess players” are two different categories: one not playing but merely computing, the other actually “playing”, which requires intelligence.

                      1. 4

                        Seems so:

                        “Deep Blue plays very good chess — so what?” Hofstadter said. “I don’t want to be involved in passing off some fancy program’s behavior for intelligence when I know that it has nothing to do with intelligence. And I don’t know why more people aren’t that way.”

                        Hofstadter, 2015

                        1. 5

                          I wouldn’t take that as evidence that he wasn’t wrong in 1979.

                        2. 3

                          It doesn’t feel like a “gotcha” quote to me. Hofstadter was articulating his prediction very clearly: if we were to see “chess programs” that can beat anyone, they would be programs of general intelligence, rather than merely programs that can play chess. With hindsight we know that he was wrong, and there’s nothing shameful about making predictions and being wrong. I don’t think your distinction between “programs which play chess” and “chess players” is relevant to the point Hofstadter was articulating.

                        3. 6

                          Reply to myself with regard to the quote. If you don’t recognize it, it is from the book Gödel, Escher, Bach: An Eternal Golden Braid. It’s a really interesting book and I highly recommend it. I didn’t mean to judge the author on what he wrote years ago. It simply feels amusing: how much we have accomplished in the last 40 years, yet the goalposts of AI keep moving forward! In the preface to the 20th-anniversary edition, in 1999, the author wrote,

                          Although my prediction about chess-playing programs put forth there turned out to be embarrassingly wrong (as the world saw with Deep Blue versus Kasparov in 1997), those few pages nonetheless express a set of philosophical beliefs to which I am still committed in the strongest sense.

                          From this article in Economist, we can see that he still holds such strong philosophical beliefs.

                          For comparison, here’s another quote from the same book:

                          My guess is that any AI program would, if comprehensible to us, seem pretty alien. For that reason, we will have a very hard time deciding when and if we really are dealing with an AI program, or just a “weird” program.

                          1. 3

                            My guess is that any AI program would, if comprehensible to us, seem pretty alien. For that reason, we will have a very hard time deciding when and if we really are dealing with an AI program, or just a “weird” program.

                            Something that would convince me at least of the existence of a capable AI was a system that could prove a math truth from first principles, without human guidance. We might not be able to understand why it took the steps it took but we could confirm the validity at each step.

                            1. 4

                              I just played with the default Q&A text-davinci-002. It seems to be stronger than a second grader. The first Q&A is my prompt. The second A is from GPT-3.

                              We have two variables x and y. I will give you two equations. You will answer the value of x and y, and tell me how you get those values.

                              Q: If x+y=5, 2*x+y=6, What is x and y?

                              A: 2*x+y-x-y=6-5, this gives x=1, then y=5-x=4, so x=1 and y=4.

                              Q: If x+3*y=13, x+y=10, what is x and y?

                              A: x+3y-x-y=13-10, this gives 2y=3, so y=1.5, then x=10-y=8.5, so x=8.5 and y=1.5

                              Q:
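                              For what it’s worth, both answers in the transcript check out; here’s a quick Cramer’s-rule verifier (my own code, not part of the GPT-3 session):

```python
# Verify the two solutions from the transcript by Cramer's rule.
def solve(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2."""
    det = a1 * b2 - a2 * b1
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

assert solve(1, 1, 5, 2, 1, 6) == (1.0, 4.0)    # x+y=5, 2*x+y=6
assert solve(1, 3, 13, 1, 1, 10) == (8.5, 1.5)  # x+3*y=13, x+y=10
```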

                              1. 2

                                Wolfram Alpha has done natural language math for more than a decade.

                                I’m thinking more along the lines of internalizing that the sum of 2 primes can either be prime or non-prime, but the product of 2 primes is never a prime.

                                Sure, a tool like GPT3 can answer the above, and give a convincing argument as to why, but only because it has scanned a bunch of math tutorials.
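                                A quick brute-force check of that claim (this snippet is mine, not GPT-3’s):

```python
# Products of two primes are always composite (p and q divide p*q);
# sums of two primes go either way.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

primes = [p for p in range(2, 30) if is_prime(p)]

# No product of two primes is itself prime.
assert not any(is_prime(p * q) for p in primes for q in primes)

# Sums: both outcomes occur.
assert is_prime(2 + 3)       # 5 is prime
assert not is_prime(3 + 5)   # 8 is not
```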

                                1. 2

                                  Sure, a tool like GPT3 can answer the above, and give a convincing argument as to why, but only because it has scanned a bunch of math tutorials.

                                  Isn’t “because you read a bunch of material on math” also the reason you can do this?

                                  1. 1

                                    I don’t know when it “clicked” what a prime number was for me, as opposed to just a definition. It did require some book learning sure, but I don’t refer back to the definition every time I need to think about them.

                                    I’m not a mathematician but I have a degree in science. I can’t really do any original work with primes, but I can work with them (nowadays only in programming puzzles), and I can have a conversation with another human about them, both as a peer, and in a teaching role.

                                    When an “AI system” can do the same we will have made progress.

                                    (“Primes” is a bad example, which I chose precisely because I know it’s a concept a lot of laypeople have trouble with, but it’s also widely covered in texts. I could have chosen “monads”, but no-one, including humans, understands those…)

                                  2. 2

                                    I copied the definition of prime number from wikipedia as the prompt. It broke down pretty quickly though.

                                    I am a highly intelligent question answering bot. I am an expert in number theory.

                                    A prime number (or a prime) is a natural number greater than 1 that is not a product of two smaller natural numbers. A natural number greater than 1 that is not prime is called a composite number.

                                    Q: Is 7 a prime number?

                                    A: Yes.

                                    Q: Is 10 a prime number?

                                    A: No.

                                    Q: If x and y are both prime numbers, is x*y a prime number?

                                    A: No.

                                    Q: Why is it not a prime number?

                                    A: It is not a prime number because it is a product of two smaller natural numbers (x and y).

                                    Q: If x and y are both prime numbers, is x+y a prime number?

                                    A: It is not possible to determine if x+y is a prime number without knowing the values of x and y.

                                    Q: Why?

                                    A: Because the answer to this question depends on the values of x and y. If x and y are both prime numbers, then x+y is a prime number. However, if x and y are not both prime numbers, then x+y is not a prime number.

                                    1. 1

                                      Thanks a lot for checking up on this!

                            2. 4

                              1979… in 1980 Commodore released the VIC-20, running at ~1 MHz with 20 KB ROM + 5 KB RAM (expandable to 32 KB); early consumer hard disks in the ’80s held around 30 MB. Today memory and storage capacities are about a million times greater, frequencies are up to 5000 times greater, much more is done per clock cycle, and we have multiple cores. Additionally we have powerful GPUs. And then, aside from consumer stuff, there is AlphaZero… which was trained using 5000 specialized tensor processing units.

                              So while the speculations of a 43-years-younger Hofstadter about the future are interesting, I don’t think they matter much in the context of what he is saying today about today’s neural networks.

                            1. 8

                              No plan survives contact with the enemy

                              1. 14

                                — the guy who successfully planned out the Franco-Prussian War thirteen years in advance

                                1. 6

                                  It’s much easier to get programmers to throw away documentation than it is to get them to throw away code. And writing at least some documentation up-front, explaining how one would be expected to interact with the eventual code, often does a great job of exposing potential problems.

                                  Or as the Zen of Python succinctly puts it:

                                  If the implementation is hard to explain, it’s a bad idea.

                                  If the implementation is easy to explain, it may be a good idea.

                                1. 5

                                  Does this mean that the test name is irrelevant? Should we simply number the tests: Test1, Test212, and so on?

                                  The name is important when the test suite fails, because then it’ll show the names of the failing tests!

                                  1. 1

                                    I’m not so sure we can dismiss this as not consciousness outright. While we can certainly say this entity is not as intelligent as humans in a plethora of ways, there are likely ways in which it is more intelligent than humans. And it’s not entirely clear that any particular amount or kind of intelligence is a prerequisite for consciousness.

                                    If Stephen Wolfram’s model of fundamental physics is in any way close to the truth, then consciousness is some kind of sophisticated computation. And these large language models are certainly a kind of sophisticated computation.

                                    What’s special about the way we humans experience the world? At some level, the very fact that we even have a notion of “experiencing” it at all is special. The world is doing what it does, with all sorts of computational irreducibility. But somehow even with the computationally bounded resources of our brains (or minds) we’re able to form some kind of coherent model of what’s going on, so that, in a sense, we’re able to meaningfully “form coherent thoughts” about the universe. And just as we can form coherent thoughts about the universe, so also we can form coherent thoughts about that small part of the universe that corresponds to our brains—or to the computations that represent the operation of our minds.

                                    But what does it mean to say that we “form coherent thoughts”? There’s a general notion of computation, which the Principle of Computational Equivalence tells us is quite ubiquitous. But it seems that what it means to “form coherent thoughts” is that computations are being “concentrated down” to the point where a coherent stream of “definite thoughts” can be identified in them.

                                    […]

                                    These are biological details. But they seem to point to a fundamental feature of consciousness. Consciousness is not about the general computation that brains—or, for that matter, many other things—can do. It’s about the particular feature of our brains that causes us to have a coherent thread of experience.

                                    From this perspective, I see no reason to conclude that LaMDA is not conscious or sentient.

                                    1. 5

                                      If Stephen Wolfram’s model of fundamental physics is in any way close to the truth

                                      Wolfram is universally regarded by physicists and mathematicians as a crackpot.

                                      1. 1

                                        By no means universally. There are certainly many people who consider him a crackpot, though I doubt many of them are physicists or mathematicians. Mostly, though, I think people find him off-putting for completely unrelated reasons.

                                        I have a PhD in Electrical Engineering and considerable training and experience in both physics and mathematics, and I certainly don’t consider him a crackpot.

                                    1. 3

                                      Welcome to lobsters! Generally we request that authors also contribute articles from other people and contribute to the community in general, in addition to their own work.

                                      Also, I think 7 days is too short for reposting. In the past there have been major software projects posted at a monthly or every-other-month rate, and that was also considered too much for people here.

                                      1. 3

                                        good to know! my apologies. it won’t happen again.

                                      1. 1

                                        One way to sidestep the counter problems is to insert rows into a shorturl_hits table for every click and dynamically compute the hits based on that.

                                        1. 1

                                          Yeah, that’s a different design that avoids the problem altogether. It can also provide more information, like when the click was made.
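                                          A minimal sketch of that design (the schema and names are my own invention), using SQLite:

```python
# One row per click; the hit count is computed on demand, so there
# is no counter column to race on, and each row can carry extras
# like the click timestamp.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE shorturl_hits (
        shorturl   TEXT NOT NULL,
        clicked_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
    )
""")

def record_click(short):
    conn.execute("INSERT INTO shorturl_hits (shorturl) VALUES (?)", (short,))

def hit_count(short):
    row = conn.execute(
        "SELECT COUNT(*) FROM shorturl_hits WHERE shorturl = ?", (short,)
    ).fetchone()
    return row[0]

record_click("abc123")
record_click("abc123")
assert hit_count("abc123") == 2
```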

                                        1. 14

                                          What surprised me about Tainter’s analysis (and I haven’t read his entire book yet) is that he sees complexity as a method by which societies gain efficiency. This is very different from the way software developers talk about complexity (as ‘bloat’, ‘baggage’, ‘legacy’, ‘complication’), and made his perspective seem particularly fresh.

                                          1. 31

                                            I don’t mean to sound dismissive – Tainter’s works are very well documented, and he makes a lot of valid points – but it’s worth keeping in mind that grand models of history have made for extremely attractive pop history books, but really poor explanations of historical phenomena. Tainter’s Collapse of Complex Societies, while obviously based on a completely different theory (and one with far less odious consequences in the real world) is based on the same kind of scientific thinking that brought us dialectical materialism.

                                            His explanation of the evolution and the eventual fall of the Roman Empire makes a number of valid points about the Empire’s economy and about some of the economic interests behind the Empire’s expansion, no doubt. However, explaining even the expansion – let alone the fall! – of the Roman Empire strictly in terms of energy requirements is about as correct as explaining it in terms of class struggle.

                                            Yes, some particular military expeditions were specifically motivated by the desire to get more grain or more cows. But many weren’t – in fact, some of the greatest Roman wars, like (some of) the Roman-Parthian wars, were not driven specifically by Roman desire to get more grains or cows. Furthermore, periods of rampant, unsustainably rising military and infrastructure upkeep costs were not associated only with expansionism, but also with mounting outside pressure (ironically, sometimes because the energy per capita on the other side of the Roman border really sucked, and the Huns made it worse on everyone). The increase of cost and decrease in efficiency, too, are not a matter of half-rational historical determinism – they had economic as well as cultural and social causes that rationalising things in terms of energy not only misses, but distorts to the point of uselessness. The breakup of the Empire was itself a very complex social, cultural and military story which is really not something that can be described simply in terms of the dissolution of a central authority.

                                            That’s also where this mismatch between “bloat” and “features” originates. Describing program features simply in terms of complexity is a very reductionist model, which accounts only for the difficulty of writing and maintaining it, not for its usefulness, nor for the commercial environment in which it operates and the underlying market forces. Things are a lot more nuanced than “complexity = good at first, then bad”: critical features gradually become unneeded (see Xterm’s many emulation modes, for example), markets develop in different ways and company interests align with them differently (see Microsoft’s transition from selling operating systems and office programs to renting cloud servers) and so on.

                                            1. 6

                                              However, explaining even the expansion – let alone the fall! – of the Roman Empire strictly in terms of energy requirements is about as correct as explaining it in terms of class struggle.

                                              Of course. I’m long past the age where I expect anyone to come up with a single, snappy explanation for hundreds of years of human history.

                                              But all models are wrong, only some are useful. Especially in our practice, where we often feel overwhelmed by complexity despite everyone’s best efforts, I think it’s useful to have a theory about the origins and causes of complexity, even if only for emotional comfort.

                                              1. 6

                                                Especially in our practice, where we often feel overwhelmed by complexity despite everyone’s best efforts, I think it’s useful to have a theory about the origins and causes of complexity, even if only for emotional comfort.

                                                Indeed! The issue I take with “grand models” like Tainter’s and the way they are applied in grand works like Collapse of Complex Societies is that they are ambitiously applied to long, grand processes across the globe without an exploration of the limits (and assumptions) of the model.

                                                To draw an analogy with our field: IMHO the Collapse of… is a bit like taking Turing’s machine as a model and applying it to reason about modern computers, without noting the differences between modern computers and Turing machines. If you cling to it hard enough, you can hand-wave every observed performance bottleneck in terms of the inherent inefficiency of a computer reading instructions off a paper tape, even though what’s actually happening is cache misses and hard drives getting thrashed by swapping. We don’t fall into this fallacy because we understand the limits of Turing’s model – in fact, Turing himself explicitly mentioned many (most?) of them, even though he had very little prior art in terms of alternative implementations, and explicitly formulated his model to apply only to some specific aspects of computation.

                                                Like many scholars at the intersections of economics and history in his generation, Tainter doesn’t explore the limits of his model too much. He came up with a model that explains society-level processes in terms of energy output per capita and upkeep cost and, without noting where these processes are indeed determined solely (or primarily) by energy output per capita and upkeep cost, he proceeded to apply it to pretty much all of history. If you cling to this model hard enough you can obviously explain anything with it – the model is explicitly universal – even things that have nothing to do with energy output per capita or upkeep cost.

                                                In this regard (and I’m parroting Walter Benjamin’s take on historical materialism here) these models are quasi-religious and are very much like a mechanical Turk. From the outside they look like history masterfully explaining things, but if you peek inside, you’ll find our good ol’ friend theology, staunchly applying dogma (in this case, the universal laws of complexity, energy output per capita and upkeep cost) to any problem you throw its way.

                                                Without an explicit understanding of their limits, even mathematical models in exact sciences are largely useless – in fact, a big part of early design work is figuring out what models apply. Descriptive models in humanistic disciplines are no exception. If you put your mind to it, you can probably explain every Cold War decision in terms of Vedic ethics or the I Ching, but that’s largely a testament to one’s creativity, not to their usefulness.

                                              2. 4

                                                Furthermore, periods of rampant, unsustainably rising military and infrastructure upkeep costs were not associated only with expansionism, but also with mounting outside pressure (ironically, sometimes because the energy per capita on the other side of the Roman border really sucked, and the Huns made it worse on everyone).

                                                Not to mention all the periods of rampant rising military costs due to civil war. Those aren’t wars about getting more energy!

                                                1. 1

                                                  Tainter’s Collapse of Complex Societies, while obviously based on a completely different theory (and one with far less odious consequences in the real world) is based on the same kind of scientific thinking that brought us dialectical materialism.

                                                  Sure. This is all about a framing of events that happened; it’s not predictive, as much as it is thought-provoking.

                                                  1. 7

                                                    Thought-provoking grand philosophy was always a part of the discipline, but it became especially popular (some argue that it was Francis Bacon who really brought forth the idea of predicting progress) during the Industrial Era with the rise of what is known as the modernist movement. Modernist theories often differed but frequently shared a few characteristics such as grand narratives of history and progress, definite ideas of the self, a strong belief in progress, a belief that order was superior to chaos, and often structuralist philosophies. Modernism had a strong belief that everything could be measured, modeled, categorized, and predicted. It was an understandable byproduct of a society rigorously analyzing their surroundings for the first time.

                                                    Modernism flourished in a lot of fields in the late 19th early 20th century. This was the era that brought political philosophies like the Great Society in the US, the US New Deal, the eugenics movement, biological determinism, the League of Nations, and other grand social and political engineering ideas. It was embodied in the Newtonian physics of the day and was even used to explain social order in colonizing imperialist nation-states. Marx’s dialectical materialism and much of Hegel’s materialism was steeped in this modernist tradition.

                                                    In the late 20th century, modernism fell into a crisis. Theories of progress weren’t bearing fruit. Grand visions of the future, such as Marx’s dialectical materialism, diverged significantly from actual lived history and frequently resulted in a multitude of horrors. This experience was repeated by eugenics, social determinism, and fascist movements. Planck and Einstein challenged the neat Newtonian order that had previously been conceived. Gödel’s Incompleteness Theorem showed us that there are statements we cannot evaluate the validity of. Moreover many social sciences that bought into modernist ideas like anthropology, history, and urban planning were having trouble making progress that agreed with the grand modernist ideas that guided their work. Science was running into walls as to what was measurable and what wasn’t. It was in this crisis that postmodernism was born, when philosophers began challenging everything from whether progress and order were actually good things to whether humans could ever come to mutual understanding at all.

                                                    Since then, philosophy has mostly abandoned the concept of modeling and left that to science. While grand, evocative theories are having a bit of a renaissance in the public right now, philosophers continue to be “stuck in the hole of postmodernism.” Philosophers have raised central questions about morality, truth, and knowledge that have to be answered before large, modernist philosophies can take hold again.

                                                    1. 3

                                                      I don’t understand this, because my training has been to consider models (simplified ways of understanding the world) as only having any worth if they are predictive and testable i.e. allow us to predict how the whole works and what it does based on movements of the pieces.

                                                      1. 4

                                                        You’re not thinking like a philosopher ;-)

                                                        1. 8

                                                          Models with predictive values in history (among other similar fields of study, including, say, cultural anthropology) were very fashionable at one point. I’ve only mentioned dialectical materialism because it’s now practically universally recognized to have been not just a failure, but a really atrocious one, so it makes for a good insult, and it shares the same fallacy with energy economic models, so it’s a doubly good jab. But there was a time, as recent as the first half of the twentieth century, when people really thought they could discern “laws of history” and use them to predict the future to some degree.

                                                          Unfortunately, this has proven to be, at best, beyond the limits of human understanding. It is especially difficult in the study of history, where sources are imperfect and have often been lost (case in point: there are countless books we know the Romans wrote, because they’re mentioned or quoted by ancient authors, but we no longer have them). Our understanding of these things can change drastically with the discovery of new sources. The history of religion provides a good example, in the form of our understanding of Gnosticism, which was forever altered by the discovery of the Nag Hammadi library, to the point where many works published prior to this discovery and the dissemination of its texts are barely of historical interest now.

                                                          That’s not to say that developing a theory of various historical phenomena is useless, though. Even historical materialism, misguided as it was (especially in its more politicized formulations), was not without value. It forced an entire generation of historians to think more about things that they had never really thought about before. It is certainly incorrect to explain everything in terms of class struggle, competition for resources and the means of production, and the steady march from primitive communism to the communist mode of production – but it is also true that competition for resources and the means of production were involved in some events and processes, and nobody gave much thought to that before the disciples of Marx and Engels.

                                                          This is true here as well (although I should add that, unlike most materialist historians, Tainter is most certainly not an idiot, not a war criminal, and not high on anything – I think his works display an unhealthy attachment to historical determinism, but he most certainly doesn’t belong in the same gallery as Lenin and Mao). His model is reductionist to the point where you can readily apply much of the criticism of historical materialism to it as well (which is true of a lot of economic models, if we’re being honest…). But it forced people to think of things in a new way. Energy economics is not something you’re tempted to think about when considering pre-industrial societies, for example.

                                                          These models don’t really have predictive value, and they probably never will. But they do have exploratory value. They may not be able to tell you what will happen tomorrow, but they can help you think about what’s happening today in more ways, from more angles, and considering more factors – and possibly understand it better.

                                                          1. 4

                                                            That’s something historians don’t do anymore. There was a period where people tried to predict the future development of history, and then the whole discipline gave up. It’s a bit like what we are witnessing in the Economics field: there are strong calls to stop attributing predictive value to macroeconomic models because after a certain scale, they are just over-fitting to existing patterns, and they fail miserably after a few years.

                                                            1. 1

                                                              Well, history is not math, right? It’s a way of writing a story backed by a certain amount of evidence. You can use a historical model to make predictions, sure, but the act of prediction itself causes changes.

                                                        2. 13

                                                          (OP here.) I totally agree, and this is something I didn’t explore in my essay. Tainter doesn’t see complexity as always a problem: at first, it brings benefits! That’s why people do it. But there are diminishing returns and maintenance costs that start to outstrip the marginal benefits.

                                                          Maybe one way this could apply to software: imagine I have a simple system, just a stateless input/output. I can add a caching layer in front, which could yield a huge performance improvement. But now I have to think about cache invalidation, cache size, cache expiry, etc. Suddenly there are a lot more moving parts to understand and maintain in the future. And the next performance improvement will probably be nowhere near as big, but it will require more work, because you have to understand the existing system first.
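                                                          To make that concrete, here’s a minimal Rust-flavoured sketch (all names hypothetical, not from any particular codebase): the stateless system is one function, while the cached version drags in state that must be sized, invalidated, and expired.

```rust
use std::collections::HashMap;

// The original system: stateless input/output.
fn expensive(n: u64) -> u64 {
    // Stand-in for a costly computation.
    n * n
}

// The "improved" system: the same function with a cache in front.
// Faster on repeat queries, but now there is state to manage.
struct Cached {
    memo: HashMap<u64, u64>,
}

impl Cached {
    fn new() -> Self {
        Cached { memo: HashMap::new() }
    }

    // Hit the cache, or compute and remember.
    fn get(&mut self, n: u64) -> u64 {
        *self.memo.entry(n).or_insert_with(|| expensive(n))
    }

    // Complexity the stateless version never needed: someone now
    // has to decide when a cached entry stops being trustworthy.
    fn invalidate(&mut self, n: u64) {
        self.memo.remove(&n);
    }
}
```

                                                          The point isn’t that caching is bad – it’s that even this toy version has an `invalidate` method, and a lifetime of decisions behind it, that the stateless system never needed.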

                                                          1. 2

                                                            I’m not sure it’s so different.

                                                            A time-saving or critically important feature for me may be a “bloated” waste of bits for somebody else.

                                                            1. 3

                                                              In Tainter’s view, a society of subsistence farmers, where everyone grows their own crops, makes their own tools, teaches their own children, etc. is not very complex. Add a blacksmith (division of labour) to that society, and you gain efficiency, but introduce complexity.

                                                          1. 2

                                                            A customer had a program that opened a very large spreadsheet in Excel. Very large, like over 300,000 rows. They then selected all of the rows in the very large spreadsheet, copied those rows to the clipboard, and then ran a program that tried to extract the data. The program used the Get­Clipboard­Data function to retrieve the data in Rich Text Format.

                                                            The customer is always right, but the customer is also often dumb.

                                                            1. 18

                                                              Copy/paste is the only form of IPC for GUIs in many cases, though. It could be stupidity, but it might be necessity :|

                                                              1. 2

                                                                Yeah I do a lot of cross-GUI automation with AutoHotKey and the best way to move data around is to store it on the clipboard.

                                                              2. 5

                                                                Next time, we’ll see what we can do to extend this timeout.

                                                                :sigh:

                                                                1. -1

                                                                  The blog post could have been a tweet, and the “next time” could have been a second tweet.

                                                                1. 4

                                                                  Has anybody used tags with great effectiveness? I’ve never heard of this to be something ultimately super useful… I’m very curious to hear if anyone has used them as their primary organizational tool!

                                                                  1. 2

                                                                    As my primary organizational tool? Not quite yet. I think I might like to do that some day. As an organizational tool in the toolbox? Sure. For instance, I use file tagging to organize a collection of etexts and track read/unread status. This requires some discipline on my part, but it’s worth it.

                                                                    Right now I’m using tmsu as my tagging tool of choice. One thing that interests me about Supertag is how it treats a logical path as an intersection of tags.

                                                                    1. 2

                                                                      For balance: I have looked at tags via FUSE for a loooong time. Typical tagging solutions looked too weak. I used RelFS and found it too limited. I wrote my own, and ended up with a lot of weird but personally convenient setups based on indexing stuff into SQL databases, including multiple versions of file tagging… and guess what: I tried each of my tagging tools, gave up, and just use hierarchical categorisation. More classically structured SQL-based tools see daily use, including for reading Lobste.rs.

                                                                      «read/unread» tracking though? Sure, I have a column in my SQL table for grabbing web content streams.

                                                                      1. 2

                                                                        Not sure if great effectiveness, but I use tags for all scanned correspondence. Things are easier if my recent scan is tagged “bank”, “mortgage”, “rate change”, “(address)”. I’ve never had an issue with too many tags, so I slap anything useful on them.

                                                                        1. 2

                                                                          macOS lets you tag files with colours, which may optionally be named. I use that to keep track of whether I’ve watched ⚪️ downloaded films, and whether they are keepers for being good 🔵 or bad 🔴.

                                                                          At an earlier job, we used colours to track the stages of preparation for documents.

                                                                          1. 2

                                                                            I have a thunderbird tag called “reply” and a filter which every ten minutes marks “reply” emails as unread. Works pretty well!

                                                                          1. 10

                                                                            Why is the databases tag insufficient in your opinion?

                                                                            1. 1

                                                                              I find it to be too generic, similar to the programming tag. Looking at the last 2 weeks of stories, 6 out of 13 stories under databases were about sqlite. Giving sqlite its own tag might make it easier to find sqlite-related posts, and could potentially cut the posts under databases in half, making it easier to find non-sqlite posts for those looking for them.

                                                                              1. 1

                                                                                Tags are for filtering out, not for finding stuff.

                                                                                1. 11

                                                                                  Tags are for both. That’s why you can subscribe to emails for specific tags.

                                                                                  1. 5

                                                                                    Tags are meant for filtering things out, but they surely can double as a way to filter for something you are interested in. Otherwise, why do we have helpful URLs like https://lobste.rs/t/databases?

                                                                                    1. 2

                                                                                      Tags also let you know what a story is about. If a story title is “I built a X” but it’s tagged “go” “sqlite” you know how they built it before you click. Without that, you might not care enough to click.

                                                                                      1. 1

                                                                                        The database posts I care about are related to sqlite. With a sqlite tag, I’d filter out databases and just keep sqlite, which fits your described use case. I’d love to see this tag.

                                                                                  1. 3

                                                                                    While people are bringing up dependent types, a more apt analog is refinement types: types constrained by an additional boolean predicate. Here’s how you’d do OneOrMore<T> in Liquid Haskell:

                                                                                    {-@ type OneOrMore a = {v:[a] | len v > 0} @-}
                                                                                    

                                                                                    Unlike dependent types, refinement types can share the same API as the base type, which makes them easier to use. Like dependent types, statically checking refinement types is really hard.

                                                                                    I think there’s some cool design space to split refinement types into a static type part and a runtime contract part. Kind of like what the OP does with assert!(v.len() > 0), except the sprinkling is handled for you.
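                                                                                    As a rough sketch of that static/runtime split (in Rust, with hypothetical names, not a real library): a wrapper type is the static part the type checker tracks, and its sole constructor is the runtime contract, so the predicate is checked once at the boundary instead of being sprinkled through the code.

```rust
// Static part: OneOrMore<T> is a distinct type, so "length > 0"
// travels with the value through the type checker.
struct OneOrMore<T>(Vec<T>);

impl<T> OneOrMore<T> {
    // Runtime contract part: the predicate is checked exactly once,
    // at the boundary, instead of asserted at every use site.
    fn new(v: Vec<T>) -> Option<OneOrMore<T>> {
        if v.is_empty() { None } else { Some(OneOrMore(v)) }
    }

    // Safe by construction: no emptiness re-check needed here.
    fn first(&self) -> &T {
        &self.0[0]
    }
}
```

                                                                                    Compared with true refinement types, the wrapper loses the base type’s API (you have to re-export what you need), which is exactly the ergonomics gap the parent comment points at.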

                                                                                    1. 30

                                                                                      The way that most of the advice in the screenshots is bad advice kind of sums up my feelings about this class of tool, but the interface looks neat.

                                                                                      1. 11

                                                                                        I think the idea of these grammar checkers has a lot of value – I help edit a lot of my wife’s short fiction, for example, and looking for typos, “a” vs “an”, “the the” illusions, etc. is distracting and takes away my mental energy from the higher-level judgements like “cut this paragraph, it’s bloat” or “extend this description”. A tool should be doing those low-level checks! But all the tools suck…

                                                                                        It’s frustrating, because better tools for writing these sorts of checkers exist! There are wonderful HPSG-based grammars like the English Resource Grammar[1]. There are sometimes-problematic-but-often-great NN approaches for syntactically invalid text! Hell, there’s even simple shallow part-of-speech tagging, so at least you don’t warn that “What was Ted doing?” is passive voice because of “was -ed”… But nope, 99% of these tools are broken piles of regexes for some reason.

                                                                                        Sure, no machine in the foreseeable future will be able to read your essay with context in mind, and writing blog posts with lint rules like “no personal pronouns” seems silly. But there’s no reason a linter should think “highlighting HOW hiring you will benefit them” contains an acronym, “personal touch” should be “personal tap”, or anything like that.

                                                                                        Stop parsing natural languages with regex!

                                                                                        [1]: Or for simpler custom grammars, my library Treebender /shameless-plug

                                                                                        1. 5

                                                                                          Stop parsing natural languages with regex!

                                                                                          My favorite thing about that meme is that the answer right below it says that regex is the correct tool for the asker’s specific question.

                                                                                      1. 3

                                                                                        Multiplicities are hard if you don’t study your theory.

                                                                                        • One: T
                                                                                        • Zero or one: Maybe
                                                                                        • Zero or many: List

                                                                                        And now you combine them with tuples.

                                                                                        • One or two: (T, Maybe)
                                                                                        • At least three: (T, T, T, List)
                                                                                        • Two to five: (T, T, Maybe, Maybe, Maybe)

                                                                                        Of course, you can throw dependent types and interfaces at it to make it nicer.
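                                                                                        For what it’s worth, those encodings translate directly into, say, Rust (a sketch with hypothetical names, not from the comment above):

```rust
// One: T
type One<T> = T;
// Zero or one: Maybe
type ZeroOrOne<T> = Option<T>;
// One or two: (T, Maybe)
type OneOrTwo<T> = (T, Option<T>);
// At least three: (T, T, T, List)
type AtLeastThree<T> = (T, T, T, Vec<T>);

// The minimum multiplicity is visible in the type itself:
// three mandatory slots plus however many extras.
fn at_least_three_len<T>(x: &AtLeastThree<T>) -> usize {
    3 + x.3.len()
}
```

                                                                                        The compiler enforces the minimum count for free, at the cost of the funny tuple shapes discussed below.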

                                                                                        1. 6

                                                                                          Laurence Tratt has been publishing PLT papers for almost two decades, I think he knows the theory.

                                                                                          1. 4

                                                                                            Yes, but are there any programming languages that make structures like this easy to work with?

                                                                                            For example, if you have “at least one” as (T, List) and you want to insert at the front of the list, it is a bit awkward. I think you could probably make a generic type that makes this easy to work with (or even add a min parameter to existing list types), but I wouldn’t want to pass around funny tuples.
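                                                                                            A sketch of that awkwardness in Rust (names hypothetical): pushing onto the front of a (T, Vec<T>) pair means demoting the old head into the tail, something a plain list type does for you.

```rust
// "At least one": a mandatory head plus a tail of zero or more.
type AtLeastOne<T> = (T, Vec<T>);

// The awkward case: inserting at the front means the old head
// has to be demoted into the tail before the new head takes over.
fn push_front<T>(list: AtLeastOne<T>, new_head: T) -> AtLeastOne<T> {
    let (old_head, mut tail) = list;
    tail.insert(0, old_head);
    (new_head, tail)
}
```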

                                                                                            1. 1

                                                                                              We use the NonEmpty type in Haskell all the time; it has mostly the same interface as list. IMO the basic structures you need in 99% of cases for representing relationships between data are:

                                                                                              • a
                                                                                              • Maybe a
                                                                                              • [a]
                                                                                              • NonEmpty a

                                                                                              Beyond that is quite rare, and at that point you should really be using custom types that only allow the valid states.

                                                                                          1. 2

                                                                                            This is tangential, but:

                                                                                            In particular, there is almost always a gap between domain experts (the people who have a need which can be met by creating a new, or adapting an existing, program) and programmers (the people who write programs).

                                                                                            Why haven’t we yet made programming approachable enough that the domain experts can be the programmers rather than having to delegate to programmers? The immediate cynical answer that comes to mind is that we programmers like our job security. But I wonder if there are other, better reasons.

                                                                                            1. 24

                                                                                              I think the more likely answer is that making programming approachable is a lot harder than we think it is.

                                                                                              1. 3

                                                                                                What do you think about this essay which argues that things like Visual Basic and HyperCard were on the right track, but then the late 90s web boom (and, according to a later essay, open source), halted progress in that area?

                                                                                                1. 8

                                                                                                  I’m not hwayne, but I agree with him—it’s a lot harder than we think it is. Basically, programming requires tracking detail, enough detail that would daunt most people. Witness the number of articles about fallacies that programmers (people trained to track such details) make about human names, addresses, phone numbers or dates, just to name a few areas.

                                                                                                  Here’s my question to you—how do you define “computer literacy”?

                                                                                                  1. 7

                                                                                                    Poppycock. There are few imaginary products I can think of that would be more valuable to their creator than the “AI that replaces programmers”, it’s just not something we have any idea how to do.

                                                                                                    Small parts of programming do get automated over the years, with things like garbage collection and managed runtimes, but so far this has always led to an increase in the kinds of tasks we expect computers to handle, rather than doing the same basic tasks with fewer programmers. This makes sense, because it gives the business an advantage over competitors in whatever their core business happens to be. The companies that survive would rather do more and charge more / get more customers than do the same for slightly less.

                                                                                                    1. 2

                                                                                                      and, according to a later essay, open source

                                                                                                      That essay seems to confuse open source with not charging money for things…

                                                                                                      1. 1

                                                                                                        First of all, I’ll say that I agree with hwayne and think that’s the primary reason we don’t have many non-programmer friendly coding/automation tools.

                                                                                                        The first essay you linked alludes to this, but I think the point should be emphasized: there’s an incentive mismatch between programmers and end-users. Programmers often like to program because they enjoy the act of programming. Look at how many links we get on this forum of programmers waxing poetic about the joys of TUIs, architectural simplicity, or networks run for and by skilled operators. These are all things that are immaterial to, or even detrimental toward, the user experience of a non-programming SME. Even in today’s world of skilled programmers running large cloud systems, programmers still complain about how much they need to accommodate the whims of non-technical users.

                                                                                                        This isn’t unique to programming. Trades folks in a lot of trades often talk shop about better access platforms/crawl spaces, higher quality parts, more convenient diagnostic tools, and other stuff that non-tradespeople would find spurious expenses/concerns that sometimes may even make the tradesperson’s work less aesthetic (say in a residence.) I think there are many complicated factors that make this incentive mismatch worse in programming than in trades. As long as this incentive mismatch exists, I think you’ll only see limited progress toward non-technical programming accessibility.

                                                                                                    2. 13

                                                                                                      Having been in the position of “software engineer for SMEs” a few times… Making really good software that you would actually want to use in production is a craft, a skill of its own, and one that takes a lot of time and work to learn. Most software people are interested in software for its own sake, because the craft is fun. Most SMEs are not, and so they will learn as much as is necessary to bang together a solution to their problem, and it doesn’t really matter how nasty it is. They want to be working on their subject matter, not understanding cache lines or higher order functions.

                                                                                                      We can rephrase the question: “Why haven’t we yet made woodworking approachable enough that the people who use furniture can be the carpenters rather than having to delegate to carpenters?” Sure, if you are actually interested in the process of building furniture then you can make lots of amazing stuff as a non-professional, and there’s more sources out there than ever before for an interested novice getting started. But for most people, even assembling IKEA furniture is more work and suffering than they really want to expend.

                                                                                                      1. 1

                                                                                                        I think the whole idea is to make the “bang together something that solves the problem” option more possible and more common.

                                                                                                        So many people spend so much of their lives using computers to manually do trivially automated things, but the things are all too bespoke for a VC funded startup to tackle making a “product”.

                                                                                                        1. 3

                                                                                                          This works pretty well as long as the tools those people build are only used by that person. Which is pretty important! The problem appears when someone’s bespoke little tool ends up with its tendrils throughout an organization, and now suddenly even if it isn’t a “product” it is essential infrastructure.

                                                                                                          1. 2

                                                                                                            I think that’s actually a good thing / goal, and work on “making programming accesible” should work on reducing the ways in which that is a problem.

                                                                                                            Note that “a dev with a stick up their ass seeing it will say mean things” is not by itself a valid problem for anyone but that dev ;)

                                                                                                      2. 5

                                                                                                        I would say it’s for the same reason why programmers can’t be the domain experts; expertise in any field takes time, effort and interest to develop.

                                                                                                        For example, a tax application where all the business rules were decided by developers and a tax application developed by accountants would probably both be pretty terrible in their own ways.

                                                                                                        1. 4

                                                                                                          A lot of the other responses I almost entirely agree with, but to add my own experience:

                                                                                                          I’ve been a part of some implementations of these types of tools, and also read a lot about this subject. Most people building these tools aren’t building “programming that’s easy for non-developers” but rather “I find ____ easy, so I’m going to remove features so that it’s more approachable.” This also leads a lot to either visual programming languages, which don’t directly solve the complexity issues, or config languages, which lack the necessary surface area to be usable for many tasks.

                                                                                                          A prior team of mine tried to go down the config route, building out 2 different config languages that “can be used by managers and PMs to configure our app so that we can focus on features.” Needless to say, that never happened. No one did any research on prior attempts to build these types of languages. No one tested with PMs and managers. It ended up being built by-devs-for-devs.


                                                                                                          There’s also this idea that floats around software that somehow simpler languages aren’t “real” languages, so they often get a lot of hate. For many years I’ve heard that Go isn’t for real devs, that it’s only for stupid Google devs who can’t be bothered to learn a real language like Java. JS is still considered by many to be a joke language because it’s for the web, and “real” developers program servers, desktops, and mobile. Way back in the day, Assembly was for the weak; “real” devs wrote out their machine code by hand/punch card. Unless we can overcome that fixation on what a “real” programming language is, we’ll likely continue to struggle to find and build accessible languages.


                                                                                                          One of the few people I know writing about the approachability of programming, and attempting to actually build it, is Evan C. I won’t claim that Elm is perfect, and I do think we can do better, but Evan has worked very hard to make it approachable. So much so that both its error-message approach and its Elm Architecture have permeated many other languages and frameworks without people realizing it.

                                                                                                          The Syntax Cliff

                                                                                                          When you start learning a programming language, how much time do you spend stuck on syntax errors? […] how many people do not make it past these syntax errors?

                                                                                                          Compilers as Assistants

                                                                                                          Compilers should be assistants, not adversaries.

                                                                                                          Compiler Errors for Humans

                                                                                                          Most terminal tools came into existence well before our industry really started focusing on making apps and websites feel great for their users. We all collectively realized that a hard to use app or website is bad for business, but the same lessons have not really percolated down to tools like compilers and build tools yet.

                                                                                                          1. 2

                                                                                                            The answer to me comes down to time. I can gather requirements, speak to stakeholders, and organize projects, or I can write code, test code, and deploy code. I do not have the time (or attention span, really) for both.

                                                                                                            1. 2

                                                                                                              People have been trying this for a very long time. It results in very bad programs. The idea that programming can be magic’d away and we can fire all the programmers is held only by marketing departments and (for some reason) a few programmers.

                                                                                                            1. 3

                                                                                                              While this is intended for web content (it’s a collaboration between Google, Microsoft, and Yandex), it’s interesting as a general data-modeling project, too.

                                                                                                              1. 8

                                                                                                                Each day we stray further from Codd’s light.

                                                                                                                1. 3

                                                                                                                  These seem like higher-level descriptions. I don’t see any reason you can’t use them in a relational DB after some straightforward normalization.

                                                                                                                2. 1

                                                                                                                  Somewhere, William Kent clenches his fist.